This whole article started as an attempt at sharing the steps to get a free “no-cloud” platform for continuous integration and continuous deployment. What triggered it? All the time I spent running npm install, npm build, npm publish left and right, forgetting one, running npm test, oops, forgetting an npm test and doing a docker build anyway… Damn, that was a time-consuming and lousy way to work.
I want to code. I don’t want to do this. Let’s separate the concerns: I code, the server does the rest. Sounds good? We call this separation of concerns, at a whole new level!
What is separation of concerns?
Short answer: Do what you are supposed to do and do it right.
Not so short answer: It is the principle of breaking down a complex system into distinct, independent parts or modules, each addressing a specific responsibility or concern, to promote modularity, maintainability, and scalability. It is usually applied at many (if not all) levels, like architecture, component, class, method, data, presentation, testing, infrastructure, deployment, etc.
… and why should you care? (Why it matters)
My article in itself? It doesn’t really matter, and you shouldn’t care. Unless, that is, you find yourself in this situation where you hear about continuous integration and deployment, but you don’t really know where to start. Or if you have your own software you’re working on and want to take it to the next level. Or just because you like reading my stuff, who knows!
I recently started to flip this blog into a melting pot of everything I face on this personal project. Eventually, I may even release something! And then, we can have a nice history of how it got there! For the posterity!
Anyway, I am digressing from the original intent… I want to share the journey I went through to get a working environment with Jenkins and Verdaccio. I think it is a great platform for startups that can’t or won’t pay for cloud hosting just yet (or that avoid it for privacy reasons) but still want to achieve some level of automation.
As a reference, I’m sharing the challenges I am facing with a personal project consisting of a Node backend, a React frontend, and a background server, and how I tackle these challenges using modern tools and techniques.
I want to try something backward. Conclusion first! I completed the setup, and it works. It was painful, long, and not fun at some points.
But look at the above screenshot! Every time I push to one of my GitHub repos, it triggers a build and the tests. In one case, it even publishes to my private package registry with the develop dist-tag! Isn’t it magical?
If you are interested in how I got there, tag along. Otherwise, have a great day! (still, you should meet my new friend, Jenkins)
Before we begin, here are some definitions (if you know what these are, just ignore).
You never know who will read your articles (if anyone). Play dumb.
Definitions
Continuous Integration (CI) and Continuous Deployment (CD): CI/CD are practices that automate the process of building, testing, and deploying software applications. CI ensures that code changes from multiple developers are regularly merged and tested, while CD automates the deployment of these tested changes to production or staging environments.
Node.js: Node.js is a runtime environment that allows developers to run JavaScript code outside of a web browser. It’s commonly used for building server-side applications, APIs, and real-time applications.
Docker: Docker is a platform that simplifies the process of creating, deploying, and running applications using containers. Containers are lightweight, standalone executable packages that include everything needed to run an application, including the code, runtime, system tools, and libraries.
Containers: Containers are isolated environments that package an application and its dependencies together. They provide a consistent and reproducible runtime environment, ensuring that the application runs the same way regardless of the underlying infrastructure. Containers are lightweight and portable, making them easier to deploy and manage than traditional virtual machines.
Let’s Begin!
Project Structure
- Project 1: React Frontend
- Project 2: Common backend (most models, some repos and some services)
- Project 3: Node Backend (API)
- Project 4: Node Worker (processing and collection monitoring)
Environments
- I run development on whatever machine I am using at the moment, with nodemon and Vite in development mode
- I build & run Docker containers on my Linux server with docker compose (3 Dockerfiles and 1 docker-compose file)
- I have an NGINX reverse proxy on the same server for SSL and dynamic IP (No-IP)
Objective
- Achieve full CI/CD so I can onboard remote team members (and do what I like: code!)
If I am successful, this will give me a robust and scalable development workflow that streamlines the entire software development life cycle, from coding to deployment, for my project. I think that in the enterprise, with similar tools, this proactive approach lays a good foundation for efficient collaboration, rapid iteration, and reliable software delivery, ultimately reducing time-to-market and increasing overall productivity.
Couple of additional challenges
- Challenge #1: Private Package Management. NPM Registry is public. I want to keep my projects private; how can I have a package for the common backend components?
- Challenge #2: Continuous Integration (CI). How can I implement a continuous integration pipeline with this? Code is on Github, registry will be private… How do I do that?
- Challenge #3: Continuous Deployment (CD). How can I implement a continuous deployment process? I will need to automate some testing and quality gates in the process, so how will that work?
- Challenge #4: Future Migration to Continuous Delivery (CD+). Can I migrate to continuous delivery in the future (you know, if I ever have customers?)
- Challenge #5: Cloud Migration / readiness. When / if this becomes serious, can my solution be migrated to a cloud provider to reduce hardware failure risk?
With this in mind, I have decided to embark on a journey to set up a stable environment that addresses each of these challenges. Will you take the blue pill and go back to your routine, or the red pill and follow me down the rabbit hole?..
Starting point: bare metal (target: Ubuntu Server, Jenkins, Verdaccio, Docker)
I have this Trigkey S5 mini PC, which I think is pretty awesome for the price. It comes with Windows 11, but from what I read everywhere, to host what I want, I should go with a Linux distro. So there I go: I install some Linux distro from a USB key and cross my fingers it boots…
I went with Ubuntu Server (24.04 LTS). BTW, the mini PC is a Ryzen 5800H with 32GB RAM, which should be sufficient for a while. On it, I installed the following – pretty straightforward, and you can find many tutorials online, so I won’t go into detail (though a minimal sketch follows the list):
- Docker (engine & compose)
- Git
- Cockpit (this makes it easier for me to manage my server)
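For reference, here is a minimal sketch of how that installation can go on Ubuntu Server. It is only one possible way; check the official docs for each tool before copying it.

# install curl, Git and Cockpit from the Ubuntu repositories
sudo apt update
sudo apt install -y curl git cockpit
# install Docker Engine and the compose plugin via Docker's convenience script
curl -fsSL https://get.docker.com | sudo sh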
I also have an NGINX reverse proxy container. You can google something like nginx reverse proxy ssl letsencrypt docker and you’ll find great tutorials on setting this up as well. I may write another article later when I reach that point for some items (if required in this journey). But really, that’s gravy at this stage.
Install and Configure Jenkins
Jenkins is an open-source automation server that provides CI/CD pipeline solutions. From what I could gather, we can use Jenkins Docker image for easy management and portability, and it strikes a good balance between flexibility and simplicity. Disclaimer: it is my first experiment with Jenkins, so I have no clue how this will come out…
1. Let’s first prepare some place to store Jenkins’ data:
sudo mkdir /opt/jenkins
2. Download the Docker image:
sudo docker pull jenkins/jenkins:lts
3. Create a docker-compose.yml file to run Jenkins:
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - ./jenkins:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock
4. And launch it: sudo docker compose up -d
Et voilà! Jenkins seems to be running:
5. Since I mounted ./jenkins as my Jenkins home, I can just run cat jenkins/secrets/initialAdminPassword to get the initial admin password and continue. (For some reason, I had to paste it and click continue twice before it worked.)
I went with the recommended plugins to begin with. According to the documentation, we can easily add more later.
Install and Configure Verdaccio
Verdaccio will be my private package management registry. To install it, I just created a docker compose file, set up some directories, and boom (a sketch of the config file follows the compose file).
version: '3.8'
services:
  verdaccio:
    image: verdaccio/verdaccio
    container_name: verdaccio
    ports:
      - "4873:4873"
    volumes:
      - ./config:/verdaccio/conf
      - ./storage:/verdaccio/storage
      - ./plugins:/verdaccio/plugins
      - ./packages:/verdaccio/packages
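Since the compose file mounts ./config as /verdaccio/conf, that is where Verdaccio looks for its config.yaml. Here is a minimal sketch of what such a config can look like; the @myscope scope and the rules are placeholders, not my actual setup, so adapt them to your own packages:

# ./config/config.yaml (mounted as /verdaccio/conf/config.yaml) - minimal sketch
storage: /verdaccio/storage
auth:
  htpasswd:
    file: /verdaccio/conf/htpasswd
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
packages:
  '@myscope/*':
    # private packages: only authenticated users can see or publish them
    access: $authenticated
    publish: $authenticated
  '**':
    # everything else is proxied from the public npm registry
    access: $all
    publish: $authenticated
    proxy: npmjs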
Run the registry with sudo docker compose up -d and that’s it.
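To actually consume and publish packages against it, the projects need to point npm at that registry. A quick sketch of the commands involved (my-server is a placeholder for whatever host and port Verdaccio is reachable at):

# point npm at the private registry (replace my-server with your host or IP)
npm config set registry http://my-server:4873/
# create a user / log in against Verdaccio
npm adduser --registry http://my-server:4873/
# publish the shared package with a dist-tag, e.g. develop
npm publish --tag develop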
Let’s put all of this together and create our first pipeline! – inspired by Build a Node.js and React app with npm (jenkins.io)
Problem 1 – Github & Jenkins ssh authentication
Well, I was not prepared for this. I spent a lot of time on it: since Jenkins runs in a container, it does not share everything with the host, and somehow, adding my private keys in Jenkins did not add GitHub to the known hosts. So I had to run these commands:
me@server:~/jenkins$ sudo docker exec -it jenkins bash
jenkins@257100f0320f:/$ git ls-remote -h git@github.com:michelroberge/myproject.git HEAD
The authenticity of host 'github.com (140.82.113.3)' can't be established.
ED25519 key fingerprint is SHA256:+...
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com' (ED25519) to the list of known hosts.
After that, this worked. It was only later that I found I needed to pass --legacy-auth to npm login when running in headless mode. Moving forward, that won’t be a problem anymore.
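In hindsight, a non-interactive way to get the same result would be something like the following, assuming ssh-keyscan is available in the container (it should be, since git over SSH worked there):

# append GitHub's host keys to the jenkins user's known_hosts inside the container
sudo docker exec jenkins bash -c 'mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts'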
Problem 2 – npm not found
Who would have thought! Node is not installed by default in the Jenkins image; you need to add the NodeJS plugin. Once added, a typical pipeline needs to declare it. Something like:
pipeline {
    agent any
    tools { nodejs "node" }
    stages {
        stage('Install') {
            steps {
                sh 'npm install'
            }
        }
        stage('Build Library') {
            steps {
                sh 'npm run build:lib'
            }
        }
        stage('Build Application') {
            steps {
                sh 'npm run build:app'
            }
        }
    }
}
The tools section is what matters here. I named my NodeJS installation node, hence the "node" in the tools block. Now that I have played with it, I understand why: it lets me keep several Node versions configured and pick the one I want in each pipeline. Neat!
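For example, if I later configure a second NodeJS installation in Jenkins and name it node-20 (a hypothetical name), a pipeline that needs that specific version would just declare:

tools { nodejs "node-20" }  // must match the name of a NodeJS installation configured in Jenkins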
And finally, I got something happening:
First step achieved! Now I can add an npm run test stage to have my unit tests run automatically.
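That is just one more stage in the same pipeline; a sketch, assuming the repo exposes a test script:

stage('Test') {
    steps {
        sh 'npm run test'
    }
}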
This is nice, but it is not triggered automatically yet. Since I use GitHub, I can leverage webhooks through the GitHub hook trigger:
Then all I need is to add a webhook in GitHub that points to https://<my-public-address-for-jenkins>:<port>/github-hook/ and that’s it!
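On the Jenkins side, the job also has to react to those pushes. I ticked the box in the job configuration, but if you prefer to declare it in the Jenkinsfile, the GitHub plugin exposes (as far as I can tell) a githubPush trigger for declarative pipelines; a sketch:

pipeline {
    agent any
    tools { nodejs "node" }
    // fire this pipeline whenever the GitHub webhook reports a push
    triggers { githubPush() }
    stages {
        stage('Build') {
            steps {
                sh 'npm install && npm run build:app'
            }
        }
    }
}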
The result
With this, I can now build a fully automated CI pipeline. Now, what is fully automated? That’s where the heavy-lifting begins. I will be exploring and reading about it more in the next weeks, but ultimately, I want to automate this:
- Develop branch CI – when I push anything to the develop branch, run the following tasks (a sketch of what this pipeline could look like follows the list):
  - pull from develop
  - install (install dependencies)
  - build (build the repo)
  - test (run the unit tests, API tests, e2e tests, etc. depending on the repo)
- Staging branch CD – when I do a pull request from develop into the staging branch, run the following tasks:
  - pull from staging
  - install
  - build
  - test (yes, same again)
  - host in a testing environment (docker)
  - load tests (a new suite of tests to verify response under heavy load)
  - I will then do “health checks”, analyze, and decide if I can/should do a pull from staging into main.
- Main branch CD – when I do a pull request from staging into main, run the following tasks:
  - pull from main
  - install
  - build
  - test (of course!)
  - host in the staging environment (docker)
  - do some checks, then swap spots with the current production docker
  - take down the swapped docker
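To make the develop branch bullet concrete, here is a rough sketch of what that pipeline could look like. The branch name, credentials ID, and npm scripts are placeholders for whatever each repo actually uses:

pipeline {
    agent any
    tools { nodejs "node" }
    // run on every push reported by the GitHub webhook
    triggers { githubPush() }
    stages {
        stage('Checkout') {
            steps {
                // 'github-ssh' is a placeholder credentials ID configured in Jenkins
                git branch: 'develop', credentialsId: 'github-ssh', url: 'git@github.com:michelroberge/myproject.git'
            }
        }
        stage('Install') {
            steps {
                sh 'npm install'
            }
        }
        stage('Build') {
            steps {
                // placeholder; in my case this is build:lib or build:app depending on the repo
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}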
The reason I keep some manual tasks (step 3) is that I want to handle build candidates the old way for now. When I introduce additional automated test suites, I will probably enhance the whole thing.
By implementing these automated CI/CD workflows, I hope to achieve the following benefits:
- Faster Feedback Cycles: Automated testing and deployment processes provide rapid feedback on code changes, allowing developers to quickly identify and resolve issues. I hope I won’t be the only developer forever on this project!
- Early Detection of Issues: Continuous integration and testing catch defects early in the development cycle, preventing them from propagating to later stages and reducing the cost of fixing them.
- Efficient and Reliable Deployments: Automated deployment processes ensure consistent and repeatable deployments, reducing the risk of human errors and minimizing downtime.
- Improved Collaboration: Automated workflows facilitate collaboration among team members by providing a standardized and streamlined development process.
This is also something that will help me in my professional life. I kind of knew about CI/CD, but I always relied on others to set it up. Now I will at least better understand what is happening and the impact behind it. I love learning!
And guess what: this approach aligns with industry best practices for modern software development and delivery, including:
- Separation of Concerns: Separating the frontend, backend, and worker components into different projects promotes maintainability and scalability.
- Continuous Integration: Regular integration of code changes into a shared repository, along with automated builds and tests, ensures early detection of issues and facilitates collaboration.
- Continuous Deployment: Automated deployment processes enable frequent and reliable releases, reducing the risk of manual errors and accelerating time-to-market.
- Test Automation: Comprehensive testing strategies, including unit tests, API tests, end-to-end tests, and load tests, ensure high-quality software and catch issues early in the development cycle.
- Containerization: Using Docker containers for deployment ensures consistent and reproducible environments across development, testing, and production stages.
To me, this experiment demonstrates the importance of proactively addressing challenges related to project organization, package management, and automation in software development, sooner rather than later. With tools like Jenkins, Verdaccio, and Docker, I have laid the groundwork for a robust and scalable CI/CD pipeline that facilitates efficient collaboration, rapid iteration, and reliable software delivery.
As my project evolves, I plan to further enhance the automation processes, ensuring a smooth transition to continuous delivery and potential migration to cloud providers.
Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained—on the contrary!—by tackling these various aspects simultaneously. It is what I sometimes have called “the separation of concerns”, which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts, that I know of. This is what I mean by “focusing one’s attention upon some aspect”: it does not mean ignoring the other aspects, it is just doing justice to the fact that from this aspect’s point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously.
Edsger W. Dijkstra – 1974 paper “On the role of scientific thought”
Moving forward, I will be adding new pipelines as they become required.
Hope you learned something today!