ChatGPT Experiment follow-up

Did you try to look at my experiment lately? Did it time out or give you a Bad Gateway?


Bad Gateway

Well, read on if you want to know why!

Picture this: I’ve got myself a fancy development instance of a demo application built with ChatGPT. Oh, but hold on, it’s not hosted on some magical cloud server. Nope, it’s right there, in my basement, in my own home! I’ve been using some dynamic DNS from no-ip.com. Living on the edge, right?

Now, here’s where it gets interesting. I had the whole thing running on plain old HTTP!!! I mean, sure, I had a big red disclaimer saying it wasn’t secure, but that just didn’t sit right with me. So, off I went on an adventure to explore the depths of NGINX. I mean, I kinda-sorta knew what it was, but not really. Time to level up!

So, being the curious soul that I am, I started experimenting. It’s not perfect yet, but guess what? I learned about Let’s Encrypt in the process, and now I have my very own HTTPS setup with a valid certificate – still in the basement! Who’s insecure now? (BTW, huge shoutout to leangaurav on medium.com; theirs is the best tutorial on this topic out there!)
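If you’re curious, the certificate part boils down to something like this. It’s a rough sketch of the Certbot webroot approach, not my exact commands: the hostname is a placeholder, and the challenge directory is the same one you’ll see mounted into the NGINX container further down.

# Issue a certificate using the webroot challenge.
# /tmp/acme_challenge is the directory NGINX serves the challenge files from.
sudo certbot certonly --webroot \
  -w /tmp/acme_challenge \
  -d myapp.ddns.net   # placeholder: your dynamic DNS hostname

# Certificates land under /etc/letsencrypt/live/<hostname>/,
# and renewals can be rehearsed with:
sudo certbot renew --dry-run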

As if that was not enough, I decided – AT THE SAME TIME – to also scale up the landscape.

See, I’ve been running the whole stack on Docker containers! It’s like some virtual world inside my mini PCs. And speaking of PCs, my trusty Ryzen 5 5500U wasn’t cutting it anymore, so I upgraded to a Ryzen 7 5800H with a whopping 32GB of RAM. Time to unleash some serious power and handle that load like a boss!

Now, you might think that moving everything around would be a piece of cake with Docker, but oh boy, was I in for a ride! I dove headfirst into the rabbit hole of tutorials and documentation to figure it all out. Let me tell you, it was a wild journey, but I emerged smarter and wiser than ever before.

Now, I have a full stack that seems to somewhat work, even after reboot (haha!).

Let me break down the whole landscape – at a very high level (in the coming days and weeks, if I still feel like it, I will detail each step). The server is a Trigkey S5. I have the 32GB variant running on a 5800H, which went on sale for $400 CAD on Black Friday – quite a deal! From my research, it’s the best bang for the buck. It’s a mobile CPU, so it’s very good energy-wise, but of course, don’t expect to play AAA games on this!

Concerning my environment, I use Visual Studio Code on Windows 11 with WSL enabled. I installed the Ubuntu WSL distribution, just because that’s the one I am comfortable with. I have a Docker compose file that consists of:

  • NodeJs Backend for APIs, database connectivity, RBAC, etc.
  • ViteJs FrontEnd for, well… everything else you see as a user 🙂
  • NGINX set up as a reverse proxy – this is what makes it work over HTTPS

This is what my compose file looks like:

version: '3'

services:

  demo-backend:
    container_name: demo-backend
    build:
      context: my-backend-app
    image: demo-backend:latest
    ports:
      - "5051:5050"
    environment:
      - MONGODB_URI=mongodb://mongodb
    networks:
      - app
    restart: always

  demo-frontend:
    container_name: demo-frontend
    build:
      context: vite-frontend
    image: demo-frontend:latest
    ports:
      - "3301:3301"
    networks:
      - app
    restart: always

  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    ports:
      - "27017:27017"
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network

This method is very convenient for running on my computer during the development process. I have a modified compose file without NGINX and with different ports, which makes it easier for me to make changes and test them. When I’m satisfied, I switch to the compose file that uses the ports forwarded in my router. I use “docker compose down” followed by “docker compose up” to update my app, as sketched below.
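Concretely, the update cycle is just a couple of commands. The file names below are only examples of how the two compose files might be split; adapt them to your own naming.

# Development: the compose file without NGINX, on local-only ports
docker compose -f docker-compose.dev.yml up --build

# "Production" in the basement: tear down, rebuild, and restart the stack
# whose ports are forwarded in the router
docker compose -f docker-compose.yml down
docker compose -f docker-compose.yml up -d --build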

This is the NGINX compose file. The interesting part is that I use the same network name here as in the previous compose file. When I do this, NGINX can communicate with the containers from the previous docker compose file. Why did I do this? Well, I have this demo project running, and I’m working on another project with different containers. With some configuration, I will be able to leverage my SSL certificates for both solutions (or as many as I want), while keeping one compose file per project. This will be very handy! (I’ll sketch what the proxy configuration itself looks like right after the compose file.)

version: "3.3"

services:
  nginx:
    container_name: 'nginx-service'
    image: nginx-nginx:stable
    build:
      context: .
      dockerfile: docker/nginx.Dockerfile
    ports:
      - 3000:3000
      - 3030:3030
      - 5050:5050
    volumes:
      - ./config:/config
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - /tmp/acme_challenge:/tmp/acme_challenge
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network
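The compose file only wires things together; the actual HTTPS magic lives in the NGINX configuration itself. I won’t paste my full config, but the core idea looks roughly like this. It’s a simplified sketch, not my exact setup: the hostname is a placeholder, and I picked one of the published ports and the backend service as the example upstream.

# Sketch of a server block: terminate TLS and proxy to the backend container.
# "demo-backend" resolves by service name because NGINX sits on shared_network.
server {
    listen 5050 ssl;                 # one of the ports published in the compose file above
    server_name myapp.ddns.net;      # placeholder: your dynamic DNS hostname

    ssl_certificate     /etc/letsencrypt/live/myapp.ddns.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.ddns.net/privkey.pem;

    location / {
        proxy_pass http://demo-backend:5050;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Because NGINX sits on shared_network, it can reach demo-backend (and any other container on that network) by service name. The read-only /etc/letsencrypt mount gives it the certificates without copying them into the image, and /tmp/acme_challenge is where the Let’s Encrypt webroot challenge files get served from. If you want to double-check that both stacks really share the bridge, “docker network inspect shared_network” will list the demo containers and nginx-service side by side.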

Of course, this is not like a cloud provider; my small PC can die, there is no redundancy; but for development and demo purposes? Quite efficient (and cheap)!

As of last night, my stuff runs on HTTPS, and should be a bit more reliable moving forward. The great part about this whole experiment is how much I learned in the process! Yes, it started from ChatGPT. But you know what? I never learned so much so fast. It was very well worth it.

I will not claim to be an expert in all of this, but I feel I now know a lot more about:

  • ChatGPT. I learned some tricks along the way on how to ask and what to ask, as well as how to get out of infinite loops.
  • NodeJS and Express. I knew about them, but now I better understand the middleware concepts and connectivity, and I have built some cool APIs.
  • ViteJs. This is quite the boilerplate for getting a web app up and running.
  • Expo and React-Native. This is a parallel project, but I built some nice stuff I will eventually share here. If you want to build Android and iOS apps using React-Native, this framework works great. Learn more on Expo.dev.
  • GitLab. I tried this for the CI/CD capabilities and workflows… Oh my! With Expo, this came in handy!! Push a commit, merge, build, and deploy through EAS! (On the flip side, I reached the limits of the free tier quite fast… I need to decide what I’ll be doing moving forward.) On top of it, I was able to store my containers in their registry, making it even more practical for teamwork!
  • Nginx. The only thing I knew before was that it exists and has to do with web servers. Now I know how to use it as a reverse proxy, and I am starting to feel that I will use it even more in the future.
  • Docker & Containerization. Also another one of those “I kind of know what it is” topics… Now I have played with containers and docker compose, and I am only starting to grasp the power of it.
  • Let’s Encrypt. I thought I understood HTTPS. I am still no expert, but now I understand a lot more about how this works, and why it works.
  • Certbot. This is the little magic mouse behind the whole HTTPS process. Check it out!
  • MongoDb. I played with some NoSQL in the past. But now… Oh, now. I love it. I am thinking I prefer this to traditional SQL databases. Just because.

A final note on ChatGPT (since this is where it all started):

The free version of this powerful AI is outdated (I don’t want to pay for this – not yet). This resulted in many frustrations – directives that simply wouldn’t work. I had to go back to Googling what I was looking for. It turns out that although ChatGPT will often cut the time down by quite a margin, the last stretch is yours. It is not yet at the point where it can replace someone.

But it can help efficiency.

A lot.
