Sterling has too many projects Blogging about programming, microcontrollers & electronics, 3D printing, and whatever else...

My Personal Ecosystem

| 3556 words | 17 minutes | aws kubernetes golang github prometheus ghost email tools
Picture of a colorful artificial reef inside an aquarium featuring clown fish and anemones.

I want to document for myself and for anyone who might be interested (probably mostly me, but maybe a prospective employer will find this interesting) the current state of my personal ecosystem of software, devops, and what I consider state of the art within my personal projects. This article is primarily concerned with ops, but will touch on aspects of software development too.

I will have to divide this up into major bits. I build a lot of stuff for myself, just in dribs and drabs of spare time I steal from when I probably ought to be sleeping or playing video games. So, this is a list of what I have going and how I make it go. Let’s start by listing the products themselves. Then, I’ll describe the overall architecture of my system. Finally, I’ll break down how each project fits into that architecture.

Projects List (Briefly)

  • OpenScripture.Today: This is my newest project, but something I’ve been planning for many years.
  • Bethlehem Revisited: A ticketing site I built for a local walkthrough nativity.
  • Hugo: I run four different Hugo-based web sites for myself: one for a small (but sometimes larger) group of folks that meets for lunch every week, one for a 3D printing business I started but have never tried to market, a more generic site for my (rare) consulting work, and this blog.
  • STFL: I maintain a small site recording Sterling’s Truths for Life, a set of aphorisms I’ve collected for myself over the past decade or so.
  • Wordpress: I run four WordPress sites for others. Personally, I despise WordPress, but maintaining WordPress sites for others is not hard now that I’ve figured out how to make it work. This includes my son’s podcast, the Coop my wife runs, pro bono work I do for Representative Mike Dodson, and my son’s old blog.
  • Kidbank: A tiny little banking app I created to help track allowances and mercenary payments to my children for extra chores.
  • Static: I host a static web site containing my sister’s old portfolio.
  • Yukki: A wiki app I wrote a long time ago that I still have some things in that I care about.

Architecture Overview
The overview of the architecture in quick bullets is:

  • AWS as host
    • Three to five nodes running in EC2
    • RDS running my MySQL server(s)
    • DynamoDB as a NoSQL backend for some scripting and other occasional tidbits
    • Route53 for managing my DNS (though I use a different registrar for most of my domain purchases)
    • S3 for storage and file caching (I do not usually use it for serving files, but that is something I’m considering for one project)
    • ECR for some package hosting
    • CodeBuild for old CICD that I haven’t migrated to Github
    • And a couple other small bits and pieces that I might mention in the project descriptions later
  • Github as code repository and CICD engine
    • In the past I’ve made use of CodeBuild, Travis CI, and CircleCI, so I may still have some projects there (I know I have some Travis CI stuff, but I think all of that has fallen into red status and needs to be migrated).
    • All of my code is hosted here and I have several public and private repositories
    • All my latest projects get CICD through Github actions
  • A mail service for handling incoming email, where such a thing is needed.
  • Terraform is used to provision all my AWS services. I want to use it to deploy some other bits, but I haven’t gotten a Round Tuit yet.
  • Kubernetes setup on EC2 via Kops is used to configure and run my nodes, pods, deployments, config, etc.
  • Setup-Cluster/Genifest is tooling I’ve built to help generate my Kubernetes manifests, which works a bit like Kustomize, but I started building it long before I knew anything like it existed. My system is very lightweight and is basically just Go templates + a small number of custom functions that serve my needs.
  • ArgoCD is configured to perform actual deployments of software.
  • Sealed-secrets+Ghost is what I use for secrets management. Ghost is a password management tool of my own design that can communicate with LastPass, KeePass, and my other stuff.
  • External DNS for managing DNS
  • Istio+Envoy Proxy for Ingress. I have this setup in as lightweight a way as possible.
  • Cert-manager with Let’s Encrypt for certificate management.
  • Twingate ZTN for securely communicating with my back ends from my local machines.
  • Prometheus and Grafana tooling for monitoring and alerting.

And that’s a lot to take in. I could go into each bit in detail, but I think I’ll just stick to the bits that are fairly unique to me and my setup. This setup is a bit eclectic. This is a consequence of me only working on this with “dribs and drabs” of spare time stolen from when I should probably be sleeping or would be playing video games. It’s also because I treat my personal projects as a home lab to experiment in. If we’re trying something out at work and one of my coworkers is driving the project, but it sounds useful or interesting, I configure it for myself to try it out. Sometimes things like that stick around long after because I just don’t find the time to fix them. This is not much different from my experience working in devops professionally, where proofs of concept often last much longer than anyone ever anticipated.


I do not actively manage my cluster. In fact, I rarely touch most of it. Therefore, I have a couple requirements:

  1. It has to be self-sustaining and if it fails, I want fixing that failure to be as easy as possible. Usually, I simply kill pods, restart deployments, or restart nodes to fix problems. I don’t investigate problems. I just nuke and let my setup rebuild itself. This is a wonderful way to work and it almost always just works. Once or twice a year, I spend a few hours doing maintenance, upgrading the infrastructure, etc.
  2. It has to be self-documenting. Sterling’s Truths for Life #126: Any code written by me more than 6 weeks ago might as well have been written by someone else. I may not even have the same skills six months later and I have probably forgotten more in the last two years than my younger self learned in ten years as things move so fast these days. The beauty of Kubernetes is that it lays out in nice neat configuration: This is what you are running and this is what is required to run it. Wonderful.

However, there are a lot of pieces to a Kubernetes manifest that are stupid or incomplete. Ideally, there’d be about 3 more layers of abstraction on top of Kubernetes, but I don’t have the money to shell out for the sorts of systems that provide those layers of abstraction (either in compute or in licensing costs as many such systems are commercial).

Setup-Cluster
As such, I created a tiny tool that has slowly grown in size that I call setup-cluster. This tool is private to me and is basically a collection of expedient horrors. I won’t share it. Originally, it did everything from helping with kops upgrades to performing server-side applies of manifests to running Terraform for me. I have slowly reduced its responsibilities to just helping me run Terraform and generating deployable manifests, the latter being handled by Genifest.

Genifest
Genifest is a less horrible tool and one that I was willing to make publicly available. This is the templating tool that takes a set of source YAML files and performs a number of modifications on them to turn them into deployable files. Then, I use ArgoCD to perform the actual deployment. It works by taking the YAML files in subdirectories of manifests/source, cutting and pasting them into manifests/deploy, and templating any variables needed along the way. It may also perform certain transformations of the YAML based upon annotations. These changes, once committed, result in ArgoCD performing an automatic deployment. Because of the way it works, it also happens to be a pretty good validator of the manifests, so it catches formatting mistakes and misplaced fields.

Sealed-Secrets with Ghost

Sealed-secrets is a lightweight secret management system that allows secrets to be stored in a git repository without fully trusting your git server (i.e., Github). It does this by encrypting each secret with an RSA key pair whose private key is held only by a service running in the control plane of the cluster. That service decrypts the sealed secrets and turns them into conventional Kubernetes secrets.

Getting the secrets into the sealed-secret manifests, while still allowing me a means of rotating and updating them as needed, remains a difficulty. For part of this task, I have created a tool named Ghost. It allows me to keep backups of my LastPass database in a local KeePass database and then lets me use the local KeePass as the source for loading secrets into my cluster via the templating performed by Genifest. It’s not a perfect system, but it improves each time I run into a bug and eventually, it will be good, right?
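For illustration, a sealed secret as committed to git looks something like the fragment below; the name, namespace, and ciphertext are placeholders, and in practice the encryptedData blob is produced by the kubeseal CLI:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: ost-db        # placeholder name
  namespace: ost      # placeholder namespace
spec:
  encryptedData:
    # Opaque ciphertext (truncated placeholder); only the controller running
    # in the cluster holds the private key needed to decrypt it.
    password: AgBy3i4OJSWK...
  template:
    metadata:
      name: ost-db    # the conventional Secret the controller will create
```

Because the ciphertext is useless without the in-cluster key, this manifest is safe to commit alongside everything else.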


Alright, it is finally time to consider each project in turn. I don’t know if I will actually get through all of the projects I listed near the top, but I will cover the ones most important to me at the moment, and in that order.

OpenScripture.Today
OpenScripture.Today is a statically generated web site using custom software of my design. Sometimes I call it OST for short. It provides a scripture of the day according to my own eclectic desires. The code for this application is divided between three different repositories:

  • Private Repo: The central repository is private and it contains the code required for generating content, the source files, and the manifests for use with Genifest and for ArgoCD to deploy from.
  • Today: Today is a command-line tool I developed for looking up scripture using the ESV API; it is also becoming a client for pulling images from Unsplash, which is related to how OST works. I maintain it mostly because it helps me run OST.
  • Go ESV API: This is a low-level tool for working with the ESV API from Golang.


For this application, I have crafted two different ways of running the development environment, one ultra-lightweight that serves most purposes, and another that is slightly heavier.

  1. One is just a command, built into the application, that watches for changes to the local disk, performs the static generation whenever I save a file, and then serves the content using a micro-sized web server.
  2. The other is a docker-compose.yml configuration, which runs a prod-like environment locally, but with the static generator turned up so it refreshes more often than it does in production. This is useful for testing some of the tooling I have had to build to adhere completely to Unsplash terms and probably other bits that will be added over time.
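The second, heavier option might look something like the compose file below; the service names, images, and generator command are placeholders I’ve chosen for illustration, not the real configuration:

```yaml
# Hypothetical sketch of the prod-like local environment.
services:
  generator:
    build: .
    # Regenerate far more often than production would (illustrative flags).
    command: ["ost-generate", "--interval", "30s"]
    volumes:
      - site:/srv/site
  web:
    image: nginx:alpine
    volumes:
      - site:/usr/share/nginx/html:ro
    ports:
      - "8080:80"
volumes:
  site:
```

The shared volume stands in for the generated static content, so the web container always serves whatever the generator last produced.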


Once development is complete, I push the changes to git, usually through pull request. This triggers a couple Github actions.

  1. Test. One action runs tests to ensure that my code passes my Go linter rules and that all my unit tests are still passing.
  2. Build. The second action builds the containers and pushes them into ECR for Kubernetes to pull from. It also runs Genifest to update the manifests to refer to the new version of the code.

The build action generates code that is committed to a new branch and opens a second PR that can be reviewed and approved. Approval here makes changes that ArgoCD will notice, triggering a completely automated deploy upon merge. This is my new favorite thing.
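As a rough sketch, the build action might look something like the workflow below; the registry variable, role secret, and genifest invocation are illustrative placeholders rather than the real workflow:

```yaml
# Hedged sketch of the build action described above.
name: build
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      ECR_REPO: example.ecr.aws/ost-web   # placeholder registry/repo
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE }}   # placeholder secret
          aws-region: us-east-1
      # Build and push the new container for Kubernetes to pull.
      - run: |
          docker build -t "$ECR_REPO:$GITHUB_SHA" .
          docker push "$ECR_REPO:$GITHUB_SHA"
      # Regenerate the deploy manifests to point at the new image tag.
      - run: genifest
      # Commit the regenerated manifests to a branch and open the second PR.
      - uses: peter-evans/create-pull-request@v6
        with:
          title: "Deploy ${{ github.sha }}"
```

Merging the PR that this workflow opens is what ArgoCD ultimately reacts to.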

Content Management

In addition to development, I need a way to curate the content. This part is done through what I call the pipeline file. This file is just an alternating list of verse references and links to Unsplash images I want to pair with each verse reference. The pipeline is processed in one of two ways depending on what I want to do:

  1. I run the software locally to empty the pipeline and create the static content, which I can then commit and push myself. This will trigger the test/build actions mentioned above. This was the original mechanism I used to add new pages of content to the site.
  2. The new practice now, though, is just to push the pipeline file into the master branch. This triggers an action in Github, which runs the command I just mentioned, adds the changes to a branch, and creates a pull request with the changes.

This creates another PR that I can review and approve. That approval triggers a build and another PR which I approve again. The double approval is not something I’m super fond of, so I’m thinking about ways to combine the steps or to perform automatic approvals of one or the other.

I consider this process of making code changes that lead to PRs to be state of the art. I love gitops and the power it gives you to know exactly which code was running when. This system gives me clean, easy to follow records, and has already made troubleshooting simpler even though I’ve only had this project up for about three months.

Bethlehem Revisited (a.k.a. Gobert)

This is probably the most important project I work on in my spare time. Our church puts on a walk-through nativity called Bethlehem Revisited every December. This is a three-day event where groups of around 50 are taken one busload at a time to a nearby trail, led through by a guide who explains the nativity with dramatic scenes portrayed as a set of stations, culminating in a brief time of prayer. It’s very popular and the three-day event sees somewhere around two to three thousand visitors each year.

With groups of 50 at a time, though, we can’t just let people show up whenever. They’d all show up at 6:00pm on the Friday of the show and it would be pandemonium. As such, we provide tickets to those who want to visit. These used to be provided as paper tickets distributed at various local businesses, but in recent years, I have provided an application that allows people to receive their ticket by email.


From a devops perspective, this application is dead simple. It needs to be reconfigured once every year and only really requires a single deployment every year. Once deployed, it’s completely self-sufficient. I make all the preparatory changes for the year to the application ahead of time. Then, sometime before the ticket release date, I deploy the latest software and then I sit back and watch the Grafana dashboard show me the tickets going out.

In this case, deployment is performed via a manual run of setup-cluster still. I plan to move it to use Genifest sometime during the next few months, though. The only other configuration I have to do is make sure that SES will let me send emails. There are some Github Actions for helping make sure I run the linter and tests, but these are quite simple.

Hugo: Zostay

This web site is Hugo-based and I do development with git storing all the textual content. The image content is stored in S3. My content management process is a typical Hugo workflow with a couple extra steps:

  • I pick an image for each post from Unsplash. I use a script that resizes the image before upload and also ensures that I get the attribution metadata set up correctly.
  • Git-lfs is not something I use for various reasons. Instead I have a tool that pushes images from my local environment up to S3, which are then sync’d down during the build process.

Based on what I’ve learned from OST, I’d really like to automate the image handling completely.


Deploying the site is very simple. I just commit and push my changes via git. This runs a Github Action which checks out the repository, pulls down all the images from the S3 bucket for images, and then runs the hugo --environment production command to build the site. The site is then pushed into another S3 bucket.
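A hedged sketch of what such a workflow might look like follows; the bucket names are placeholders, and the Hugo setup step uses a community action I’ve chosen for illustration:

```yaml
# Illustrative sketch of the Hugo publish action.
name: publish
on:
  push:
    branches: [master]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: "latest"
      # Pull the image assets down before building.
      - run: aws s3 sync s3://example-zostay-images static/images
      - run: hugo --environment production
      # Push the generated site where the web server will sync it from.
      - run: aws s3 sync public/ s3://example-zostay-site --delete
```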

The web site runs a sync command that pulls the web site down from the S3 bucket every few minutes, so I don’t use anything related to Kubernetes to handle these deploys.

This means, for now, I can’t do delayed publishing or certain other things with my Hugo site, but it works for my current needs just fine.

WordPress Sites

I do not use WordPress. I don’t like WordPress, but my friends and family are not technical people and WordPress is pretty much the easiest thing for them to use. So I enable their use of WordPress.

Running WordPress is not problem-free in my setup.

WordPress loves state

Running WordPress in Kubernetes is not a great fit. Kubernetes works best with stateless systems. However, WordPress insists, even in 2024, that it really needs a writable file system. It uses that stateful file system for some pretty core bits of functionality that are hard to avoid:

  • The plugin system is built on that stateful storage, and
  • image and asset uploads are handled there.

I wish it supported object storage out of the box, but (last I checked) it doesn’t. I seem to recall searching for S3 plugins for this and being unable to find something that worked, but that was years ago. The system I have works for now.

WordPress gives me the jibblies

WordPress presents a second problem: it’s popular and has historically been problematic for security. I think this is not the problem it once was, but I still get twitchy thinking about zero-day attacks on my WordPress pods, so I need to be able to always keep it up to date.

Build System

To get a container image that I feel confident will stay up to date, I have a job that checks regularly for new docker images on Docker Hub. I use the standard WordPress container there as the base image. If a new version of the image is released, my check finds it and immediately triggers a fresh build.

My container uses a tool that can install plugins on the command-line and has some other things preconfigured the way I want. Whenever a new build is available, I get a text message letting me know (though, AWS is no longer sending me texts because I have to do some rigmarole to make the telcos happy that I’m not a spammer from the evil, Evil, EVIL Planet Tinko). I then trigger a redeploy using my setup-cluster command.

I have a plan to make these updates completely automatic, which I’ll probably implement when I’m bored some rainy day.

Managing State

The trick I use for state management is just a dumb S3 sync. I have a really dumb shell script I call periodic-s3-sync. This provides a dead simple and very dumb way of performing synchronization of files in either direction as an init container and/or side car.

In this case, I set up:

  1. An init container that performs a one-time sync from S3 to a local scratch disk during startup. That handles downloads.
  2. A sidecar that performs a periodic backup sync from the local scratch disk to the S3 bucket. That handles uploads.

So long as the number of assets to sync stays small, this is not too unreasonable. A more ideal solution would be to integrate directly with S3 or something, but so far this system works for me.
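Put together, the pattern looks roughly like the pod spec fragment below; the names, images, and bucket are placeholders, and my real setup uses the periodic-s3-sync script rather than raw aws-cli commands:

```yaml
# Illustrative init-container + sidecar sync pattern.
spec:
  initContainers:
    - name: restore
      image: amazon/aws-cli
      # One-time restore from S3 into the scratch volume at startup.
      command: ["aws", "s3", "sync", "s3://example-wp-assets", "/data"]
      volumeMounts:
        - { name: scratch, mountPath: /data }
  containers:
    - name: wordpress
      image: wordpress:latest
      volumeMounts:
        - { name: scratch, mountPath: /var/www/html/wp-content/uploads }
    - name: backup
      image: amazon/aws-cli
      # Periodic upload of new assets back to S3.
      command:
        - sh
        - -c
        - "while true; do aws s3 sync /data s3://example-wp-assets; sleep 300; done"
      volumeMounts:
        - { name: scratch, mountPath: /data }
  volumes:
    - name: scratch
      emptyDir: {}
```

Since the scratch volume is an emptyDir, a killed pod simply restores itself from S3 on the next start, which fits the nuke-and-rebuild approach described earlier.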

STFL: Sterling’s Truths For Life

Sterling’s Truths for Life is a list of aphorisms I’ve put together over the years, partly as a joke, and partly serious. The sorts of things a dad likes to repeat over and over again, right?

This site is a snowflake. I really want to find time to fix it, but I keep doing other things instead. It works and it can refresh itself at startup pretty fast, so I can’t really complain, but I’d like it to be something other than what it is. It probably ought to just be a Hugo site with a template I build from scratch, but whatever.


This site is deployed by adding a new truth file, which is just a small text-based file containing the aphorism, an attribution, tags, and any additional content I want to throw in with it. A custom-made Perl command turns these into static HTML; this script is triggered whenever the repository is pushed. The resulting site is then encapsulated into a container image. I then have to run setup-cluster to finish deployment.

This is a lot more manual than I like these days, so this is another one to bring up to state of the art.

Fade out…

I’m going to leave the rest off from here as they all start to resemble what I’ve already described and all probably need an overhaul to state of the art when I can get a Round Tuit. The problem is that I can never leave well enough alone and I have at least two or three new ideas simmering at the back of my brain. If I could ever settle down on something or were better at making friends, maybe I could turn one of these things into money, but in the meantime…


The content of this site is licensed under Attribution 4.0 International (CC BY 4.0).

Image credit: LI FEI on Unsplash.