Sterling has too many projects

Blogging about Raku programming, microcontrollers & electronics, 3D printing, and whatever else...


  • Category: Blog

I am going to be honest and upfront with you. I have almost no opinion about the name change of Perl 6 to Raku. I can see going to war for something meaningful, like religion. I cannot imagine doing so for something trivial, like a programming language. I won’t be upset if you call my hammer a mallet. There are certainly advantages and disadvantages to the name change, but as far as I’m concerned, it was inevitable as long as the issue kept being brought up year after year.

However, I do want to take this opportunity to tell a story about my history with programming, the Perl language, and how my story intertwines with Raku.

Commence third person:

Once upon a time, there was a nerdy little boy named Andy. Andy’s father was nerdy too, in his own way. His father had a thing for electronics. He spent his daytime hours helping people from his ambulance, then helping people buy and sell homes. After that he was helping people learn the benefits of Augmentin, Timentin, Paxil, Bactroban, and whatever other pharmacological wonders Beecham or SmithKline-Beecham were making. (His son remembers the names of all these because he still has a few packs of pens or notepads with these names on them sitting around multiple decades and half a dozen moves later.) Despite his daytime work paying the bills, he maintained a certain evening passion for computers and electronics.

Andy’s father thought personal computers were really cool. He would visit the hobby computer store downtown and look over all the 'wares. He would browse computer magazines like Compute! and check out the latest and greatest equipment and software ads. He didn’t have a lot of money, so he decided to start modestly, with an Atari 400. Armed with an Atari 400, Atari BASIC, and a special cassette tape deck, he taught Andy how to copy BASIC programs out of Compute! magazine to make simple video games. A new software developer was conceived in the slow hours of pushing buttons on that horrid membrane keyboard. Andy, being only 5 or 6 and barely able to read, wrote programs that didn’t work, but his father painstakingly debugged them, armed only with a pre-med degree, a desire to teach, and a love for these quirky electronic computer boxes.

As Andy got older, Andy got tired of being Andy. He decided that he needed a more mature moniker. So he became Andrew. Andrew’s father eventually replaced the Atari 400 with a Zenith Heathkit 8086 (properly pronounced the “eight-oh-eighty-six” in case you didn’t know). It was an amazingly heavy and bulky machine, with an external keyboard and a green monochrome monitor. It was very powerful, though, with its bank of 16KB of memory and two 5¼" double-density floppy drives. No more fiddling with cassettes! But it had to be assembled. It was a Heathkit after all. While Andrew’s dad was assembling this 8086, solder joint by solder joint, Andrew spent the summer taking computer classes from Ms. Francisco at the local community center.

One of Andrew’s father’s favorite stories is the day he finished putting that 8086 together. He snarkily said to his son, “Well, now that it’s built, what do we do with it?” His son, being a very literal young man, rolled his eyes and replied, “You put a disk in like this and turn it on like this.” He quickly booted up this new machine and showed his father how to use this newfangled DOS thing.

Andrew continued to copy programs out of new editions of Compute!, but now for GW-BASIC running on Zenith Z-DOS 1.19. He also began dabbling in writing his own programs. These were extremely simple, but his knowledge was beginning to expand and mature. Along the way the 8086 picked up another 48KB of RAM, a 20MB hard drive, a text-to-speech board, a 2400-baud modem, and then a final upgrade to 640KB. That final upgrade mod involved Andrew’s father spending days soldering hundreds of memory chips to a circuit board that seemed a mile long. He had to hot wire in extra connections from the power supply to allow it to work as the regular bus connection could not provide the required amperage by itself.

Around the age of eleven, things were changing rapidly for Andrew. The old 8086 was not aging well and was replaced by an 80386DX. His best friend, Lucas, who would eventually secure his own pre-med degree, had recently secured a copy of Turbo Pascal. He reluctantly shared a bootleg copy of this software as well as the name of a file on the local Sherwood dial-in BBS containing a Pascal tutorial. Armed with a ream of continuous feed paper printed all day and all night on the tremendously loud Epson dot-matrix printer, he began to teach himself the wonders of Pascal.

After moving from his childhood home in Lawrence, Kansas to a new home in Wichita, Andrew kept working in Pascal. He had few friends in his new home and none with whom he really felt close. In a period which is as pathetic as it sounds, Andrew’s computer was his best friend, now a 486DX2/66. Continuing in Pascal, he could build tools for generating character sheets and managing the MechWarrior games and other tabletop games he played with his brother. (Though, they called them “RPGs” even when they weren’t RPGs in those days.) Soon, Pascal started to even edge out video games as a dominant use of his time.

High school offered new opportunities to Andrew. The school taught an intro to programming course and an AP computer science course in Pascal. Pascal was old hat. He thought it would be a breeze and it was. He completed the intro course on the ancient Apple IIe computers the school still had in only 4 weeks, and spent the rest of the semester helping others and writing graphics demos for fun. He completed the AP course, now on newly acquired Apple PowerMacs, with equal speed, though it taught him several new tricks along the way.

At this point, however, he was no longer satisfied with Pascal. He really wanted to learn the language that all the best software of the day was written in, C++. [insert laugh track] He managed to get his hands on a copy of Turbo C++ and delved into the details of memory models, embedded assembly, and graphics drivers. For his sixteenth birthday, Andrew’s father did not buy him a car or any of the things a more typical 16-year-old might like. Instead, Andrew received Borland C++ 3.1 with Application Framework. Twenty pounds of manuals, forty 3½" floppy installation disks, and all. Best. Birthday Present. Ever.

Along the way here, he became convinced that Christianity was a pretty big deal in his life. And so, when he went to choose a college, he picked one where he could learn more about Christianity and computer science. At the small college where he ended up, there were two other Andrews on his floor. They decided that it was too confusing for everyone to be Andrew, so the sophomore kept his name and the two freshmen changed to use different names. Our Andrew became Sterling, a middle name Andrew shared with his grandfather, who despised the name. (His grandfather much preferred his own first name, Delmont, but that’s a story for another time.)

At college, everything was Java because Sun had managed to convince State U. to teach it. Sterling definitely never liked Java. It always seemed slow, verbose, and weirdly stilted. He was used to passing around function pointers; Java had none. Sterling had become accustomed to releasing his own resources; Java didn’t let you, preferring instead to lock up your program for multiple seconds at a time to do that for you more efficiently. He much preferred C++, but even C++ was troublesome, especially as that language continuously increased in complexity in a false search for perfection. He wasn’t satisfied with what he knew. He even began reading books on language design and grammars with the thought that maybe he wanted something different.

However, life goes on, and while he may have written a few toy languages, he didn’t dive deeply enough to make anything meaningful. Besides, he needed a real job. So to help pay for college (and pay for being married and to help his wife through college), he took a job as a Network Consultant. Between classes he would travel around northeast Kansas in his Dodge Dakota and plug in computers at grain distributors, configure NetWare servers at accounting offices, and pull all-nighters to rebuild Linux mail servers for police stations.

Along the way, Sterling encountered his first Perl program, Anomy Sanitizer, an early e-mail protection program. Sterling’s boss wanted to customize it to deal with a particular problem one of his clients was having. Sterling did and in the process was introduced to Perl 5.6. He didn’t think much about it in 1999. Perl looked sufficiently C-ish that he could wade through it without really learning it. So he solved the problem and moved on.

As he finished up his undergraduate degree and began grad school (to the surprise of everyone, including himself), he briefly took a job as a research assistant. The research Sterling was helping with generated copious piles of data in long log files. However, to understand the data, he needed to summarize it, and as anyone familiar with C can tell you, parsing and summarizing textual data in C is a real pain, which is why almost no one uses it for that task directly. Sterling decided to write the summarizer in a new language he’d just heard of called Python and quickly discovered he was not a fan. So he decided to try again in Perl, and he found the language to be familiar and expressive and wondered why he hadn’t noticed how nice it was during his previous experience.

This was around the time when Perl 5 was first experiencing some pain around major changes certain members of the Perl community wanted to see made to the language. This was the era of the first Perl Apocalypses. As Larry and Damian began writing apocalypses, then exegeses, and finally synopses, Sterling read these avidly and found them fascinating. Sterling drank it all up. He really liked the way Larry Wall thought about software problems, and the way Larry entangled his Christian beliefs with his work was intriguing. This just felt right to Sterling. So Sterling decided that if he could, he wanted to work in Perl until this Perl 6 became a reality and he could work in this fantastical new language.

Over the years, he watched the development of tooling like Parrot and Pugs and MoarVM and NQP, mostly from afar. Though, Sterling did briefly teach a Computer Architecture class at State U. He taught the VM part of the class using Parrot back in 2003. He also completed his Master’s Report using Perl to build a multi-agent system coordinated through a Perl-based ticketing system called RT.

Then, he got a real job. His real jobs initially involved programming PHP, Perl, and Java, then focused on Perl and JavaScript, and later became an amalgam of Perl, Python, JavaScript, Go, Bash, and various others. His hobby and home projects generally involved Perl, but also Bash, C, C++, Java, JavaScript, PHP, Python, and Scala. As with anyone who lives in the unix-ish world, Sterling was a polyglot, of course.

Fast forward to around 2015 and the rumors were getting louder that when Larry Wall said Perl 6 would be released on Christmas, he meant this Christmas. It was finally time to start really digging in. Sterling started by seeing about porting some modules. He then got to remake his efforts following the Great List Refactor of 2015. Since 2015, Sterling has worked to steadily improve his fluency in the language, releasing more than a dozen modules to the Perl 6 ecosystem, and giving some talks at The Perl Conference related to his O(fun) work.

For those fun at-home projects in his spare time, if it can be reasonably done in Perl 6, Sterling has been doing it in Perl 6. Moving forward, if it can be done in Raku, he will be doing it in Raku. However, over the past 20 years, he’s also developed a deep and loving knowledge of Perl 5. Perl 5’s greatest weakness is its commitment to backward compatibility. Raku’s greatest weakness is its commitment to break things in a significant way every year. This year’s break: Renaming to Raku.

That’s Sterling’s story of Perl/Raku, the Universe, and Everything.

Returning to first person, Perl still pays my bills (rather well!) and I see no reason to believe that’s going to change. As of this writing, I just completed my first production project in Raku as a volunteer gig. (I hope to write about that soon.) Yet, I have never been paid to write Raku and can’t imagine at this point when I ever will be. Times change, but I’m a go-with-the-flow kind of guy, so it’ll happen if and when it happens. In the meantime, I’m having fun using both of these hammers (or mallets) to bang on the problems I come across. I love both Perl and Raku and I don’t intend to give up loving either of them just because of this name change. I hope to keep a foot in both communities as well.


P.S., Special thanks to my wife who helped edit this. I love you.


  • Category: Blog

First, I want to say thank you to Andy Lester who has been project lead on vim-perl6 as well as the other contributors, especially Hinrik Örn Sigurðsson, Rob Hoelz, and Patrick Spek.

The plugin will now be homed in the new Raku organization on GitHub here:

As of right now, it has been updated such that it will handle the old Perl 6 filenames as well as the newer ones, so far identified as .raku, .rakutest, and .rakudoc. It also handles .t6 which is something that the plugin didn’t handle previously.

I also renamed the internals so most of the references to p6 or perl6 have been changed to raku, so when you update the plugin, if you made use of any of the (mostly undocumented) perl6_ variables you will need to update those settings accordingly. I have not made any non-trivial changes to how the plugin is structured or works.

I have also modernized several of the keyword lists. For example, the async keywords such as react, supply, and whenever will now highlight, as well as several previously unhighlighted traits such as DEPRECATED and pure. I have also removed highlighting for some older keywords that date back to Pugs, such as async and defer.

I will try to work with Bram Moolenaar to get these changes into the vim release, if I can. However, in the meantime, you’ll probably need to install the module from the master branch if you want it.

I will begin migrating issues from the old vim-perl6 project and see if I can take a whack at any of them that are outstanding. However, I would very much appreciate help with maintenance on this, so if you depend on vim and make use of this plugin, I would very much welcome your feedback and pull requests.

Happy New Year and Cheers!


  • Category: Blog

Quickly now, let’s consider the difference between a sub and a method. When programming Perl 6, the only significant difference between a sub and a method is that a method always takes at least one positional argument whereas a sub only takes what’s listed in the parameter list. In a method, the required first positional parameter is not passed as part of the parameter list, but assigned to self.

For example,

class Demo {
    has $.value;
    sub foo(Demo $val) is export { put $val }
    method bar() { put self }
    method Str() { ~$.value }
}

import Demo;

my $demo = Demo.new(value => 42);
foo($demo);  # OUTPUT: «42»
$demo.bar;   # OUTPUT: «42»

Ready for the trick? The subroutine can be called as a method too, like this:

$demo.&foo; # OUTPUT: «42»

That’s it. Any subroutine can be used as a method by using the .& operator to make the call. The object before the operation will be passed as the first argument.
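To spell out the variations implied above (the sub names here are invented for illustration, not part of the original example), a quick sketch:

```raku
# Any callable works after .&, including one defined on the spot.
sub double($n) { $n * 2 }
say 21.&double;          # OUTPUT: «42»

# Extra arguments go in parentheses after the name;
# the invocant is still passed as the first argument.
sub scale($n, $factor) { $n * $factor }
say 21.&scale(2);        # OUTPUT: «42»

# Even an anonymous block can be called this way.
say 21.&{ $_ * 2 };      # OUTPUT: «42»
```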

My favorite usage of this feature is this one:

use JSON::Fast;
my %data = "config.json".IO.slurp.&from-json;

There’s more, but I’m just posting this quickly.



  • Category: Blog

I started this post as a rehash of Modules with some additional details. However, as I started running my examples, I found out that while the documentation on modules is good, it does not tell the full story regarding exports. As I do not want to write a manifesto on module exports, I’m going to assume you already read the above document and understand Perl 6 exports. If not, go read it and I’ll wait for you to return.

Ready? Okay, let’s go.

First, let me explain why I’m on this odyssey: I am writing a module, let’s call it Prometheus::Client, and that module really needs to export some symbols to be useful. However, due to how I’ve structured things, I might prefer that the symbols I export actually be located in other compilation units, for example, the file declaring the Prometheus::Client::Metrics module. That means I need a way to re-export the exports of another module. I’ve done this before on a small scale for some things, but this is going to be much expanded. I wanted to make sure I knew what Perl 6 was doing before I started. In the process I discovered that the exports rabbit hole is deeper than I’d originally thought.

Let’s start with this simple statement from the Modules documentation I mentioned above:

Beneath the surface, is export is adding the symbols to a UNIT scoped package in the EXPORT namespace. For example, is export(:FOO) will add the target to the UNIT::EXPORT::FOO package. This is what Perl 6 is really using to decide what to import.

This is followed by the claim that this code:

unit module MyModule;
sub foo is export { ... }
sub bar is export(:other) { ... }

Is the same as:

unit module MyModule;
my package EXPORT::DEFAULT {
    our sub foo { ... }
}
my package EXPORT::other {
    our sub bar { ... }
}

If I were a “fact checker” I’d have to rate the quote “is the same as” regarding these two code snippets as “Half True” at best. It does, in fact, create these packages. However, that is not all it does.

This becomes clear if you understand the implications of the Introspection section of that same document. There it shows code like this:

use URI::Escape;
say URI::Escape::EXPORT::.keys;

And this:

say URI::Escape::EXPORT::DEFAULT::.keys;
# OUTPUT: «(&uri-escape &uri-unescape &uri_escape &uri_unescape)␤» 

If you aren’t careful, you won’t see it. I didn’t until I started playing around with code to see what works. Those is export lines in that MyModule example above do not only create the UNIT scoped package. They also create an OUR scoped package inside the current package namespace that can be used for introspection.

So, if you really want to replicate what is export does internally when it calls Rakudo::Internals.EXPORT_SYMBOL, you will have to do something like this for the complete MyModule-without-is-export implementation:

unit module MyModule;

my package EXPORT::DEFAULT {
    our sub foo { ... }
}

my package EXPORT::other {
    our sub bar { ... }
}

# Create a package object to be MyModule::EXPORT
my $export = Metamodel::PackageHOW.new_type(:name('EXPORT'));

# Create a package object to be MyModule::EXPORT::DEFAULT
my $default = Metamodel::PackageHOW.new_type(:name('DEFAULT'));

# Add the &foo symbol to the introspection package
$default.WHO<&foo> := &UNIT::EXPORT::DEFAULT::foo;

# Create a package object to be MyModule::EXPORT::other
my $other = Metamodel::PackageHOW.new_type(:name('other'));

# Add the &bar symbol to the introspection package
$other.WHO<&bar> := &UNIT::EXPORT::other::bar;

# Add DEFAULT and other in EXPORT
$export.WHO<DEFAULT> := $default;
$export.WHO<other> := $other;

# Add EXPORT in MyModule
MyModule.WHO<EXPORT> := $export;
I haven’t yet found a cleaner way to do all that extra stuff at the bottom without doing all this introspective symbol table munging. However, code like this will get you pretty close to what is export does internally. I also haven’t even delved into what an EXPORT sub does by comparison. I’ll save that for another time.

I should also mention that I came across a module in the Perl 6 ecosystem named CompUnit::Util which might be useful to me on my Prometheus::Client problems and maybe even for setting up the two EXPORT modules too. However, I haven’t really dived into that any further than noting the age of the code and that it makes use of undocumented-but-roasted methods of Stash to do whatever it does. I may look at it later when I’m less tired. Or maybe I will just decide to build my new library in a completely different way. Whatev'.



  • Category: Blog

A year ago (or probably two now), I converted all of my personal services into a Kubernetes cluster. We’ve been moving toward Kubernetes at work and so I wanted to know how it worked from the inside by building a cluster myself. Part of that work includes hosting a site for the homeschool co-op my wife leads, a web site for a weekly “geek lunch”, and our city mayor’s campaign web site. I’m not an especially huge fan of WordPress, but it does the job as long as you’re careful with it.1

Up until now, I’ve always performed the necessary WordPress updates semi-manually. I set up a CI/CD system that builds my custom WordPress container image whenever WordPress releases a new patch and then sends me a text message. I then have a script I run on my laptop which installs the latest patch with a single click and then I check my WordPress sites manually to make sure they came back up. Tedious, but WordPress releases are not super often and it’s only three sites, so no biggie. However, it has been on my to-do list to set up automatic updates. The prerequisite for this, though, is monitoring so I will know when an update goes pear-shaped. As I require monitoring to succeed at another volunteer project I’ve taken on, it is now time to get that monitoring set up. Here’s what I did, in case it helps others, particularly those with micro-sized Kubernetes clusters like mine.

Setting up Prometheus

First, I based my work on the work of others. I virtually copied the work of Linux Academy’s Running Prometheus on Kubernetes, written earlier this year. It got me started in 5 minutes without really understanding anything about how Prometheus works. I will not repeat anything you can find there.

Once this is in place, you will have a pod that will give you various statistics about your Kubernetes cluster. However, the Linux Academy article doesn’t explicitly tell you how to get at them. There are a couple options, but if you don’t intend to publish your Prometheus server through a load balancer, I suggest just using a port forward setup whenever you need to see what’s going on in Prometheus. I need to put the following into a script so I don’t have to remember it, but here’s the formula:

kubectl port-forward -n monitors deployment/prometheus-deployment 9090

(I renamed the monitoring namespace monitors for my use because reasons.) As long as that command is running, I can hit localhost port 9090 in my browser and see what Prometheus is doing.

Setting up WordPress

Okay, so now I have Prometheus monitoring Kubernetes, but that tells me nothing about WordPress yet. For that you have to understand something about Prometheus: it depends on something called an “exporter” to provide metrics to Prometheus. Basically, you need an HTTP endpoint for anything you want to monitor that will return a set of text lines describing the current state of the service.
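To make that concrete, here is a sketch of the kind of output an exporter endpoint returns, in the Prometheus text exposition format. The metric names and values below are made up for illustration; they are not what any particular WordPress exporter actually emits:

```text
# HELP wordpress_post_count Number of published posts
# TYPE wordpress_post_count gauge
wordpress_post_count 128
# HELP wordpress_user_count Number of registered users
# TYPE wordpress_user_count gauge
wordpress_user_count 3
```

Each scrape, Prometheus fetches this page and stores every metric line as a timestamped sample.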

If you look for published WordPress plugins for something to do this, you probably won’t find much. After some Googling, I came across an article on Erwin Müller’s blog titled Monitoring WordPress with Prometheus in a Kubernetes Cluster. He employs a couple different exporters, but I think one of them is redundant. I chose to just go with the second one he uses because it’s simple and I can install it straight from GitHub. Therefore, I forked wordpress-exporter-prometheus from origama to start. That way, I control the source code in case he decides to make some drastic change or even just remove the project from his GitHub account or whatever.

I added this plugin to the configuration of all of my WordPress pods and activated it in each. So now I have an endpoint in each named /wp-json/metrics that contains metrics in the format Prometheus can use. I’ve just kept it public because it’s really not too scary if someone finds out how many draft posts or total user accounts there are on these sites. However, if the metrics were secretive, I would want to add basic auth or something to them.

Then, I added the following lines to each of the service configurations for WordPress deployments in Kubernetes:

  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '80'
    prometheus.io/path: '/wp-json/metrics'

Prometheus can use these annotations to automatically discover the metrics to monitor for each service. Prometheus will (with the configuration from the blog post mentioned above) also import metadata related to each such service, deployment, pod, etc.
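For reference, that discovery is driven by relabeling rules in the Prometheus scrape configuration; the config from the Linux Academy article should already contain something along these lines (shown here only to illustrate how the annotations get consumed, not as a complete scrape config):

```yaml
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # Only scrape services annotated prometheus.io/scrape: 'true'
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # Honor the prometheus.io/path annotation, if present
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
```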

Setting up all the rest

At this point, I have a working Prometheus server, a plugin for reporting metrics (including the up/down signal I am most interested in), and Prometheus is collecting these metrics. But now what? I have to do something with those metrics. Now, I need to get from here to paging me when something goes awry. I pieced the rest together from reading the Prometheus Github repositories and not-so-very-nice reference docs on the Prometheus web site. However, in the end, I did find and complete the following steps to reach my goal.

To get from here to the finish we need the following:

  1. We need to set up rules to identify the metrics we are interested in signalling on.

  2. We need a way to receive the pager alerts.

  3. We need to set up Alertmanager to do the work of turning the alert signals into working pager alerts.

Writing the alert rules

The rules are part of Prometheus proper. To set these up, first you need to add something like this to your prometheus.yml configuration:

rule_files:
  - 'alerts.rules'

Rules are a way to ask Prometheus to store extra computed information about your metrics. Rules are also used to identify alerts. From here, I created my alerts.rules file like so:

groups:
- name: AppMonitors
  rules:

  - alert: CriticalDown
    expr: up{monitor_priority="critical"} == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Critical App {{ $labels.kubernetes_name }} down"
      description: "{{ $labels.kubernetes_name }} has been down for more than 5 minutes."

  - alert: ImportantDown
    expr: up{monitor_priority="important"} == 0
    for: 15m
    labels:
      severity: important
    annotations:
      summary: "Important App {{ $labels.kubernetes_name }} down"
      description: "{{ $labels.kubernetes_name }} has been down for more than 15 minutes."

I am a complete amateur at this, so this is probably badly done, and I know something is wrong with my template variables since they are not being interpolated correctly in my alerts, but this is the gist. Despite these problems, however, it still does what I want. When something goes down, my phone nags me about it. I can fix the details later.

As an aside, you should make liberal use of labels on your services when coordinating things like Prometheus. For example, the service config for my most important WordPress sites include this label:

    monitor_priority: critical

The alert rules above identify the up metric for sites matching these labels. If a service with these labels reports up as 0, it will be added to the alert metrics and trigger an alert. After installing this configuration, I can see the status of these alerts in the Prometheus web interface. If I deliberately take a service offline, the status on the Alerts section of the web interface changes. Therefore, we now have the alerts identified.

Setting up Opsgenie

Now we’re ready to set up how we want to receive our notifications. I am not going to use email to receive alerts. This is not 2001. My email is flooded with too much noise already and I’ll just ignore them there like I ignore 99% of my email. These alerts need to make my phone ding and annoy me until I fix them. This is where Opsgenie comes in. The two obvious picks (in my mind) were either PagerDuty or Opsgenie. However, Opsgenie has a free tier and PagerDuty does not, so Opsgenie wins. Honestly, I could probably get by with an SNS queue, but Opsgenie is easy to configure in Prometheus, so let’s go with it.

I set up a free account for myself, set up a team with myself as the sole member so I get to be the one on call all the time, and configured an integration. I copied down the API key and now I’m ready to configure Alertmanager to connect my alerts to Opsgenie.

Setting up Alertmanager

The last step is that I need a tool running in my cluster to forward the alerts from Prometheus to Opsgenie. The tool for this is called Alertmanager, which is another tool in the Prometheus ecosystem. For this, I crafted my own setup from scratch. My Kubernetes configuration for Alertmanager looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-alertmanager-conf
  labels:
    name: prometheus-alertmanager-conf
  namespace: monitors
data:
  alertmanager.yml: |
    route:
      receiver: opsgenie

    receivers:
    - name: opsgenie
      opsgenie_configs:
      - api_key: SECRET
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: monitors
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: alertmanager
        software: prometheus
    spec:
      containers:
      - name: alertmanager
        image: prom/alertmanager:v0.18.0
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        ports:
        - containerPort: 9093
          name: alertmanager
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/alertmanager/
      volumes:
      - name: prometheus-config
        configMap:
          defaultMode: 420
          name: prometheus-alertmanager-conf
---
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitors
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'
spec:
  selector:
    app: alertmanager
  ports:
  - port: 8080
    targetPort: alertmanager

That’s it. I loaded that into Kubernetes. The Alertmanager is now ready for me to send alerts through it. We can test it real quick by setting up a port forward:

kubectl port-forward -n monitors service/alertmanager 8080

And then send it a test alert via curl like so:

curl -H "Content-Type: application/json" -d '[{"labels":{"alertname":"Test"}}]' localhost:8080/api/v1/alerts

About 5 minutes after running that curl command, my phone dings to let me know an alert has been received. (The delay is because I’ve left all the group delay and other defaults in place for now.)

That’s pretty much it for Alertmanager. However, there’s one teensy little thing we need to do to Prometheus to complete the configuration. Prometheus needs to push alerts to Alertmanager. This is done via the following configuration in prometheus.yml:

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - 'alertmanager.monitors.svc:8080'

After all this, I can restart my Prometheus pod and when one of my WordPress sites has downtime, my phone pages me within 10–20 minutes. That’s good enough for my little teeny sites. I tested by deliberately taking one of the sites offline.


I hope that this information is useful to someone on the Internet. If not, it will end up being useful to me in 14 months when I next go to figure out what I did to set this up and what I need to remember when I next need to work with it.


  1. By being careful, I mean always keep WordPress and all plugins and themes patched and up to date and be very careful and conservative about which plugins and themes you install.  ↩


  • Category: Blog

I am working on a robotics project with my 3 boys. For this project, I have designed the core components to be a Raspberry Pi, an Adafruit Circuit Playground Express (CPX), and an Adafruit Crickit Hat. The Raspberry Pi will be running firmware written by me to control the Crickit Hat. This firmware will directly drive the sensors, the motors, the servo, and other hardware. The Raspberry Pi also communicates over USB to the CPX to provide high level sensor information and receive commands. My sons will be programming the CPX microcontrollers using Microsoft MakeCode, which provides a simple block programming interface for the younger boys and a TypeScript interface for the oldest one.

That’s all fine and relatively straightforward until we try to actually get things talking to each other. The first problem I ran into is that even though every board (as far as I am aware) that MakeCode works with features an on-board serial-to-USB chip, the MakeCode firmware opts not to use it. And yet, it supports serial communication. Instead of using USB as a serial device (I mean, USB is the Universal Serial Bus after all), they send serial data over a custom HID protocol they call HF2 (the HID Flashing Format, cute abbreviation, eh?). It supports other commands too, like resetting the firmware and flashing the firmware: basically the sort of stuff that the Arduino IDE does over USB with the help of the hardware chip. This is silliness in my opinion, but whatever. I will cope.

The coping involves creating a NativeCall wrapper for Perl 6 called Device::HIDAPI, which wraps the hidapi C library. This library can be used to access non-standard HID devices to send and receive HID reports. The same library is portable across Windows, Mac, and Linux. Whee! My main laptop is a Mac and everything works fine when I communicate from my laptop to, say, an Xbox controller or the CPX for testing. Cool.

However, when I get ready to test it on Linux, either to run with Travis CI or run on the Pi, I have a new problem. The library on Mac is named libhidapi.dylib, which translates to 'hidapi' in the Perl 6 NativeCall interface.

sub hidapi_init(--> int32) is native('hidapi') { * }

However, the library has a different name on Linux. In fact, it can have two possible names: on Linux, the library is named either libhidapi-hidraw.so or libhidapi-libusb.so because there are two different implementations. This means I need the code above to be effectively:

sub hidapi_init(--> int32) is native('hidapi-libusb') { * }
# OR
sub hidapi_init(--> int32) is native('hidapi-hidraw') { * }
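The short name handed to is native gets expanded into a platform-specific file name, which is why a single declaration can’t serve both platforms. A quick shell sketch of the naming convention:

```shell
# NativeCall expands a short library name into a platform-specific file name:
for lib in hidapi hidapi-hidraw hidapi-libusb; do
    echo "is native('$lib') -> lib$lib.dylib on Mac, lib$lib.so on Linux"
done
```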

Obviously, I can’t do that. We could start working towards something workable by moving to a constant we can easily switch for all of the native subs:

constant HIDAPI = 'hidapi';
sub hidapi_init(--> int32) is native(HIDAPI) { * }

However, I do not want to modify that constant every time I switch between Mac or Linux. That’s a nonstarter. I most certainly don’t want to tell everyone installing it from the ecosystem that they have to fetch it, edit a file, and then build and install it. I’m not a hater.

As an avowed Linux nerd and shell programming geek, the thing I really want to do is something like this:

sub hidapi_init(--> int32) is native(%*ENV<HIDAPI>) { * }

However, while that might look sensible at first blush, it will definitely not work the way it looks like it does. What’s the problem? The is native trait will only be set at compile time (i.e., the point when zef install Device::HIDAPI is run). Yet, using an environment variable suggests that this is something that can be set on every run. It won’t be. This is bad news bears. Don’t do it.

It would be possible to use something like no precompilation to force a fresh build every time, but that means Rakudo is going to be generating the stubs and code for NativeCall and compiling the C and the MoarVM byte code every single time the library is used. That’s horrible. I don’t want to do that. I mean, really, setting an environment variable to choose a library name at runtime is a pretty ugly kludge in my opinion anyway. I won’t do it.

My solution is to add a Build.pm to the project. This is a somewhat undocumented feature for building a Perl 6 module (at least, I couldn’t find it on my last search of the docs), but it is the correct way to introduce any compile-time setup your module requires when distributed through the Perl 6 ecosystem.

Basically, a Build.pm file defines a class named Build that has a method named build, which is called at build time. (Are you seeing a pattern here yet?)

use v6;

class Build {
    method build($workdir) {
        # do build-time stuff here
    }
}

The $workdir that is passed is the path to the directory the project is being built in. From there, your build method can modify the project in any way necessary to suit its needs.

If you are familiar with autoconf, what comes next will feel familiar. To handle the hidapi case, I created 3 nearly identical Perl 6 scripts that each look something like this:

use v6;
use NativeCall;
sub hidapi_init(--> int32) is native('hidapi') { * }

I have one for each potential library name. Then I have code that iterates through each name and checks to see if the code runs without error:

    constant LIBS = <hidapi hidapi-hidraw hidapi-libusb>;

    # returns the libraries that run without error
    # returns the libraries that run without error
    method try-libraries($workdir --> Seq) {
        gather for LIBS -> $try-lib {
            try {
                EVALFILE "$workdir/test/$try-lib.p6";

                # if we get here, the code didn't blow up
                take $try-lib;
            }
        }
    }
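The gather/try/EVALFILE combination is just a probe-and-pick pattern: try each candidate and keep the ones that don’t blow up. The same idea sketched in shell, probing for an available command instead of a loadable library (the names here are purely illustrative):

```shell
# Try candidates in order; keep the first one that works.
for candidate in no-such-thing-here sh; do
    if command -v "$candidate" >/dev/null 2>&1; then
        echo "using $candidate"
        break
    fi
done
# → using sh
```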

Then, I select the first library found and put it into an auto-generated config package I can reference in my main code:

    method build($workdir) {
        my $lib = self.try-libraries($workdir).first;
        mkdir "$workdir/lib/Device/HIDAPI";
        spurt "$workdir/lib/Device/HIDAPI/Config.pm6", qq:to/END/;
            # DO NOT EDIT. This file is auto-generated.
            use v6;

            unit package Device::HIDAPI::Config;

            our constant \$HIDAPI = q[$lib];
            END
    }

Now, whenever someone (probably mostly me) installs my module with zef install Device::HIDAPI, this build script will run, test to see which hidapi library is available, and create the configuration file. And with that, I have a pre-compiled and portable builder for my library.



  • Category: Blog

For my first real post of my new blog, let’s talk about multi-stage Docker builds. This blog is built with the aid of just such a build. A multi-stage Docker build gives you the ability to build multiple container images from a single Dockerfile. In the case of my build, it helps me build a single end product that’s uncluttered by extra build configuration and tooling.

Why multi-stage?

Docker holds a place in the modern development pipeline for deployed applications similar to the role the classic Makefile has in building packaged applications. (There’s a reason the Dockerfile and Makefile use a similar nomenclature, after all.) However, when building a deployed application, you often need to perform build tasks that create clutter. For example, if you’re building a C program, you’ll likely need to install the build tools (sudo apt-get install build-essential), and then when you run make you get all those intermediate .o files needed for linking, and maybe you need to install Perl or Python to help run some of the glue code, etc.

Ideally, to save space, improve security, and just generally avoid having extra junk lying around, you want to avoid having all that flotsam and jetsam in your container images. You could try to be extra diligent and uninstall those things and delete your files, but you probably won’t, and there’s no guarantee you’ll really be able to get things pristine again. Furthermore, due to the multi-layered nature of a Docker container image, those files you delete are still sort of there; they’ve just been hidden.

One way to rescue yourself from these problems is to use a multi-stage build in your Dockerfile. A multi-stage container helps clean-up the clutter without having to perform rm commands or apt-get remove. It also prevents carrying around deleted files in hidden layers because your final build can simply omit those layers when you use a multi-stage build.

How to multi-stage build?

A multi-stage build is really easy to do. First, make sure you have the latest version of Docker installed because this is a newer feature. (Also, security updates are a thing. Why would you run an older version of Docker!?) Second, you need to have multiple FROM commands in your Dockerfile. Each FROM starts what amounts to a new container image:

FROM debian:latest AS builder

RUN apt-get update && apt-get install -y git build-essential cmake
COPY /c-src /scratch

WORKDIR /scratch
RUN make

FROM rakudo-star:latest AS release 

COPY --from=builder /scratch/cool-program /usr/bin/cool-program

COPY /p6-src /app

This Dockerfile will now build two separate container images, the second of which will contain a file built in the first. All the extra source code and junk added to build cool-program is only in the build container and left out of the final image. This leaves your cooler-program completely free of all that unnecessary clutter.

If you build this and push the release container to Docker Hub, only the build system will have the image layers for builder. You can, if you want, push the other images in the multi-stage build, but for this example, I probably wouldn’t. Only release is important.

This Web Site

I am writing this because I wrote just such a Dockerfile to build this web site, which uses a three-stage build.

  1. Stage 1: Build MultiMarkdown from source.
  2. Stage 2: Install the Perl 6 code for the static site generator and all dependencies.
  3. Stage 3: Copy the built bits to the final result for deployment.

This demonstrates how to cleanly build a Perl 6 installation without keeping the original source around.

Regarding Perl 6 Builds

If you are doing this with Perl 6, the following knowledge may save you some time: when you use zef to do your build on a rakudo-star base container, all the build files go into the directory named /usr/share/perl6/site. You can safely copy that directory from one Rakudo Star container to another and you will have everything you need.


P.S. Here’s the source for this Dockerfile as of this writing. The above example is shorter, but I haven’t actually run that one. This one should work because I copied it straight from the project into here.

FROM debian:latest AS multimarkdown-builder

RUN apt-get update -y
RUN apt-get install -y git build-essential cmake

RUN mkdir /scratch
WORKDIR /scratch

RUN git clone --recursive multimarkdown \
    && cd multimarkdown \
    && make release \
    && cd build \
    && make

FROM rakudo-star:latest AS zostay-builder

RUN zef update

COPY . /app


RUN zef install .

FROM rakudo-star:latest AS zostay

COPY --from=multimarkdown-builder \
    /scratch/multimarkdown/build/multimarkdown \
    /usr/local/bin/multimarkdown

COPY --from=zostay-builder \
    /usr/share/perl6/site \
    /usr/share/perl6/site


ENTRYPOINT ["/usr/share/perl6/site/bin/zostay"]
CMD ["build-loop", "/src", "/dst"]


  • Category: Blog

So here’s the story. For most of my adult life I have had a blog of some kind. I like to write. It’s a great way to blow off steam. I also like to brag about things I’m doing or learning about. It’s a pride issue, but I like to think I do interesting things others would be interested in too.

My last blog atrophied, as such things do. My posts had gotten short enough to be in Facebook post range, so I’ve just been doing that instead. However, Facebook doesn’t really have the right audience for the writing I want to do at this time, so I’m starting a blog up again.

This blog is going to be laser-focused on the various O(fun) projects I’m working on. These will generally involve programming in Perl 6, electronics, micro-controllers, and 3D printing. It may also involve things about home improvement, programming in other languages, or other technical things I’m learning about and find interesting.

In the past, I have often blogged about philosophy, religion, and politics, but the current cultural environment tends to erase my point of view on these things. A public blog on the Internet is no longer a safe space for me to have such discussions.

For the time being, there will be no comments section, but if you want to see one added, tell me in the comments. Hur hur. (I may experiment with adding comments at some point, but I’m more likely to start a subreddit or encourage discussion on Twitter or elsewhere instead.)

Anyway, that’s the intro.