Sterling has too many projects

Blogging about Raku programming, microcontrollers & electronics, 3D printing, and whatever else...

»

Quickly now, let’s consider the difference between a sub and a method. When programming Perl 6, the only significant difference between a sub and a method is that a method always takes at least one positional argument whereas a sub only takes what’s listed in the parameter list. In a method, that required first positional argument, the invocant, is not declared in the parameter list; it is bound to self.

For example,

class Demo {
    has $.value;
    sub foo(Demo $val) is export { put $val }
    method bar() { put self }
    method Str() { ~$.value }
}

import Demo;

my $demo = Demo.new(value => 42);
foo($demo);  # OUTPUT: «42»
$demo.bar(); # OUTPUT: «42»

Ready for the trick? The subroutine can be called as a method too, like this:

$demo.&foo; # OUTPUT: «42»

That’s it. Any subroutine can be used as a method by using the .& operator to make the call. The object before the operator is passed as the first argument.
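
If the sub takes more arguments, you pass the rest in parentheses as usual; the invocant just fills the first parameter. With a hypothetical helper sub (not part of the class above):

sub repeat-put($val, $times) { put $val for ^$times }

$demo.&repeat-put(2); # prints 42 twice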

My favorite usage of this feature is this one:

use JSON::Fast;
my %data = "config.json".IO.slurp.&from-json;

There’s more, but I’m just posting this quickly.

Cheers.

»

I started this post as a rehash of Modules with some additional details. However, as I started running my examples, I found out that while the documentation on modules is good, it does not tell the full story regarding exports. As I do not want to write a manifesto on module exports, I’m going to assume you already read the above document and understand Perl 6 exports. If not, go read it and I’ll wait for you to return.

Ready? Okay, let’s go.

First, let me explain why I’m on this odyssey: I am writing a module, let’s call it Prometheus::Client, and that module really needs to export some symbols to be useful. However, due to how I’ve structured things, I might prefer that the symbols I export actually be located in other compilation units, for example, the file declaring the Prometheus::Client::Metrics module. That means I need a way to re-export the exports of another module. I’ve done this before on a small scale for some things, but this is going to be on a much larger scale. I wanted to make sure I knew what Perl 6 was doing before I started. In the process I discovered that the exports rabbit hole is deeper than I’d originally thought.

Let’s start with this simple statement from the Modules documentation I mentioned above:

Beneath the surface, is export is adding the symbols to a UNIT scoped package in the EXPORT namespace. For example, is export(:FOO) will add the target to the UNIT::EXPORT::FOO package. This is what Perl 6 is really using to decide what to import.

This is followed by the claim that this code:

unit module MyModule;
 
sub foo is export { ... }
sub bar is export(:other) { ... }

Is the same as:

unit module MyModule;
 
my package EXPORT::DEFAULT {
    our sub foo { ... }
}
 
my package EXPORT::other {
    our sub bar { ... }
}

If I were a “fact checker” I’d have to rate the quote “is the same as” regarding these two code snippets as “Half True” at best. It does, in fact, create these packages. However, that is not all it does.

This becomes clear if you understand the implications of the Introspection section of that same document. There it shows code like this:

use URI::Escape;
say URI::Escape::EXPORT::.keys;
# OUTPUT: «(DEFAULT ALL)␤»

And this:

say URI::Escape::EXPORT::DEFAULT::.keys;
# OUTPUT: «(&uri-escape &uri-unescape &uri_escape &uri_unescape)␤» 

If you aren’t careful, you won’t see it. I didn’t until I started playing around with code to see what works. Those is export lines in the MyModule example above do not just create the UNIT-scoped packages. They also create an OUR-scoped EXPORT package inside the current package namespace that can be used for introspection.
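
You can see this for yourself with the same kind of introspection shown above. Assuming the MyModule example lives in a file where use can find it, something like this should work (the exact key list may vary):

use MyModule;
say MyModule::EXPORT::.keys;          # something like (ALL DEFAULT other)
say MyModule::EXPORT::DEFAULT::.keys; # OUTPUT: «(&foo)␤»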

So, if you really want to replicate what is export does internally when it calls Rakudo::Internals.EXPORT_SYMBOL, you will have to do something like this for the complete MyModule-without-is-export implementation:

unit module MyModule;

my package EXPORT::DEFAULT {
    our sub foo { ... }
}

my package EXPORT::other {
    our sub bar { ... }
}

{
    # Create a package object to be MyModule::EXPORT
    my $export = Metamodel::PackageHOW.new_type(:name('EXPORT'));
    $export.^compose;

    # Create a package object to be MyModule::EXPORT::DEFAULT
    my $default = Metamodel::PackageHOW.new_type(:name('DEFAULT'));
    $default.^compose;

    # Add the &foo symbol to the introspection package
    $default.WHO<&foo> := &UNIT::EXPORT::DEFAULT::foo;

    # Create a package object to be MyModule::EXPORT::other
    my $other = Metamodel::PackageHOW.new_type(:name('other'));
    $other.^compose;

    # Add the &bar symbol to the introspection package
    $other.WHO<&bar> := &UNIT::EXPORT::other::bar;

    # Add DEFAULT and other in EXPORT
    $export.WHO<DEFAULT> := $default;
    $export.WHO<other> := $other;

    # Add EXPORT in MyModule
    MyModule.WHO<EXPORT> := $export;
}

I haven’t yet found a cleaner way to do all that extra stuff at the bottom without doing all this introspective symbol table munging. However, code like this will get you pretty close to what is export does internally. I also haven’t even delved into what an EXPORT sub does by comparison. I’ll save that for another time.

I should also mention that I came across a module in the Perl 6 ecosystem named CompUnit::Util which might be useful to me for my Prometheus::Client problems and maybe even for setting up the two EXPORT modules too. However, I haven’t dug into it any further than noting the age of the code and that it makes use of undocumented-but-roasted methods of Stash to do whatever it does. I may look at it later when I’m less tired. Or maybe I will just decide to build my new library in a completely different way. Whatev'.

Cheers.

»

A year ago (or probably two now), I converted all of my personal services to run on a Kubernetes cluster. We’ve been moving toward Kubernetes at work, and I wanted to know how it worked from the inside by building a cluster myself. Part of that hosting work includes a site for the homeschool co-op my wife leads, a web site for a weekly “geek lunch”, and our city mayor’s campaign web site. I’m not an especially huge fan of WordPress, but it does the job as long as you’re careful with it.1

Up until now, I’ve always performed the necessary WordPress updates semi-manually. I set up a CI/CD system that builds my custom WordPress container image whenever WordPress releases a new patch and then sends me a text message. I then have a script on my laptop that installs the latest patch with a single click, after which I check my WordPress sites manually to make sure they came back up. Tedious, but WordPress releases are not super often and it’s only three sites, so no biggie. However, it has been on my to-do list to set up automatic updates. The prerequisite for this, though, is monitoring, so I will know when an update goes pear-shaped. As I require monitoring to succeed at another volunteer project I’ve taken on, it is now time to get that monitoring set up. Here’s what I did, in case it helps others, particularly those with micro-sized Kubernetes clusters like mine.

Setting up Prometheus

First, I based my work on the work of others. I virtually copied the work of Linux Academy’s Running Prometheus on Kubernetes, written earlier this year. It got me started in five minutes without my really understanding anything about how Prometheus works. I will not repeat anything you can find there.

Once this is in place, you will have a pod that will give you various statistics about your Kubernetes cluster. However, the Linux Academy article doesn’t explicitly tell you how to get at them. There are a couple of options, but if you don’t intend to publish your Prometheus server through a load balancer, I suggest just using a port forward whenever you need to see what’s going on in Prometheus. I need to put the following into a script so I don’t have to remember it, but here’s the formula:

kubectl port-forward -n monitors deployment/prometheus-deployment 9090

(I renamed the monitoring namespace monitors for my use because reasons.) As long as that command is running, I can hit localhost port 9090 in my browser and see what Prometheus is doing.
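
For the record, the script I have in mind would be nothing fancier than this (hypothetical; I haven’t actually saved it anywhere yet):

#!/bin/sh
exec kubectl port-forward -n monitors deployment/prometheus-deployment 9090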

Setting up WordPress

Okay, so now I have Prometheus monitoring Kubernetes, but that tells me nothing about WordPress yet. For that, you have to understand something about Prometheus: it depends on something called an “exporter” to provide it with metrics. Basically, you need an HTTP endpoint for anything you want to monitor that returns a set of text lines describing the current state of the service.
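
To give you an idea, here is a made-up fragment in the text exposition format Prometheus scrapes (the real WordPress exporter uses its own metric names):

# HELP wordpress_post_count Number of published posts
# TYPE wordpress_post_count gauge
wordpress_post_count 128
# HELP wordpress_user_count Number of registered users
# TYPE wordpress_user_count gauge
wordpress_user_count 5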

If you look for published WordPress plugins to do this, you probably won’t find much. After some Googling, I came across an article on Erwin Müller’s blog titled Monitoring WordPress with Prometheus in a Kubernetes Cluster. He employs a couple of different exporters, but I think one of them is redundant. I chose to go with just the second one he uses because it’s simple and I can install it straight from Github. Therefore, I forked wordpress-exporter-prometheus from origama to start. That way, I control the source code in case he decides to make some drastic change or even just remove the project from his Github account or whatever.

I added this plugin to the configuration of all of my WordPress pods and activated it in each. So now I have an endpoint in each named /wp-json/metrics that serves metrics in the format Prometheus can use. I’ve just kept it public because it’s really not too scary if someone finds out how many draft posts or total user accounts there are on these sites. However, if the metrics were sensitive, I would want to add basic auth or something to them.

Then, I added the following lines to each of the service configurations for WordPress deployments in Kubernetes:

  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '80'
    prometheus.io/path: '/wp-json/metrics'

Prometheus can use these annotations to automatically discover the metrics to monitor for each service. With the configuration from the Linux Academy article mentioned above, Prometheus will discover each annotated service and attach the related Kubernetes metadata (service, deployment, pod, and so on) as labels on the scraped metrics.
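
For reference, the part of the Prometheus scrape configuration that makes those annotations work looks roughly like the standard Kubernetes service-discovery example below. I am reconstructing this from the usual example config rather than copying my own file, so treat it as a sketch:

scrape_configs:
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # only scrape services annotated with prometheus.io/scrape: 'true'
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # honor the prometheus.io/path annotation
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  # honor the prometheus.io/port annotation
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  # copy Kubernetes service labels (like monitor_priority later in this post) onto the metrics
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)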

Setting up all the rest

At this point, I have a working Prometheus server, a plugin for reporting metrics (including the up/down signal I am most interested in), and Prometheus is collecting these metrics. But now what? I have to do something with those metrics to get from here to paging me when something goes awry. I pieced the rest together from reading the Prometheus Github repositories and the not-so-very-nice reference docs on the Prometheus web site. However, in the end, I did find and complete the following steps to reach my goal.

To get from here to the finish we need the following:

  1. We need to set up rules to identify the metrics we are interested in signalling on.

  2. We need a way to receive the pager alerts.

  3. We need to set up Alertmanager to do the work of turning the alert signals into working pager alerts.

Writing the alert rules

The rules are part of Prometheus proper. To set these up, first you need to add something like this to your prometheus.yml configuration:

rule_files:
  - 'alerts.rules'

Rules are a way to ask Prometheus to store extra computed information about your metrics. Rules are also used to identify alerts. From here, I created my alerts.rules file like so:

groups:
- name: AppMonitors
  rules:

  - alert: CriticalDown
    expr: up{monitor_priority="critical"} == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Critical App {{ $labels.kubernetes_name }} down"
      description: "{{ $labels.kubernetes_name }} has been down for more than 5 minutes."

  - alert: ImportantDown
    expr: up{monitor_priority="important"} == 0
    for: 15m
    labels:
      severity: important
    annotations:
      summary: "Important App {{ $labels.kubernets_name }} down"
      description: "{{ $labels.kubernetes_name }} has been down for more than 15 minutes."

I am a complete amateur at this, so this is probably badly done. I also know that something is wrong with my template variables, since they are not being interpolated correctly in my alerts, but this is the gist. Despite these problems, it still does what I want: when something goes down, my phone nags me about it. I can fix the details later.

As an aside, you should make liberal use of labels on your services when coordinating things like Prometheus. For example, the service configs for my most important WordPress sites include this label:

  labels:
    monitor_priority: critical

The alert rules above match the up metric for services carrying these labels. If the up metric for one of these services reports 0, the alert expression matches and triggers an alert. After installing this configuration, I can see the status of these alerts in the Prometheus web interface. If I deliberately take a service offline, the status in the Alerts section of the web interface changes. Therefore, we now have the alerts identified.

Setting up Opsgenie

Now we’re ready to set up how we want to receive our notifications. I am not going to use email to receive alerts. This is not 2001. My email is flooded with too much noise already, and I’d just ignore alerts there like I ignore 99% of my email. These alerts need to make my phone ding and annoy me until I fix them. This is where Opsgenie comes in. The two obvious picks (in my mind) were either PagerDuty or Opsgenie. However, Opsgenie has a free tier and PagerDuty does not, so Opsgenie wins. Honestly, I could probably get by with an SNS topic, but Opsgenie is easy to configure in Prometheus, so let’s go with it.

I set up a free account for myself, set up a team with myself as the sole member so I get to be the one on call all the time, and configured an integration. I copied down the API key, and now I’m ready to configure Alertmanager to connect my alerts to Opsgenie.

Setting up Alertmanager

The last step is a tool running in my cluster to forward the alerts from Prometheus to Opsgenie. That tool is called Alertmanager, another piece of the Prometheus ecosystem. For this, I crafted my own setup from scratch. My Kubernetes configuration for Alertmanager looks like this:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-alertmanager-conf
  labels:
    name: prometheus-alertmanager-conf
  namespace: monitors
data:
  alertmanager.yml: |
    route:
      receiver: opsgenie

    receivers:
    - name: opsgenie
      opsgenie_configs:
      - api_key: SECRET
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: monitors
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: alertmanager
        software: prometheus
    spec:
      containers:
      - name: alertmanager
        image: prom/alertmanager:v0.18.0
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        ports:
        - containerPort: 9093
          name: alertmanager
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/alertmanager/
      volumes:
      - name: prometheus-config
        configMap:
          defaultMode: 420
          name: prometheus-alertmanager-conf
---
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitors
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'
spec:
  selector:
    app: alertmanager
  ports:
  - port: 8080
    targetPort: alertmanager

That’s it. I loaded that into Kubernetes. The Alertmanager is now ready for me to send alerts through it. We can test it real quick by setting up a port forward:

kubectl port-forward -n monitors service/alertmanager 8080

And then send it a test alert via curl like so:

curl -H "Content-Type: application/json" -d '[{"labels":{"alertname":"Test"}}]' localhost:8080/api/v1/alerts

About 5 minutes after running that curl command, my phone dings to let me know an alert has been received. (The delay is because I’ve left all the group delay and other defaults in place for now.)
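
If I ever want to tune that, the knobs live on the route in alertmanager.yml. Something like this is where I would start (untested values, just to show the shape):

route:
  receiver: opsgenie
  group_wait: 30s       # wait before sending the first notification for a new group
  group_interval: 5m    # wait before notifying about new alerts added to a group
  repeat_interval: 4h   # wait before re-sending a notification for a still-firing alert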

That’s pretty much it for Alertmanager. However, there’s one teensy little thing we need to do to Prometheus to complete the configuration. Prometheus needs to push alerts to Alertmanager. This is done via the following configuration in prometheus.yml:

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - 'alertmanager.monitors.svc:8080'

After all this, I can restart my Prometheus pod, and when one of my WordPress sites has downtime, my phone pages me within 10–20 minutes. That’s good enough for my little teeny sites. I tested by deliberately taking one of the sites offline.

Yay!

I hope that this information is useful to someone on the Internet. If not, it will end up being useful to me in 14 months when I next go to figure out what I did to set this up and what I need to remember when I next need to work with it.

Cheers.


  1. By being careful, I mean always keep WordPress and all plugins and themes patched and up to date and be very careful and conservative about which plugins and themes you install.  ↩

»

I am working on a robotics project with my 3 boys. For this project, I have designed the core components to be a Raspberry Pi, an Adafruit Circuit Playground Express (CPX), and an Adafruit Crickit Hat. The Raspberry Pi will be running firmware written by me to control the Crickit Hat. This firmware will directly drive the sensors, the motors, the servo, and other hardware. The Raspberry Pi also communicates over USB with the CPX to provide high-level sensor information and receive commands. My sons will be programming the CPX micro-controllers using Microsoft MakeCode, which provides a simple block programming interface for the younger boys and a TypeScript interface for the oldest one.

That’s all fine and relatively straightforward until we try to actually get things talking to each other. The first problem I ran into is that even though every board (as far as I am aware) that MakeCode works with features an on-board serial-to-USB chip, the MakeCode firmware opts not to use it. And yet, it supports serial communication. Instead of using USB as a serial device (I mean, USB is the Universal Serial Bus after all), they send serial data over a custom HID protocol they call HF2 (the HID Flashing Format, cute abbreviation, eh?). It supports other commands like resetting the firmware and flashing the firmware, basically the sort of stuff that an Arduino IDE does over USB with the help of the hardware chip. This is silliness in my opinion, but whatever. I will cope.

The coping involves creating a NativeCall wrapper for Perl 6 called Device::HIDAPI, which wraps the hidapi C library. This library can be used to access non-standard HID devices to send and receive HID reports. The same library is portable across Windows, Mac, and Linux. Whee! My main laptop is a Mac, and everything works fine when I communicate from my laptop to, say, an Xbox controller or the CPX for testing. Cool.

However, when I get ready to test it on Linux, either to run with Travis CI or to run on the Pi, I have a new problem. The library on Mac is named libhidapi.dylib, which translates to 'hidapi' in the Perl 6 NativeCall interface:

sub hidapi_init(--> int32) is native('hidapi') { * }

However, the library has a different name on Linux. In fact, it can have two possible names: the library is either libhidapi-libusb.so or libhidapi-hidraw.so, because there are two different implementations. This means I need the code above to be effectively:

sub hidapi_init(--> int32) is native('hidapi-libusb') { * }
# OR
sub hidapi_init(--> int32) is native('hidapi-hidraw') { * }

Obviously, I can’t do that. We could start working toward something workable by moving the name into a constant we can easily switch for all the subs:

constant HIDAPI = 'hidapi';
sub hidapi_init(--> int32) is native(HIDAPI) { * }

However, I do not want to modify that constant every time I switch between Mac and Linux. That’s a nonstarter. I most certainly don’t want to tell everyone installing it from the ecosystem that they have to fetch it, edit a file, and then build and install it. I’m not a hater.

As an avowed Linux nerd and shell programming geek, the thing I really want to do is something like this:

constant HIDAPI = %*ENV<HIDAPI_LIBRARY>; # WRONG!
sub hidapi_init(--> int32) is native(HIDAPI) { * }

However, while that might look sensible at first blush, it will definitely not work the way it appears to. What’s the problem? The is native trait will only be set at compile time (i.e., the point when zef install Device::HIDAPI is run). Yet, using an environment variable suggests that this is something that can be set on every run. It won’t be. This is bad news bears. Don’t do it.

It would be possible to use something like no precompilation to force a fresh build every time, but that means Rakudo is going to be regenerating the stubs and glue code for NativeCall and recompiling to MoarVM byte code every single time the library is used. That’s horrible. I don’t want to do that. I mean, really, setting an environment variable to choose a library name at runtime is a pretty ugly kludge in my opinion anyway. I won’t do it.

My solution is to add a Build.pm to the project. This is a somewhat undocumented feature for building a Perl 6 module (at least, I couldn’t find it on my last search of docs.perl6.org), but it is the correct way to introduce any compile-time setup your module requires when distributed through the Perl 6 ecosystem.

Basically, a Build.pm file defines a class named Build that has a method named build, which is called at build time. (Are you seeing a pattern here yet?)

use v6;

class Build {
    method build($workdir) {
       # do build-time stuff here
    }
}

The $workdir that is passed is the path to the directory the project is being built in. From there, your build method can modify the project in any way necessary to suit the needs of the project.

If you are familiar with autoconf, what comes next will feel familiar. To handle the hidapi case, I created three nearly identical Perl 6 scripts that each look something like this:

use v6;
use NativeCall;
sub hidapi_init(--> int32) is native('hidapi') { * }
hidapi_init();

I have one for each potential library name. Then I have code that iterates through each name and checks to see if the code runs without error:

    constant LIBS = <hidapi hidapi-hidraw hidapi-libusb>;

    # returns the libraries that run without error
    method try-libraries($workdir --> Seq) {
        gather for LIBS -> $try-lib {
            try {
                EVALFILE "$workdir/test/$try-lib.p6";

                # if we get here, the code didn't blow up
                take $try-lib;
            }
        }
    }

Then, I select the first library found and put it into an auto-generated config package I can reference in my main code:

    method build($workdir) {
        my $lib = self.try-libraries($workdir).head;
        mkdir "$workdir/lib/Device/HIDAPI";
        "$workdir/lib/Device/HIDAPI/Config.pm6".IO.spurt(qq:to/END_OF_CONFIG/);
            # DO NOT EDIT. This file is auto-generated.
            use v6;

            unit package Device::HIDAPI::Config;

            our constant \$HIDAPI = q[$lib];
            END_OF_CONFIG
    }

Now, whenever someone (probably mostly me) installs my module with zef install Device::HIDAPI, this build script will run, test to see which hidapi library is available, and create the configuration file. And with that, I have a portable build for my library that still works with precompilation.
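
Then the main module just loads the generated package and feeds the constant to is native, keeping the same pattern shown earlier. Roughly (the exact arrangement in Device::HIDAPI may differ):

use v6;
use NativeCall;
use Device::HIDAPI::Config;

# library name baked in at install time by Build.pm
constant HIDAPI = $Device::HIDAPI::Config::HIDAPI;
sub hidapi_init(--> int32) is native(HIDAPI) { * }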

Cheers.

»

For my first real post of my new blog, let’s talk about multi-stage Docker builds. This blog is built with the aid of just such a build. A multi-stage Docker build gives you the ability to build multiple container images from a single Dockerfile. In my case, it helps me build a single end product that’s uncluttered by extra build configuration and tooling.

Why multi-stage?

Docker holds a place in the modern development pipeline for deployed applications similar to the role the classic Makefile has in building packaged applications. (There’s a reason the Dockerfile and Makefile use a similar nomenclature, after all.) However, when building a deployed application, you often need to perform build tasks that create clutter. For example, if you’re building a C program, you’ll likely need to install the build tools (sudo apt-get install build-essential), and then when you run make you get all those intermediate .o files needed for linking, and maybe you need to install Perl or Python to help run some of the glue code, and so on.

Ideally, to save space, improve security, and just generally avoid having extra junk lying around, you want to avoid having all that flotsam and jetsam in your container images. You could try to be extra diligent, uninstall those things, and delete your files, but you probably won’t, and there’s no guarantee you’ll really be able to get things pristine again. Furthermore, due to the multi-layered nature of a Docker container image, those files you delete are still sort of there; they’ve just been hidden.

One way to rescue yourself from these problems is to use a multi-stage build in your Dockerfile. A multi-stage build helps clean up the clutter without having to perform rm commands or apt-get remove. It also prevents carrying around deleted files in hidden layers, because your final image can simply omit those layers when you use a multi-stage build.

How to multi-stage build?

A multi-stage build is really easy to do. First, make sure you have the latest version of Docker installed because this is a newer feature. (Also, security updates are a thing. Why would you run an older version of Docker!?) Second, you need to have multiple FROM commands in your Dockerfile. Each FROM starts what amounts to a new container image:

FROM debian:latest AS builder

RUN apt-get update && apt-get install -y git build-essential cmake
COPY /c-src /scratch

WORKDIR /scratch
RUN make

FROM rakudo-star:latest AS release 

COPY --from=builder /scratch/cool-program /usr/bin/cool-program

COPY /p6-src /app
ENTRYPOINT ["perl6", "/app/bin/cooler-program", "--cool-program-path=/usr/bin/cool-program"]

This Dockerfile will now build two separate container images, the second of which will contain a file built in the first. All the extra source code and junk added to build cool-program is only in the builder image and left out of the final image. This leaves your cooler-program image completely free of all that unnecessary clutter.

If you build this and push the release image to Docker Hub, only the build system will have the layers for the builder image. You can, if you want, push the other images in a multi-stage build, but for this example, I probably wouldn’t. Only the release is important.
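
If you do want to build or push one of the intermediate stages on its own, Docker lets you stop at a named stage with --target. For example (the tags here are made up):

docker build --target builder -t cool-program:builder .
docker build -t cooler-program:latest .   # builds through the final stage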

This Web Site

I am writing this because this web site is itself built from just such a Dockerfile, a three-stage one:

  1. Stage 1: Build MultiMarkdown from source.
  2. Stage 2: Install the Perl 6 code for the static site generator and all dependencies.
  3. Stage 3: Copy the built bits to the final result for deployment.

This demonstrates how to cleanly build a Perl 6 installation without keeping the original source around.

Regarding Perl 6 Builds

If you are doing this with Perl 6, the following knowledge may save you some time: when you use zef to do your build on a rakudo-star base container, all the build files go into the directory named /usr/share/perl6/site. You can safely copy that directory from one rakudo-star container to another, and you will have everything you need.

Cheers.

P.S. Here’s the source for this Dockerfile as of this writing. The above example is shorter, but I haven’t actually run that one. This one should be working because I copied it straight from the project into here.

FROM debian:latest AS multimarkdown-builder

RUN apt-get update -y
RUN apt-get install -y git build-essential cmake

RUN mkdir /scratch
WORKDIR /scratch

RUN git clone --recursive https://github.com/fletcher/MultiMarkdown-6.git multimarkdown \
    && cd multimarkdown \
    && make release \
    && cd build \
    && make

FROM rakudo-star:latest AS zostay-builder

RUN zef update

COPY . /app

WORKDIR /app

RUN zef install .

FROM rakudo-star:latest AS zostay

COPY --from=multimarkdown-builder \
    /scratch/multimarkdown/build/multimarkdown \
    /usr/bin/multimarkdown

COPY --from=zostay-builder \
    /usr/share/perl6/site \
    /usr/share/perl6/site 

VOLUME /src
VOLUME /dst

ENTRYPOINT ["/usr/share/perl6/site/bin/zostay"]
CMD ["build-loop", "/src", "/dst"]

»

So here’s the story. For most of my adult life I have had a blog of some kind. I like to write. It’s a great way to blow off steam. I also like to brag about things I’m doing or learning about. It’s a pride issue, but I like to think I do interesting things others would be interested in too.

My last blog atrophied as such things do. My posts had gotten short enough to be in Facebook post range, so I’ve just been posting there instead. However, Facebook doesn’t really have the right audience for the writing I want to do at this time, so I’m starting a blog up again.

This blog is going to be laser-focused on the various O(fun) projects I’m working on. These will generally involve programming in Perl 6, electronics, micro-controllers, and 3D printing. It may also involve things about home improvement, programming in other languages, or other technical type things I’m learning about and find interesting.

In the past, I have often blogged about philosophy, religion, and politics, but the current cultural environment tends to erase my point of view on these things. A public blog on the Internet is no longer a safe space for me to have such discussions.

For the time being, there will be no comments section, but if you want to see one added, tell me in the comments. Hur hur. (I may experiment with adding comments at some point, but I’m more likely to start a subreddit or encourage discussion on Twitter or elsewhere instead.)

Anyway, that’s the intro.

Cheers.