Christine Dodrill's Blog

My blog posts and rants about various technology things.

A feed by Christine Dodrill


New PGP Key Fingerprint

Permalink - Posted on 2021-01-15 00:00

New PGP Key Fingerprint

This morning I got an encrypted email, and in the process of trying to decrypt it I discovered that I had lost my PGP key. I have no idea how I lost it. As such, I have created a new PGP key and replaced the one on my website with it. I did the replacement in this commit, which you can see is verified with a subkey of my new key.

My new PGP key ID is 803C 935A E118 A224. The key with the ID 799F 9134 8118 1111 should not be used anymore. Here are all the subkey fingerprints:

Signature key ....: 378E BFC6 3D79 B49D 8C36  448C 803C 935A E118 A224
      created ....: 2021-01-15 13:04:28
Encryption key....: 8C61 7F30 F331 D21B 5517  6478 8C5C 9BC7 0FC2 511E
      created ....: 2021-01-15 13:04:28
Authentication key: 7BF7 E531 ABA3 7F77 FD17  8F72 CE17 781B F55D E945
      created ....: 2021-01-15 13:06:20
General key info..: pub  rsa2048/803C935AE118A224 2021-01-15 Christine Dodrill (Yubikey) <me@christine.website>
sec>  rsa2048/803C935AE118A224  created: 2021-01-15  expires: 2031-01-13
                                card-no: 0006 03646872
ssb>  rsa2048/8C5C9BC70FC2511E  created: 2021-01-15  expires: 2031-01-13
                                card-no: 0006 03646872
ssb>  rsa2048/CE17781BF55DE945  created: 2021-01-15  expires: 2031-01-13
                                card-no: 0006 03646872

I don't really know the proper way to go about revoking an old PGP key. It probably doesn't help that I don't use PGP very often. I think this is the first encrypted email I've gotten in a year.
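
For reference, the usual process looks roughly like the sketch below, assuming you still have the old secret key or a revocation certificate generated ahead of time (which is exactly what a lost key makes awkward). It uses the old key ID from above and keys.openpgp.org as one possible keyserver:

# Generate a revocation certificate for the old key (needs the old secret key).
gpg --output revoke.asc --gen-revoke 799F913481181111

# Import it locally so the key shows up as revoked.
gpg --import revoke.asc

# Publish the revocation so others can pick it up.
gpg --keyserver keys.openpgp.org --send-keys 799F913481181111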

Let's hope that I don't lose this key as easily!


Site Update: RSS Bandwidth Fixes

Permalink - Posted on 2021-01-14 00:00

Site Update: RSS Bandwidth Fixes

Well, so I think I found out where my Kubernetes cluster cost came from. For context, this blog gets a lot of traffic. Since the last deploy, my blog has served its RSS feed over 19,000 times. I have some pretty naive code powering the RSS feed. It basically looked something like this:

  • Write RSS feed content-type and beginning of feed
  • For every post I have ever made, include its metadata and content
  • Write end of RSS feed

This code was fantastically simple to develop, however it was very expensive in terms of bandwidth. When you add all this up, my RSS feed used to be more than a one megabyte response. It was also only getting larger as I posted more content.

This is unsustainable, so I have taken multiple actions to try and fix this from several angles.

Mara: Yes, that graph is showing in gigabytes. We're so lucky that bandwidth is free on Hetzner.

First I finally set up the site to run behind Cloudflare. The Cloudflare settings are set very permissively, so your RSS feed reading bots or whatever should NOT be affected by this change. If you run into any side effects as a result of this change, contact me and I can fix it.

Second, I also now set cache control headers on every response. By default the "static" pages are cached for a day and the "dynamic" pages are cached for 5 minutes. This should allow new posts to show up quickly as they have previously.

Thirdly, I set up ETags for the feeds. Each of my feeds will send an ETag in a response header. Please use this tag in future requests to ensure that you don't ask for content you already have. From what I recall most RSS readers should already support this, however I'll monitor the situation as reality demands.
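
If you want to check this from the feed reader side, a conditional request looks something like the sketch below. The feed path and the ETag value here are placeholders; use whatever your last response actually returned:

# Grab only the headers and note the ETag (and Cache-Control) values.
curl -sI https://christine.website/blog.rss | grep -iE 'etag|cache-control'

# Send the tag back on the next poll; a 304 Not Modified means you can skip the download.
curl -sI -H 'If-None-Match: "<etag-from-last-response>"' https://christine.website/blog.rss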

Lastly, I adjusted the ttl of the RSS feed so that compliant feed readers should only check once per day. I've seen some feed readers request the feed up to every 5 minutes, which is very excessive. Hopefully this setting will gently nudge them into behaving.

As a nice side effect I should have slightly lower ram usage on the blog server too! Right now it's sitting at about 58 and a half MB of ram, however with fewer copies of my posts sitting in memory this should fall by a significant amount.

If you have any feedback about this, please contact me or mention me on Twitter. I read my email frequently and am notified about Twitter mentions very quickly.


How to Set Up Borg Backup on NixOS

Permalink - Posted on 2021-01-09 00:00

How to Set Up Borg Backup on NixOS

Borg Backup is an encrypted, compressed, deduplicated backup program for multiple platforms including Linux. This combined with the NixOS options for configuring Borg Backup allows you to back up on a schedule and restore from those backups when you need to.

Borg Backup works with local files and remote servers, and there are even cloud hosts that specialize in hosting your backups. In this post we will cover how to set up a backup job on a server using BorgBase's free tier to host the backup files.

Setup

You will need a few things:

  • A free BorgBase account
  • A server running NixOS
  • A list of folders to back up
  • A list of folders to NOT back up

First, we will need to create a SSH key for root to use when connecting to BorgBase. Open a shell as root on the server and make a borgbackup folder in root's home directory:

mkdir borgbackup
cd borgbackup

Then create a SSH key that will be used to connect to BorgBase:

ssh-keygen -f ssh_key -t ed25519 -C "Borg Backup"

Leave the SSH key passphrase empty, because at the time of writing the automated Borg Backup job doesn't support password-protected SSH keys.
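
If you would rather not get prompted at all, ssh-keygen can be told up front to create the key with an empty passphrase; a sketch of the same command with the extra flag:

ssh-keygen -f ssh_key -t ed25519 -N "" -C "Borg Backup"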

Now we need to create an encryption passphrase for the backup repository. Run this command to generate one using xkcdpass:

nix-shell -p python39Packages.xkcdpass --run 'xkcdpass -n 12' > passphrase

Mara: You can do whatever you want to generate a suitable passphrase, however xkcdpass is proven to be more random than most other password generators.

BorgBase Setup

Now that we have the basic requirements out of the way, let's configure BorgBase to use that SSH key. In the BorgBase UI click on the Account tab in the upper right and open the SSH key management window. Click on Add Key and paste in the contents of ./ssh_key.pub. Name it after the hostname of the server you are working on. Click Add Key and then go back to the Repositories tab in the upper right.

Click New Repo and name it after the hostname of the server you are working on. Select the key you just created to have full access. Choose the region of the backup volume and then click Add Repository.

On the main page copy the repository path with the copy icon next to your repository in the list. You will need this below. Attempt to SSH into the backup repo in order to have ssh recognize the server's host key:

ssh -i ./ssh_key o6h6zl22@o6h6zl22.repo.borgbase.com

Then accept the host key and press control-c to terminate the SSH connection.

NixOS Configuration

In your configuration.nix file, add the following block:

services.borgbackup.jobs."borgbase" = {
  paths = [
    "/var/lib"
    "/srv"
    "/home"
  ];
  exclude = [
    # very large paths
    "/var/lib/docker"
    "/var/lib/systemd"
    "/var/lib/libvirt"
    
    # temporary files created by cargo and `go build`
    "**/target"
    "/home/*/go/bin"
    "/home/*/go/pkg"
  ];
  repo = "o6h6zl22@o6h6zl22.repo.borgbase.com:repo";
  encryption = {
    mode = "repokey-blake2";
    passCommand = "cat /root/borgbackup/passphrase";
  };
  environment.BORG_RSH = "ssh -i /root/borgbackup/ssh_key";
  compression = "auto,lzma";
  startAt = "daily";
};

Customize the paths and exclude lists to your needs. Once you are satisfied, rebuild your NixOS system using nixos-rebuild:

nixos-rebuild switch

And then you can fire off an initial backup job with this command:

systemctl start borgbackup-job-borgbase.service

Monitor the job with this command:

journalctl -fu borgbackup-job-borgbase.service

The first backup job will always take the longest to run. Every incremental backup after that will get smaller and smaller. By default, the system will create new backup snapshots every night at midnight local time.

Restoring Files

To restore files, first figure out which point in time you want to restore from. NixOS includes a wrapper script for each Borg job you define. You can mount your backup archive using this command:

mkdir mount
borg-job-borgbase mount o6h6zl22@o6h6zl22.repo.borgbase.com:repo ./mount

Then you can explore the backup (and with it each incremental snapshot) to your heart's content and copy files out manually. You can look through each folder and copy out what you need.
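
If you want to see which snapshots exist before digging around, the same wrapper script should accept the usual borg subcommands, since it wraps plain borg with this job's repository and passphrase settings; a sketch:

borg-job-borgbase list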

When you are done you can unmount it with this command:

borg-job-borgbase umount /root/borgbase/mount

And that's it! You can get more fancy with nixops using a setup like this. In general though, you can get away with this setup. It may be a good idea to copy down the encryption passphrase onto paper and put it in a safe place like a safety deposit box.

For more information about Borg Backup on NixOS, see the relevant chapter of the NixOS manual or the list of borgbackup options that you can pick from.

I hope this is able to help.


hlang in 30 Seconds

Permalink - Posted on 2021-01-04 00:00

hlang in 30 Seconds

hlang (the h language) is a revolutionary new use of WebAssembly that enables single-paradigm programming without any pesky state or memory accessing. The simplest program you can write in hlang is the h world program:

h

When run in the hlang playground, you can see its output:

h

To get more output, separate multiple h's by spaces:

h h h h

This returns:

h
h
h
h

Internationalization

For internationalization concerns, hlang also supports the Lojbanic h '. You can mix h and ' to your heart's content:

' h '

This returns:

'
h
'

Finally an easy solution to your pesky Lojban internationalization problems!

Errors

For maximum understandability, compiler errors are provided in Lojban. For example this error tells you that you have an invalid character at the first character of the string:

h: gentoldra fi'o zvati fe li no

Here is an interlinear gloss of that error:

h: gentoldra     fi'o zvati  fe           li         no
   grammar-wrong existing-at second-place use-number 0

And now you are fully fluent in hlang, the most exciting programming language since sliced bread.


</kubernetes>

Permalink - Posted on 2021-01-03 00:00

</kubernetes>

Well, since I posted that last post I have had an adventure. A good friend pointed out a server host that I had missed when I was looking for other places to use, and now I have migrated my blog to this new server. As of yesterday, I now run my website on a dedicated server in Finland. Here is the story of my journey to migrate 6 years of cruft and technical debt to this new server.

Let's talk about this goliath of a server. This server is an AX41 from Hetzner. It has 64 GB of ram, a 512 GB nvme drive, three 2 TB drives, and a Ryzen 3600. For all practical concerns, this beast is beyond overkill and rivals my workstation tower in everything but the GPU power. I have named it lufta, which is the word for feather in L'ewa.

Assimilation

For my server setup process, the first step is to assimilate it. In this step I get a base NixOS install on it somehow. Since I was using Hetzner, I was able to boot into a NixOS install image using the process documented here. Then I decided that it would also be cool to have this server use zfs as its filesystem to take advantage of its legendary subvolume and snapshotting features.

So I wrote up a bootstrap system definition like the Hetzner tutorial said and ended up with hosts/lufta/bootstrap.nix:

{ pkgs, ... }:

{
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg9gYKVglnO2HQodSJt4z4mNrUSUiyJQ7b+J798bwD9 cadey@shachi"
  ];

  networking.usePredictableInterfaceNames = false;
  systemd.network = {
    enable = true;
    networks."eth0".extraConfig = ''
      [Match]
      Name = eth0
      [Network]
      # Add your own assigned ipv6 subnet here!
      Address = 2a01:4f9:3a:1a1c::/64
      Gateway = fe80::1
      # optionally you can do the same for ipv4 and disable DHCP (networking.dhcpcd.enable = false;)
      Address =  135.181.162.99/26
      Gateway = 135.181.162.65
    '';
  };

  boot.supportedFilesystems = [ "zfs" ];

  environment.systemPackages = with pkgs; [ wget vim zfs ];
}

Then I fired up the kexec tarball and waited for the server to boot into a NixOS live environment. A few minutes later I was in. I started formatting the drives according to the NixOS install guide with one major difference: I added a /boot ext4 partition on the SSD. This allows me to have the system root device on zfs. I added the disks to a raidz1 pool and created a few volumes. I also added the SSD as a log device so I get SSD caching.
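
The exact commands depend on your disk device names, but the pool setup looked roughly like the sketch below. The /dev/disk/by-id names and dataset layout here are made up for illustration; raidz1 spans the three spinning disks and the SSD partition is attached as a log device:

$ zpool create -O mountpoint=none rpool \
    raidz1 /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 /dev/disk/by-id/ata-disk3 \
    log /dev/disk/by-id/nvme-ssd-part3
$ zfs create -o mountpoint=legacy rpool/root
$ zfs create -o mountpoint=legacy rpool/home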

From there I installed NixOS as normal and rebooted the server. It booted normally. I had a shiny new NixOS server in the cloud! I noticed that the server had booted into NixOS unstable as opposed to NixOS 20.09 like my other nodes. I thought "ah, well, that probably isn't a problem" and continued to the configuration step.

Mara: That's ominous...

Configuration

Now that the server was assimilated and I could SSH into it, the next step was to configure it to run my services. While I was waiting for Hetzner to provision my server I ported a bunch of my services over to Nixops services a-la this post in this folder of my configs repo.

Now that I had them, it was time to add this server to my Nixops setup. So I opened the nixops definition folder and added the metadata for lufta. Then I added it to my Nixops deployment with this command:

$ nixops modify -d hexagone -n hexagone *.nix

Then I copied over the autogenerated config from lufta's /etc/nixos/ folder into hosts/lufta and ran a nixops deploy to add some other base configuration.

Migration

Once that was done, I started enabling my services and pushing configs to test them. After I got to a point where I thought things would work I opened up the Kubernetes console and started deleting deployments on my kubernetes cluster as I felt "safe" to migrate them over. Then I saw the deployments come back. I deleted them again and they came back again.

Oh, right. I enabled that one Kubernetes service that made it intentionally hard to delete deployments. One clever set of scale-downs and kills later and I was able to kill things with wild abandon.

I copied over the gitea data with rsync running in the kubernetes deployment. Then I killed the gitea deployment, updated DNS and reran a whole bunch of gitea jobs to resanify the environment. I did a test clone on a few of my repos and then I deleted the gitea volume from DigitalOcean.

Moving over the other deployments from Kubernetes into NixOS services was somewhat easy, however I did need to repackage a bunch of my programs and static sites for NixOS. I made the pkgs tree a bit more fleshed out to compensate.

Mara: Okay, packaging static sites in NixOS is beyond overkill, however a lot of them need some annoyingly complicated build steps and throwing it all into Nix means that we can make them reproducible and use one build system to rule them all. Not to mention that when I need to upgrade the system, everything will rebuild with new system libraries to avoid the Docker bitrot problem.

Reboot Test

After a significant portion of the services were moved over, I decided it was time to do the reboot test. I ran the reboot command and then...nothing. My continuous ping test was timing out. My phone was blowing up with downtime messages from NodePing. Yep, I messed something up.

I was able to boot the server back into a NixOS recovery environment using the kexec trick, and from there I was able to prove the following:

  • The zfs setup is healthy
  • I can read some of the data I migrated over
  • I can unmount and remount the ZFS volumes repeatedly

I was confused. This shouldn't be happening. After half an hour of troubleshooting, I gave in and ordered an IPKVM to be installed in my server.

Once that was set up (and I managed to trick MacOS into letting me boot a .jnlp web start file), I rebooted the server so I could see what error I was getting on boot. I missed it the first time around, but on the second time I was able to capture this screenshot:

The error I was looking for

Then it hit me. I did the install on NixOS unstable. My other servers use NixOS 20.09. I had downgraded zfs, and the older version of zfs couldn't mount the volume created by the newer version of zfs in read/write mode. One more trip to the recovery environment later, I installed NixOS unstable in a new generation.

Then I switched my tower's default NixOS channel to the unstable channel and ran nixops deploy to reactivate my services. After the NodePing uptime notifications came in, I ran the reboot test again while looking at the console output to be sure.

It booted. It worked. I had a stable setup. Then I reconnected to IRC and passed out.

Services Migrated

Here is a list of all of the services I have migrated over from my old dedicated server, my kubernetes cluster and my dokku server:

Doing this migration is a bit of an archaeology project as well. I was continuously discovering services that I had littered over my machines with very poorly documented requirements and configuration. I hope that this move will let the next time I do this kind of migration be a lot easier by comparison.

I still have a few other services to move over, however the ones that are left are much more annoying to set up properly. I'm going to get to deprovision 5 servers in this migration. As a result, I get this stupidly powerful goliath of a server to do whatever I want with, and I also get to cut my monthly server costs by over half.

I am very close to being able to turn off the Kubernetes cluster and use NixOS for everything. A few services that are still on the Kubernetes cluster are resistant to being nixified, so I may have to use the Docker containers for that. I was hoping to be able to cut out Docker entirely, however we don't seem to be that lucky yet.

Sure, there is some added latency with the server being in Europe instead of Montreal, however if this ever becomes a practical issue I can always launch a cheap DigitalOcean VPS in Toronto to act as a DNS server for my WireGuard setup.

Either way, I am now off Kubernetes for my highest traffic services. If services of mine need to use the disk, they can now just use the disk. If I really care about the data, I can add the service folders to the list of paths to back up to rsync.net (I have a post about how this backup process works in the drafting stage) via borgbackup.

Let's hope it stays online!


Many thanks to Graham Christensen, Dave Anderson and everyone else who has been helping me along this journey. I would be lost without them.


Kubernetes Pondering

Permalink - Posted on 2020-12-31 00:00

Kubernetes Pondering

Right now I am using a freight train to mail a letter when it comes to hosting my web applications. If you are reading this post on the day it comes out, then you are connected to one of a few replicas of my site code running across at least 3 machines in my Kubernetes cluster. This certainly works, however it is not very ergonomic and ends up being quite expensive.

I think I made a mistake when I decided to put my cards into Kubernetes for my personal setup. It made sense at the time (I was trying to learn Kubernetes and I am cursed into learning by doing), however I don't think it is really the best choice available for my needs. I am not a large company. I am a single person making things that are really targeted for myself. I would like to replace this setup with something more at my scale. Here are a few options I have been exploring combined with their pros and cons.

Here are the services I currently host on my Kubernetes cluster:

My goal in evaluating other options is to reduce cost and complexity. Kubernetes is a very complicated system and requires a lot of hand-holding and rejiggering to make it do what you want. NixOS, on the other hand, is a lot simpler overall and I would like to use it for running my services where I can.

Cost is a huge factor in this. My Kubernetes setup is a money pit. I want to prioritize cost reduction as much as possible.

Option 1: Do Nothing

I could do nothing about this and eat the complexity as a cost of having this website and those other services online. However over the year or so I've been using Kubernetes I've had to do a lot of hacking at it to get it to do what I want.

I set up the cluster using Terraform and Helm 2. Helm 3 is the current (backwards-incompatible) release, and all of the things that are managed by Helm 2 have resisted being upgraded to Helm 3.

I'm going to say something slightly controversial here, but YAML is a HORRIBLE format for configuration. I can't trust myself to write unambiguous YAML. I have to reference the spec constantly to make sure I don't have an accidental Norway/Ontario bug. I have a Dhall package that takes away most of the pain, however it's not flexible enough to describe the entire scope of what my services need to do (IE: pinging Google/Bing to update their indexes on each deploy), and I don't feel like putting in the time to make it that flexible.

Mara: This is the regex for determining what is a valid boolean value in YAML: y|Y|yes|Yes|YES|n|N|no|No|NO|true|True|TRUE|false|False|FALSE|on|On|ON|off|Off|OFF. This can bite you eventually. See the Norway Problem for more information.

I have a tor hidden service endpoint for a few of my services. I have to use an unmaintained tool to manage these on Kubernetes. It works today, but the Kubernetes operator API could change at any time (or the API this uses could be deprecated and removed without much warning) and leave me in the dust.

I could live with all of this, however I don't really think it's the best idea going forward. There's a bunch of services that I added on top of Kubernetes that are dangerous to upgrade and very difficult (if not impossible) to downgrade when something goes wrong during the upgrade.

One of the big things that I have with this setup that I would have to rebuild in NixOS is the continuous deployment setup. However I've done that before and it wouldn't really be that much of an issue to do it again.

NixOS fixes all the jank I mentioned above by making my specifications not have to include the version numbers of everything the system already provides. You can actually trust the package repos to have up to date packages. I don't have to go around and bump the versions of shims and pray they work, because with NixOS I don't need them anymore.

Option 2: NixOS on top of SoYouStart or Kimsufi

This is a doable option. The main problem here would be doing the provision step. SoYouStart and Kimsufi (both are offshoot/discount brands of OVH) have very little in terms of customization of machine config. They work best when you are using "normal" distributions like Ubuntu or CentOS and leave them be. I would want to run NixOS on it and would have to do several trial and error runs with a tool such as nixos-infect to assimilate the server into running NixOS.

With this option I would get the most storage out of any other option by far. 4 TB is a lot of space. However, SoYouStart and Kimsufi run decade-old hardware at best. I would end up paying a lot for very little in the CPU department. For most things I am sure this would be fine, however some of my services can have CPU needs that might exceed what second-generation Xeons can provide.

SoYouStart and Kimsufi have weird kernel versions though. The last SoYouStart dedi I used ran Fedora and was gimped with a grsec kernel by default. I had to end up writing this gem of a systemd service on boot which did a kexec to boot into a non-gimped kernel on boot. It was a huge hack and somehow worked every time. I was still afraid to reboot the machine though.

Sure is a lot of ram for the cost though.

Option 3: NixOS on top of Digital Ocean

This shares most of the problems as the SoYouStart or Kimsufi nodes. However, nixos-infect is known to have a higher success rate on Digital Ocean droplets. It would be really nice if Digital Ocean let you upload arbitrary ISO files and go from there, but that is apparently not the world we live in.

8 GB of ram would be way more than enough for what I am doing with these services.

Option 4: NixOS on top of Vultr

Vultr is probably my top pick for this. You can upload an arbitrary ISO file, kick off your VPS from it and install it like normal. I have a little shell server shared between some friends built on top of such a Vultr node. It works beautifully.

The fact that it has the same cost as the Digital Ocean droplet just adds to the perfection of this option.

Costs

Here is the cost table I've drawn up while comparing these options:

Option          Ram                 Disk                                   Cost per month  Hacks
Do nothing      6 GB (4 GB usable)  Not really usable, volumes cost extra  $60/month       Very Yes
SoYouStart      32 GB               2x2TB SAS                              $40/month       Yes
Kimsufi         32 GB               2x2TB SAS                              $35/month       Yes
Digital Ocean   8 GB                160 GB SSD                             $40/month       On provision
Vultr           8 GB                160 GB SSD                             $40/month       No

I think I am going to go with the Vultr option. I will need to modernize some of my services to support being deployed in NixOS in order to do this, however I think that I will end up creating a more robust setup in the process. At least I will create a setup that allows me to more easily maintain my own backups rather than just relying on DigitalOcean snapshots and praying like I do with the Kubernetes setup.

Thanks farcaller, Marbles, John Rinehart and others for reviewing this post prior to it being published.


Mara: Sh0rk of Justice: Version 1.0.0 Released

Permalink - Posted on 2020-12-28 00:00

Mara: Sh0rk of Justice: Version 1.0.0 Released

Over the long weekend I found out about a program called GB Studio. It's a simple drag-and-drop interface that you can use to make homebrew games for the Nintendo Game Boy. I was intrigued and I had some time, so I set out to make a little top-down adventure game. After a few days of tinkering I came up with an idea and created Mara: Sh0rk of Justice.

Mara: You made a game about me? :D

Guide Mara through the spooky dungeon in order to find all of its secrets. Seek out the secrets of the spooks! Defeat the evil mage! Solve the puzzles! Find the items of power! It's up to you to save us all, Mara!

You can play it in an <iframe> on itch.io!

Things I Learned

Game development is hard. Even with tools that help you do it, there's a limit to how much you can get done at once. Everything links together. You really need to test things both in isolation and as a cohesive whole.

I cannot compose music to save my life. I used free-to-use music assets from the GB Studio Community Assets pack to make this game. I think I managed to get everything acceptable.

GB Studio is rather inflexible. It feels like it's there to really help you get started from a template. Even though you can make the whole game from inside GB Studio, I probably should have ejected the engine to source code so I could customize some things like the jump button being weird in platforming sections.

Pixel art is an art of its own. I used a lot of free to use assets from itch.io for the tileset and a few NPC's. The rest I created myself using Aseprite. Getting Mara's walking animation to a point that I thought was acceptable was a chore. I found a nice compromise though.


Overall I'm happy with the result as a whole. Try it out, see how you like it and please do let me know what I can improve on for the future.


The Source Version 1.0.0 Release

Permalink - Posted on 2020-12-25 00:00

The Source Version 1.0.0 Release

After hours of work and adjustment, I have finally finished version 1 of my tabletop roleplaying game The Source. It is available on itch.io with an added 50% discount for readers of my blog. This discount will only last for the next two weeks.

Patrons (of any price tier) can claim a free copy here. Your support gives me so much.

Merry Christmas, all.


The 7th Edition

Permalink - Posted on 2020-12-19 00:00

The 7th Edition

You know what, fuck rules. Fuck systems. Fuck limitations. Let's dial the tabletop RPG system down to its roots. Let's throw out every stat but one: Awesomeness. When you try to do something that could fail, roll for Awesomeness. If your roll is more than your awesomeness stat, you win. If not, you lose. If you are or have something that would benefit you in that situation, roll for awesomeness twice and take the higher value.

No stats.
No counts.
No limits.
No gods.
No masters.
Just you and me and nature in the battlefield.

  • Want to shoot an arrow? Roll for awesomeness. You failed? You're out of ammo.
  • Want to defeat a goblin and you have a goblin-slaying-broadsword? Roll twice for awesomeness and take the higher value. You got a 20? That goblin was obliterated. Good job.
  • Want to pick up an item into your inventory? Roll for awesomeness. You got it? It's in your inventory.

Etc. Don't think too hard. Let a roll of the dice decide if you are unsure.

Base Awesomeness Stats

Here are some probably balanced awesomeness base stats depending on what kind of dice you are using:

  • 6-sided: 4 or 5
  • 8-sided: 5 or 6
  • 10-sided: 6 or 7
  • 12-sided: 7 or 8
  • 20-sided: anywhere from 11-13

Character Sheet Template

Here's an example character sheet:

Name:
Awesomeness:
Race:
Class:
Inventory:
 * 

That's it. You don't even need the race or class if you don't want to have it.

You can add more if you feel it is relevant for your character. If your character is a street brat that has experience with haggling, then fuck it, be the most street brattiest haggler you can. Try not to overload your sheet with information; this game is supposed to be simple. A sentence or two at most is good.

One Player is The World

The World is a character that other systems would call the Narrator, the Pathfinder, Dungeon Master or similar. Let's strip this down to the core of the matter. One player doesn't just dictate the world, they are the world.

The World also controls the monsters and non-player characters. In general, if you are in doubt as to who should roll for an event, The World does that roll.

Mixins/Mods

These are things you can do to make the base game even more tailored to your group. Whether you should do this depends highly on the needs and whims of your group in particular.

Mixin: Adjustable Awesomeness

So, one problem that could come up with this is that bad luck could make this not as fun. As a result, add these two rules in:

  • Every time you roll above your awesomeness, add 1 to your awesomeness stat
  • Every time you roll below your awesomeness, remove 1 from your awesomeness stat

This should add up so that luck would even out over time. Players that have less luck than usual will eventually get their awesomeness evened out so that luck will be in their favor.

Mixin: No Awesomeness

In this mod, rip out Awesomeness altogether. When two parties are at odds, they both roll dice. The one that rolls higher gets what they want. If they tie, both people get a little part of what they want. For extra fun do this with six-sided dice.

  • Monster wants to attack a player? The World and that player roll. If the player wins, they can choose to counterattack. If the monster wins, they do a wound or something.
  • One player wants to steal from another? Have them both roll to see what happens.

Use your imagination! Ask others if you are unsure!

Other Advice

This is not essential but it may help.

Monster Building

Okay so basically monsters fall into two categories: peons and bosses. Peons should be easy to defeat, usually requiring one action. Bosses may require more and might require more than pure damage to defeat. Get clever. Maybe require the players to drop a chandelier on the boss. Use the environment.

In general, peons should have a very high base awesomeness in order to do things they want. Bosses can vary based on your mood.

Adjustable awesomeness should affect monsters too.

Worldbuilding

Take a setting from somewhere and roll with it. You want to do a cyberpunk jaunt in Night City with a sword-wielding warlock, a succubus space marine, a bard netrunner and a shapeshifting monk? Do the hell out of that. That sounds awesome.

Don't worry about accuracy or the like. You are setting out to have fun.

Special Thanks

Special thanks goes to Jared, who sent out this tweet that inspired this document. In case the tweet gets deleted, here's what it said:

heres a d&d for you

you have one stat, its a saving throw. if you need to roll dice, you roll your save.

you have a class and some equipment and junk. if the thing you need to roll dice for is relevant to your class or equipment or whatever, roll your save with advantage.

oh your Save is 5 or something. if you do something awesome, raise your save by 1.

no hp, save vs death. no damage, save vs goblin. no tracking arrows, save vs running out of ammo.

thanks to @Axes_N_Orcs for this

What's So Cool About Save vs Death?

can you carry all that treasure and equipment? save vs gains

I replied:

Can you get more minimal than this?

He replied:

when two or more parties are at odds, all roll dice. highest result gets what they want.

hows that?

This document is really just this twitter exchange in more words so that people less familiar with tabletop games can understand it more easily. You know you have finished when there is nothing left to remove, not when you can add something to "fix" it.

I might put this on my itch.io page.


Plea to Twitter

Permalink - Posted on 2020-12-14 00:00

NOTE: This is a very different kind of post compared to what I usually write. If you or anyone you know works at Twitter, please link this to them. I am in a unique situation and the normal account recovery means do not work. If you work at Twitter and are reading this, my case number is [redacted].

EDIT(19:51 M12 14 2020): My account is back. Thank you anonymous Twitter support people. For everyone else, please take this as an example of how NOT to handle account issues. The fact that I had to complain loudly on Twitter to get this weird edge case taken care of is ludicrous. I'd gladly pay Twitter just to have a support mechanism that gets me an actual human without having to complain on Twitter.

Plea to Twitter

On Sunday, December 13, 2020, I noticed that I was locked out of my Twitter account. If you go to @theprincessxena today, you will see that the account is locked out for "unusual activity". I don't know what I did to cause this to happen (though I have a few theories) and I hope to explain them in the headings below. I have gotten no emails or contact from Twitter about this yet. I have a backup account at @CadeyRatio as a stopgap. I am also on mastodon as @cadey@mst3k.interlinked.me.

In place of my tweeting about quarantine life, I am writing about my experiences here.

Why I Can't Unlock My Account

I can't unlock my account the normal way because I forgot to set up two factor authentication and I also forgot to change the phone number registered with the account to my Canadian one when I moved to Canada. I remembered to do this change for all of the other accounts I use regularly except for my Twitter account.

In order to stop having to pay T-Mobile $70 per month, I transferred my phone number to Twilio. This combined with some clever code allowed me to gracefully migrate to my new Canadian number. Unfortunately, Twitter flat-out refuses to send authentication codes to Twilio numbers. It's probably to prevent spam, but it would be nice if there was an option to get the authentication code over a phone call.

Theory 1: International Travel

Recently I needed to travel internationally in order to start my new job at Tailscale. Due to an unfortunate series of events over two months, I needed to actually travel internationally to get a new visa. This led me to take a very boring trip to Minnesota for a week.

During that trip, I tweeted and fleeted about my travels. I took pictures and was in my hotel room a lot.

Mara: We can't dig up the link for obvious reasons, but one person said they were always able to tell when we are traveling because it turns the twitter account into a fast food blog.

I think Twitter may have locked out my account because I was suddenly in Minnesota after being in Canada for almost a year.

Theory 2: Misbehaving API Client

I use mi as part of my new blogpost announcement pipeline. One of the things mi does is submits new blogposts and some metadata about them to Twitter. I haven't been able to find any logs to confirm this, but if something messed up in a place that was unlogged somehow, it could have triggered some kind of anti-abuse pipeline.

Theory 3: NixOS Screenshot Set Off Some Bad Thing

One of my recent tweets that I can't find anymore is a tweet about a NixOS screenshot for my work machine. I think that some part of the algorithm somewhere really hated it, and thus triggered the account lock. I don't really understand how a screenshot of KDE 5 showing neofetch output could make my account get locked, but with enough distributed machine learning anything can happen.

Theory 4: My Password Got Cracked

I used a random password generated with iCloud for my Twitter password. Theoretically this could have been broken, but I doubt it.


Overall, I just want to be able to tweet again. Please spread this around for reach. I don't like using my blog to reach out like this, but I've been unable to find anyone that knows someone at Twitter so far and I feel this is the best way to broadcast it. I'll update this post with the resolution to this problem when I get one.

I think the International Travel theory is the most likely scenario. I just want a human to see this situation and help fix it.


Trisiel Update

Permalink - Posted on 2020-12-04 00:00

Trisiel Update

The project I formerly called wasmcloud has now been renamed to Trisiel after the discovery of a name conflict. The main domain for Trisiel is now https://trisiel.com to avoid any confusion between our two projects.

Planning for implementing and hosting Trisiel is still in progress. I will give more updates as they are ready to be released. To get more up-to-the-minute information, please follow the Twitter account @trisielcloud; I will be posting there as I have more information.

I am limitless. There is no cage or constraint that can corral me into one constant place. I am limitless. I can change, shift, overcome, transform, because I am not bound to a thing that serves me, and my body serves me.

Quantusum, James Mahu


Site Update: WebMention Support

Permalink - Posted on 2020-12-02 00:00

Site Update: WebMention Support

Recently in my Various Updates post I announced that my website had gotten WebMention support. Today I implemented WebMention integration into blog articles, allowing you to see where my articles are mentioned across the internet. This will not work with every single mention of my site, but if your publishing platform supports sending WebMentions, then you will see them show up on the next deploy of my site.

Thanks to the work of the folks at Bridgy, I have been able to also keep track of mentions of my content across Twitter, Reddit and Mastodon. My WebMention service will also attempt to resolve Bridgy mention links to their original sources as much as it can. Hopefully this should allow you to post my articles as normal across those networks and have those mentions be recorded without having to do anything else.

As I mentioned before, this is implemented on top of mi. mi receives mentions sent to https://mi.within.website/api/webmention/accept and returns a reference URL in the Location header. Fetching that URL returns JSON-formatted data about the mention. Here is an example:

$ curl https://mi.within.website/api/webmention/01ERGGEG7DCKRH3R7DH4BXZ6R9 | jq
{
  "id": "01ERGGEG7DCKRH3R7DH4BXZ6R9",
  "source_url": "https://maya.land/responses/2020/12/01/i-think-this-blog-post-might-have-been.html",
  "target_url": "https://christine.website/blog/toast-sandwich-recipe-2019-12-02",
  "title": null
}

This is all of the information I store about each WebMention. I am working on title detection (using the readability algorithm), however I am unable to run JavaScript on my scraper server. JavaScript-only content may not be scrapable like this.
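
If your publishing platform can't send WebMentions for you, you can also send one by hand. Per the WebMention spec it is a form-encoded POST with source and target URLs; a sketch (the source URL here is a placeholder for a page of yours that links to the target):

$ curl -i https://mi.within.website/api/webmention/accept \
    -d source=https://example.com/my-reply \
    -d target=https://christine.website/blog/toast-sandwich-recipe-2019-12-02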


Many thanks to Chris Aldrich for inspiring me to push this feature to the end. Any articles that don't have any WebMentions yet will link to the WebMention spec.

Be well.


Discord Webhooks via NixOS and Systemd Timers

Permalink - Posted on 2020-11-30 00:00

Discord Webhooks via NixOS and Systemd Timers

Recently I needed to set up a Discord message on a cronjob as a part of moderating a guild I've been in for years. I've done this before using cronjobs, however this time we will be using NixOS and systemd timers. Here's what you will need to follow along:

  • A machine running NixOS
  • A Discord account
  • A webhook configured for a channel
  • A message you want to send to Discord

Mara: If you don't have moderation permissions in any guilds, make your own for testing! You will need the "Manage Webhooks" permission to create a webhook.

Setting Up Timers

systemd timers are like cronjobs, except they trigger systemd services instead of shell commands. For this example, let's create a daily webhook reminder to check on your Animal Crossing island at 9 am.

Let's create the systemd service at the end of the machine's configuration.nix:

systemd.services.acnh-island-check-reminder = {
  serviceConfig.Type = "oneshot";
  script = ''
    MESSAGE="It's time to check on your island! Check those stonks!"
    WEBHOOK="${builtins.readFile /home/cadey/prefix/secrets/acnh-webhook-secret}"
    USERNAME="Domo"
    
    ${pkgs.curl}/bin/curl \
      -X POST \
      -F "content=$MESSAGE" \
      -F "username=$USERNAME" \
      "$WEBHOOK"
  '';
};

Mara: This service is a oneshot unit, meaning systemd will launch this once and not expect it to always stay running.

Now let's create a timer for this service. We need to do the following:

  • Associate the timer with that service
  • Assign a schedule to the timer

Add this to the end of your configuration.nix:

systemd.timers.acnh-island-check-reminder = {
  wantedBy = [ "timers.target" ];
  partOf = [ "acnh-island-check-reminder.service" ];
  timerConfig.OnCalendar = "TODO(Xe): this";
};

Before we mentioned that we want to trigger this reminder every morning at 9 am. systemd timers specify their calendar config in the following format:

DayOfWeek Year-Month-Day Hour:Minute:Second

So for something that triggers every day at 9 AM, it would look like this:

*-*-* 9:00:00

Mara: You can ignore the day of the week if it's not relevant!

So our final timer definition would look like this:

systemd.timers.acnh-island-check-reminder = {
  wantedBy = [ "timers.target" ];
  partOf = [ "acnh-island-check-reminder.service" ];
  timerConfig.OnCalendar = "*-*-* 9:00:00";
};

Deployment and Testing

Now we can deploy this with nixos-rebuild:

$ sudo nixos-rebuild switch

You should see a line that says something like this in the nixos-rebuild output:

starting the following units: acnh-island-check-reminder.timer
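
You can also confirm that the timer is loaded and see when it will next fire:

$ systemctl list-timers acnh-island-check-reminder.timer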

Let's test the service out using systemctl:

$ sudo systemctl start acnh-island-check-reminder.service

And you should then see a message on Discord. If you don't see a message, check the logs using journalctl:

$ journalctl -u acnh-island-check-reminder.service

If you see an error that looks like this:

curl: (26) Failed to open/read local data from file/application

This usually means that you tried to do a role or user mention at the beginning of the message and curl tried to interpret that as a file input. Add a word like "hey" at the beginning of the line to disable this behavior. See here for more information.
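
Another way around this, if you would rather not change the message, is curl's --form-string flag, which never treats a leading @ or < in the value as a file reference. A sketch of how the calls in the service script above could look with it:

${pkgs.curl}/bin/curl \
  -X POST \
  --form-string "content=$MESSAGE" \
  --form-string "username=$USERNAME" \
  "$WEBHOOK"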


Also happy December! My site has the snow CSS loaded for the month. Enjoy!


Scavenger Hunt Solution

Permalink - Posted on 2020-11-25 00:00

Scavenger Hunt Solution

On November 22, I sent a tweet that contained the following text:

#467662 #207768 #7A7A6C #6B2061 #6F6C20 #6D7079 
#7A6120 #616C7A #612E20 #5A6C6C #206F61 #61773A 
#2F2F6A #6C6168 #6A6C68 #752E6A #736269 #2F6462 
#796675 #612E6E #747020 #6D7679 #207476 #796C20 
#70756D #767974 #686170 #76752E

This was actually the first part of a scavenger hunt/mini CTF that I had set up in order to see who went down the rabbit hole to solve it. I've had nearly a dozen people report back to me telling me that they solved all of the puzzles and nearly all of them said they had a lot of fun. Here's how to solve each of the layers of the solution and how I created them.

Layer 1

The first layer was that encoded tweet. If you notice, everything in it is formatted as HTML color codes. HTML color codes just so happen to be encoded in hexadecimal. Looking at the codes you can see 20 come up a lot, which happens to be the hex-encoded symbol for the spacebar. So, let's turn this into a continuous hex string with s/#//g and s/ //g:

Mara: If you've seen a %20 in a URL before, that is the URL encoded form of the spacebar!

4676622077687A7A6C6B20616F6C206D7079
7A6120616C7A612E205A6C6C206F6161773A
2F2F6A6C61686A6C68752E6A7362692F6462
796675612E6E7470206D7679207476796C20
70756D76797468617076752E

And then turn it into an ASCII string:

Fvb whzzlk aol mpyza alza. Zll oaaw://jlahjlhu.jsbi/dbyfua.ntp mvy tvyl pumvythapvu.
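
For the record, the whole decode can be done in one shell pipeline; a sketch, where tweet.txt is a hypothetical file holding the raw tweet text:

$ sed 's/#//g' tweet.txt | tr -d ' \n' | xxd -r -p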

Mara: Wait, what? This doesn't look like much of anything...wait, look at the oaaw://. Could that be http://?

Indeed it is, my perceptive shark friend! Let's decode the rest of the string using the Caesar cipher:

You passed the first test. See http://cetacean.club/wurynt.gmi for more information.

Now we're onto something!

Layer 2

Opening http://cetacean.club/wurynt.gmi we see the following:

wurynt

a father of modern computing,
rejected by his kin,
for an unintentional sin,
creator of a machine to break
the cipher that this message is encoded in

bq cr di ej kw mt os px uz gh

VI 1 1 I 17 1 III 12 1

qghja xmbzc fmqsb vcpzc zosah tmmho whyph lvnjj mpdkf gbsjl tnxqf ktqia mwogp eidny awoxj ggjqz mbrcm tkmyd fogzt sqkga udmbw nmkhp jppqs xerqq gdsle zfxmq yfdfj kuauk nefdc jkwrs cirut wevji pumqt hrxjr sfioj nbcrc nvxny vrphc r

Correction for the last bit

gilmb egdcr sowab igtyq pbzgv gmlsq udftc mzhqz exbmx zaxth isghc hukhc zlrrk cixhb isokt vftwy rfdyl qenxa nljca kyoej wnbpf uprgc igywv qzuud hrxzw gnhuz kclku hefzk xtdpk tfjzu byfyi sqmel gweou acwsi ptpwv drhor ahcqd kpzde lguqt wutvk nqprx gmiad dfdcm dpiwb twegt hjzdf vbkwa qskmf osjtk tcxle mkbnv iqdbe oejsx lgqc

Mara: Hmm, "a father of computing", "rejected by his kin", "an unintentional sin", "creator of a machine to break a cipher"... could that mean Alan Turing? He made something to break the Enigma cipher and was rejected by the British government for being gay, right?

congr adula tions forfi gurin goutt hisen igmao famys teryy ouhav egott enfar
thert hanan yonee lseha sbefo rehel pmebr eakfr eefol lowth ewhit erabb ittom
araht tpyvz vgjiu ztkhf uhvjq roybx dswzz caiaq kgesk hutvx iplwa donio n

httpc olons lashs lashw hyvec torze dgamm ajayi ndigo ultra zedfi vetan gokil
ohalo fineu ltrah alove ctorj ayqui etrho omega yotta betax raysi xdonu tseve
nsupe rwhyz edzed canad aasia indig oasia twoqu ietki logam maeps ilons uperk
iloha loult rafou rtang ovect orsev ensix xrayi ndigo place limaw hyasi adelt
adoto nion

And here is where I messed up with this challenge. Enigma doesn't handle numbers. It was designed to encode the 26 letters of the Latin alphabet. If you look at the last bit of the output you can see onio n and o nion. This points you to a Tor hidden service, but because I messed this up the two hints point you at slightly wrong onion addresses (tor hidden service addresses usually have numbers in them). Once I realized this, I made a correction that just gives away the solution so people could move on to the next step.

Onwards to http://yvzvgjiuz5tkhfuhvjqroybx6d7swzzcaia2qkgeskhu4tv76xiplwad.onion/!

Layer 3

Open your tor browser and punch in the onion URL. You should get a page that looks like this:

Mara's Realm

This shows some confusing combinations of letters and some hexadecimal text. We'll get back to the hexadecimal text in a moment, but let's take a closer look at the letters. There is a hint here to search the plover dictionary. Plover is a tool that allows hobbyists to learn stenography to type at the rate of human speech. My moonlander has a layer for typing out stenography strokes, so let's enable it and type them out:

Follow the white rabbit

Go to/test. w a s m

Which we can reinterpret as:

Follow the white rabbit

Go to /test.wasm

Mara: The joke here is that many people seem to get stenography and steganography confused, so that's why there's stenography in this steganography challenge!

Going to /test.wasm we get a WebAssembly download. I've uploaded a copy to my blog's CDN here.

Layer 4

Going back to that hexadecimal text from above, we see that it says this:

go get tulpa.dev/cadey/hlang

This points to the source repo of hlang, which is a satirical "programming language" that can only print the letter h (or the lojbanic h ' for that sweet sweet internationalisation cred). Something odd about hlang is that it uses WebAssembly to execute all programs written in it (this helps it reach its "no sandboxing required" and "zero* dependencies" goals).

Let's decompile this WebAssembly file with wasm2wat

$ wasm2wat /data/test.wasm
<output too big, see https://git.io/Jkyli>

Looking at the decompilation we can see that it imports a host function h.h as the hlang documentation suggests and then constantly calls it a bunch of times:

(module
  (type (;0;) (func (param i32)))
  (type (;1;) (func))
  (import "h" "h" (func (;0;) (type 0)))
  (func (;1;) (type 1)
    i32.const 121
    call 0
    i32.const 111
    call 0
    i32.const 117
    call 0
  ; ...

There's a lot of 32 in the output. 32 is the base 10 version of 0x20, which is the space character in ASCII. Let's try to reformat the numbers to ascii characters and see what we get:

you made it, this is the end of the line however. writing all of this up takes a lot of time. if you made it this far, email me@christine.website to get your name entered into the hall of heroes. be well.

How I Implemented This

Each layer was designed independently and then I started building them together later.

One of the first steps was to create the website for Mara's Realm. I started by writing out all of the prose into a file called index.md and then I ran sw using Pandoc for markdown conversion.

Then I created the WebAssembly binary by locally hacking a copy of hlang to allow arbitrary strings. I stuck it in the source directory for the website and told sw to not try and render it as markdown.

Once I had the HTML source, I copied it to a machine on my network at /srv/http/marahunt using this command:

$ rsync \
    -avz \
    site.static/ \
    root@192.168.0.127:/srv/http/marahunt

And then I created a tor hidden service using the services.tor.hiddenServices options:

services.tor = {
  enable = true;

  hiddenServices = {
    "hunt" = {
      name = "hunt";
      version = 3;
      map = [{
        port = 80;
        toPort = 80;
      }];
    };
  };
};

Once I pushed this config to that server, I grabbed the hostname from /var/lib/tor/onion/hunt/hostname and set up an nginx virtualhost:

services.nginx = {
  virtualHosts."yvzvgjiuz5tkhfuhvjqroybx6d7swzzcaia2qkgeskhu4tv76xiplwad.onion" =
    {
      root = "/srv/http/marahunt";
    };
};

And then I pushed the config again and tested it with curl:

$ curl -H "Host: yvzvgjiuz5tkhfuhvjqroybx6d7swzzcaia2qkgeskhu4tv76xiplwad.onion" http://127.0.0.1 | grep title
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3043  100  3043    0     0  2971k      0 --:--:-- --:--:-- --:--:-- 2971k
<title>Mara's Realm</title>
.headerSubtitle { font-size: 0.6em; font-weight: normal; margin-left: 1em; }
<a href="index.html">Mara's Realm</a> <span class="headerSubtitle">sh0rk in the cloud</span>

Once I was satisfied with the HTML, I opened up an enigma encoder and started writing out the message congratulating the user for figuring out "this enigma of a mystery". I also included the onion URL (with the above mistake) in that message.

Then I started writing the wurynt page on my gemini server. wurynt was coined by blindly pressing 6 keys on my keyboard. I added a little poem about Alan Turing to give a hint that this was an enigma cipher and then copied the Enigma settings on the page just in case. It turned out that I was using the default settings for the Cryptee Enigma simulator, so this was not needed; however it was probably better to include them regardless.

This is where I messed up as I mentioned earlier. Once I realized my mistake in trying to encode the onion address twice, I decided it would be best to just give away the answer on the page, so I added the correct onion URL to the end of the enigma message so that it wouldn't break flow for people.

The final part was to write and encode the message that I would tweet out. I opened a scratch buffer and wrote out the "You passed the first test" line and then encoded it using the Caesar cipher and encoded the result of that into hex. After a lot of rejiggering and rewriting to make it have a multiple of 3 characters of text, I reformatted it as HTML color codes and tweeted it without context.

Feedback I Got

Some of the emails and twitter DM's I got had some useful and amusing feedback. Here's some of my favorites:

my favourite part was the opportunity to go down different various rabbit holes (I got to learn about stenography and WASM, which I'd never looked into!)

I want to sleep. It's 2 AM here, but a friend sent me the link an hour ago and I'm a cat, so the curiosity killed me.

That was a fun little game. Thanks for putting it together.

oh noooo this is going to nerd snipe me

I'm amused that you left the online enigma emulator on default settings.

I swear to god I'm gonna beach your orca ass

Improvements For Next Time

Next time I'd like to try and branch out from just using ascii. I'd like to throw other encodings into the game (maybe even have a stage written in EBCDIC formatted Esperanto or something crazy like that). I was also considering having some public/private key crypto in the mix to stretch people's skillsets.

Something I will definitely do next time is make sure that all of the layers are solvable. I really messed up with the enigma step and I had to unblock people by DMing them the answer. Always make sure your puzzles can be solved.

Hall of Heroes

(in no particular order)

  • Saphire Lattice
  • Open Skies
  • Tralomine
  • AstroSnail
  • Dominika
  • pbardera
  • Max Hollman
  • Vojtěch
  • [object Object]
  • Bytewave

Thank you for solving this! I'm happy this turned out so successfully. More to come in the future.

🙂


How to Setup Prometheus, Grafana and Loki on NixOS

Permalink - Posted on 2020-11-20 00:00

How to Setup Prometheus, Grafana and Loki on NixOS

When setting up services on your home network, sometimes you have questions along the lines of "how do I know that things are working?". In this blogpost we will go over a few tools that you can use to monitor and visualize your machine state so you can answer that. Specifically we are going to use the following tools to do this:

Let's get going!

Mara: Something to note: in here you might see domains using the .pele top-level domain. This domain will likely not be available on your home network. See this series on how to set up something similar for your home network. If you don't have such a setup, replace anything that ends in .pele with whatever you normally use for this.

Grafana

Grafana is a service that handles graphing and alerting. It also has some nice tools to create dashboards. Here we will be using it for a few main purposes:

  • Exploring what metrics are available
  • Reading system logs
  • Making graphs and dashboards
  • Creating alerts over metrics or lack of metrics

Let's configure Grafana on a machine. Open that machine's configuration.nix in an editor and add the following to it:

# hosts/chrysalis/configuration.nix
{ config, pkgs, ... }: {
  # grafana configuration
  services.grafana = {
    enable = true;
    domain = "grafana.pele";
    port = 2342;
    addr = "127.0.0.1";
  };
  
  # nginx reverse proxy
  services.nginx.virtualHosts.${config.services.grafana.domain} = {
    locations."/" = {
        proxyPass = "http://127.0.0.1:${toString config.services.grafana.port}";
        proxyWebsockets = true;
    };
  };
}

Mara is hacker

Mara

If you have a custom TLS Certificate Authority, you can set up HTTPS for this deployment. See here for an example of doing this. If this server is exposed to the internet, you can use a certificate from Let's Encrypt instead of your own Certificate Authority.

Then you will need to deploy it to your cluster with nixops deploy:

$ nixops deploy -d home

Now open the Grafana server in your browser at http://grafana.pele and login with the super secure default credentials of admin/admin. Grafana will ask you to change your password. Please change it to something other than admin.

This is all of the setup we will do with Grafana for now. We will come back to it later.

Prometheus

Prometheus was punished by the gods for giving the gift of knowledge to man. He was cast into the bowels of the earth and pecked by birds. - Oracle Turret, Portal 2

Prometheus is a service that reads metrics from other services, stores them and allows you to search and aggregate them. Let's add it to our configuration.nix file:

# hosts/chrysalis/configuration.nix
  services.prometheus = {
    enable = true;
    port = 9001;
  };

Now let's deploy this config to the cluster with nixops deploy:

$ nixops deploy -d home

And let's configure Grafana to read from Prometheus. Open Grafana and click on the gear on the left side of the page. The Data Sources tab should be active. If it is not active, click on Data Sources. Then click "add data source" and choose Prometheus. Set the URL to http://127.0.0.1:9001 (or whatever port you configured above) and leave everything else set to the default values. Click "Save & Test". If there is an error, be sure to check the port number.

The Grafana UI for adding a data source

Now let's start getting some data into Prometheus with the node exporter.

Node Exporter Setup

The Prometheus node exporter exposes a lot of information about a system, ranging from memory and disk usage to systemd service information. There are also some other collectors you can set up based on your individual setup, however we are going to enable only the node collector here.

In your configuration.nix, add an exporters block and configure the node exporter under services.prometheus:

# hosts/chrysalis/configuration.nix
  services.prometheus = {
    exporters = {
      node = {
        enable = true;
        enabledCollectors = [ "systemd" ];
        port = 9002;
      };
    };
  }

Now we need to configure Prometheus to read metrics from this exporter. In your configuration.nix, add a scrapeConfigs block under services.prometheus that points to the node exporter we configured just now:

# hosts/chrysalis/configuration.nix
  services.prometheus = {
    # ...
    
    scrapeConfigs = [
      {
        job_name = "chrysalis";
        static_configs = [{
          targets = [ "127.0.0.1:${toString config.services.prometheus.exporters.node.port}" ];
        }];
      }
    ];
    
    # ...
  }
  
  # ...

Mara is hacker

Mara

The complicated expression in the target above allows you to change the port of the node exporter and ensure that Prometheus will always be pointing at the right port!

Now we can deploy this to your cluster with nixops:

$ nixops deploy -d home

Open the Explore tab in Grafana and type in the following expression:

node_memory_MemFree_bytes

and hit shift-enter (or click the "Run Query" button in the upper left side of the screen). You should see a graph showing you the amount of ram that is free on the host, something like this:

A graph of the amount of system memory that is available on the host chrysalis

If you want to query other fields, type node_ into the search box and autocomplete will show what is available. For a full list of what is available, open the node exporter metrics route in your browser and look through it.

Grafana Dashboards

Now that we have all of this information about our machine, let's create a little dashboard for it and set up a few alerts.

Click on the plus icon on the left side of the Grafana UI to create a new dashboard. It will look something like this:

An empty dashboard in Grafana

In Grafana terminology, everything you see in a dashboard is inside a panel. Let's create a new panel to keep track of memory usage for our server. Click "Add New Panel" and you will get a screen that looks like this:

A Grafana panel configuration screen

Let's make this keep track of free memory. Write "Memory Free" in the panel title field on the right. Write the following query in the textbox next to the dropdown labeled "Metrics":

node_memory_MemFree_bytes

and set the legend to {{job}}. You should get a graph that looks something like this:

A populated graph

This will show you how much memory is free on each machine you are monitoring with Prometheus' node exporter. Now let's configure an alert for the amount of free memory being low (where "low" means less than 64 megabytes of ram free).

Hit save in the upper right corner of the Grafana UI and give your dashboard a name, such as "Home Cluster Status". Now open the "Memory Free" panel for editing (click on the name and then click "Edit"), click the "Alert" tab, and click the "Create Alert" button. Let's configure it to do the following:

  • Check if free memory gets below 64 megabytes (64000000 bytes)
  • Send the message "Running out of memory!" when the alert fires

You can do that with a configuration like this:

The above configuration input to the Grafana UI

Save the changes to apply this config.

Mara is hmm

Mara

Wait a minute. Where will this alert go to?

It will only show up on the alerts page:

The alerts page with memory free alerts configured

But we can add a notification channel to customize this. Click on the Notification Channels tab and then click "New Channel". It should look something like this:

Notification Channel configuration

You can send notifications to many services, but let's send one to Discord this time. Acquire a Discord webhook link from somewhere and paste it in the Webhook URL field. Name it something like "Discord". It may also be a good idea to make this the default notification channel using the "Default" checkbox under the Notification Settings, so that our existing alert will show up in Discord when the system runs out of memory.

You can configure other alerts like this so you can monitor any other node metrics you want.

Mara is hacker

Mara

You can also monitor for the lack of data on particular metrics. If something that should always be reported suddenly isn't reported, it may be a good indicator that a server went down. You can also add other services to your scrapeConfigs settings so you can monitor things that expose metrics to Prometheus at /metrics.

Now that we have metrics configured, let's enable Loki for logging.

Loki

Loki is a log aggregator created by the people behind Grafana. Here we will use it as a target for all system logs. Unfortunately, the Loki NixOS module is very basic at the moment, so we will need to configure it with our own custom yaml file. Create a file in your configuration.nix folder called loki-local-config.yaml and copy in the config from this gist.

Then enable Loki with your config in your configuration.nix file:

# hosts/chrysalis/configuration.nix
  services.loki = {
    enable = true;
    configFile = ./loki-local-config.yaml;
  };

Promtail is a tool made by the Loki team that sends logs into Loki. Create a file called promtail.yaml in the same folder as configuration.nix with the following contents:

server:
  http_listen_port: 28183
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://127.0.0.1:3100/loki/api/v1/push

scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
        host: chrysalis
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'

Now we can add promtail to your configuration.nix by creating a systemd service to run it with this snippet:

# hosts/chrysalis/configuration.nix
  systemd.services.promtail = {
    description = "Promtail service for Loki";
    wantedBy = [ "multi-user.target" ];

    serviceConfig = {
      ExecStart = ''
        ${pkgs.grafana-loki}/bin/promtail --config.file ${./promtail.yaml}
      '';
    };
  };

Now that you have this all set up, you can push this to your cluster with nixops:

$ nixops deploy -d home

Once that finishes, open up Grafana and configure a new Loki data source with the URL http://127.0.0.1:3100:

Loki Data Source configuration

Now that you have Loki set up, let's query it! Open the Explore view in Grafana again, choose Loki as the source, and enter in the query {job="systemd-journal"}:

Loki search

Mara is hacker

Mara

You can also add Loki queries like this to dashboards! Loki also lets you query by systemd unit with the unit field. If you wanted to search for logs from foo.service, you would need a query that looks something like {job="systemd-journal", unit="foo.service"}. You can do many more complicated things with Loki. Look here for more information on what you can query. As of the time of writing this blogpost, you are unable to make Grafana alerts based on Loki queries as far as I am aware.


This barely scrapes the surface of what you can accomplish with a setup like this. Using more fancy setups you can alert on the rate of metrics changing. I plan to make NixOS modules to make this setup easier in the future. There is also a set of options in services.grafana.provision that can make it easier to automagically set up Grafana with per-host dashboards, alerts and all of the data sources that are outlined in this post.

The setup in this post is quite meager, but it should be enough to get you started with whatever you need to monitor. Adding Prometheus metrics to your services will go a long way in terms of being able to better monitor things in production, so do not be afraid to experiment!


Various Updates

Permalink - Posted on 2020-11-18 00:00

Various Updates

Immigration purgatory is an experience. It's got a lot of waiting and there is a lot of uncertainty that can make it feel stressful. Like I said before, I'm not concerned; however I have a lot of free time on my hands and I've been using it to make some plans for the blog (and a new offering for companies that need help dealing with the new Docker Hub rate limits) in the future. I'm gonna outline them below in their own sections. This blogpost was originally about 4 separate blogposts that I started and abandoned because I had trouble focusing on finishing them. Stress sucks lol.

WebMention Support

I recently deployed mi v1.0.0 to my home cluster. mi is a service that handles a lot of personal API tasks including the automatic post notifications to Twitter and Mastodon. The old implementation was in Go and stored its data in RethinkDB. I also have a snazzy frontend in Elm for mi. This new version is rewritten from scratch to use Rust, Rocket and SQLite. It is also fully nixified and is deployed to my home cluster via a NixOS module.

One of the major new features I have in this rewrite is WebMention support. WebMentions allow compatible websites to "mention" my articles or other pages on my main domains by sending a specially formatted HTTP request to mi. I am still in the early stages of integrating mi into my site code, but eventually I hope to have a list of places that articles are mentioned in each post. The WebMention endpoint for my site is https://mi.within.website/api/webmention/accept. I have added WebMention metadata into the HTML source of the blog pages as well as in the Link header as the W3 spec demands.
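
For reference, the sending half of the protocol is tiny: the sender POSTs a form-encoded source and target to the receiver's advertised endpoint. Here is a minimal sketch of a sender in Rust, assuming the reqwest crate with its blocking feature; this is illustrative rather than the actual mi client code, and the source/target URLs are placeholders:

// Send a WebMention: a form-encoded POST with `source` and `target`.
// Assumes reqwest = { version = "0.10", features = ["blocking"] } (an assumption, not from the post).
fn send_webmention(endpoint: &str, source: &str, target: &str) -> Result<(), reqwest::Error> {
    let client = reqwest::blocking::Client::new();
    client
        .post(endpoint)
        .form(&[("source", source), ("target", target)])
        .send()?
        .error_for_status()?;
    Ok(())
}

fn main() -> Result<(), reqwest::Error> {
    send_webmention(
        "https://mi.within.website/api/webmention/accept",
        "https://example.com/my-reply", // hypothetical page that links to the target
        "https://christine.website/",   // hypothetical page being mentioned
    )
}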

If you encounter any issues with this feature, please let me know so I can get it fixed as soon as possible.

Thoughts on Elm as Used in mi

Elm is an interesting language for making single page applications. The old version of mi was the first time I had really ever used Elm for anything serious and after some research I settled on using elm-spa as a framework to smooth over some of the weirder parts of the language. elm-spa worked great at first. All of the pages were separated out into their own components and the routing setup was really intuitive (if a bit weird because of the magic involved). It's worked great for a few years and has been very low maintenance.

However when I was starting to implement the backend of mi in Rust, I tried to nixify the elm-spa frontend I made. This was a disaster. The magic that elm-spa relied on fell apart, and at the time I attempted it, nixifying the frontend was very difficult.

As a result I ended up rewriting the frontend in very very boring Elm using information from the Elm Guide and a lot of blogposts and help from the Elm slack. Overall this was a successful experiment and I can easily see this new frontend (which I have named sina as a compound toki pona pun) becoming a powerful tool for investigating and managing the data in mi.

Mara is hacker

Mara

Special thanks to malinoff, wolfadex, chadtech and mfeineis on the Elm slack for helping with the weird issues involved in getting a split model approach working.

Feel free to check out the code here. I may try to make an Elm frontend to my site for people that use the Progressive Web App support.

elm2nix

elm2nix is a very nice tool that lets you generate Nix definitions from Elm packages, however the template it uses is a bit out of date. To fix it you need to do the following:

$ elm2nix init > default.nix
$ elm2nix convert > elm-srcs.nix
$ elm2nix snapshot

Then open default.nix in your favorite text editor and change this:

      buildInputs = [ elmPackages.elm ]
        ++ lib.optional outputJavaScript nodePackages_10_x.uglify-js;

to this:

      buildInputs = [ elmPackages.elm ]
        ++ lib.optional outputJavaScript nodePackages.uglify-js;

and this:

            uglifyjs $out/${module}.${extension} --compress 'pure_funcs="F2,F3,F4,F5,F6,F7,F8,F9,A2,A3,A4,A5,A6,A7,A8,A9",pure_getters,keep_fargs=false,unsafe_comps,unsafe' \
                | uglifyjs --mangle --output=$out/${module}.min.${extension}

to this:

            uglifyjs $out/${module}.${extension} --compress 'pure_funcs="F2,F3,F4,F5,F6,F7,F8,F9,A2,A3,A4,A5,A6,A7,A8,A9",pure_getters,keep_fargs=false,unsafe_comps,unsafe' \
                | uglifyjs --mangle --output $out/${module}.min.${extension}

These issues should be fixed in the next release of elm2nix.

New Character in the Blog Cutouts

As I mentioned in the past, I am looking into developing out other characters for my blog. I am still in the early stages of designing this, but I think the next character in my blog is going to be an anthro snow leopard named Alicia. I want Alicia to be a beginner that is very new to computer programming and other topics, which would then make Mara into more of a teacher type. I may also introduce my own OC Cadey (the orca looking thing you can see here or in the favicon of my site) into the mix to reply to these questions in something more close to the Socratic method.

Some people have joked that the introduction of Mara turned my blog into a shark visual novel that teaches you things. This sounds hilarious to me, and I am looking into what it would take to make an actual visual novel on a page on my blog using Rust and WebAssembly. I am in very early planning stages for this, so don't expect this to come out any time soon.

Gergoplex Build

My Gergoplex kit finally came in yesterday, and I got to work soldering it up with some switches and applying the keycaps.

Me soldering the Gergoplex

A glory shot of the Gergoplex

I picked the Pro Red linear switches with a 35 gram spring in them (read: they need 35 grams of force to actuate, which is lighter than most switches) and typing on it is buttery smooth. The keycaps are a boring black, but they look nice on it.

Overall this kit (with the partial board, switches and keycaps) cost me about US$124 (not including shipping) with the costs looking something like this:

Name                        Count  Cost
Gergoplex Partial Kit       1      $70
Choc Pro Red 35g switches   4      $10
Keycaps (15)                3      $30
Braided interconnect cable  1      $7
Mini-USB cable              1      $7

I'd say this was a worthwhile experience. I haven't really soldered anything since I was in high school and it was fun to pick up the iron again and make something useful. If you are looking for a beginner soldering project, I can't recommend the Gergoplex enough.

I also picked up some extra switches and keycaps (prices not listed here) for a future project involving an eInk display. More on that when it is time.

Branch Conventions

You may have noticed that some of my projects have default branches named main and others have default branches named mara. This difference is very intentional. Repos with the default branch main generally contain code that is "stable", robust and reusable. Repos with the default branch mara are generally my experimental repos and the code in them may not be the most reusable across other projects. mi is a repo with a mara default branch because it is a very experimental thing. In the future I may promote it up to having a main branch, however for now it's less effort to keep things the way they are.

Docker Consulting

The new Docker Hub rate limits have thrown a wrench into many CI/CD setups and created uncertainty about how CI services will handle them. Many build pipelines implicitly trust the Docker Hub to be up and to serve the appropriate image so that your build can work. Many organizations use their own Docker registry (GHCR, AWS/Google Cloud image registries, Artifactory, etc.), however most image build definitions I've seen start out with something like this:

FROM golang:alpine

which will implicitly pull from the Docker Hub. This can lead to bad things.

If you would like to have a call with me for examining your process for building Docker images in CI and get a list of actionable suggestions for how to work around this, contact me so that we can discuss pricing and scheduling.

I have been using Docker for my entire professional career (going back to when Docker's public beta required you to recompile your kernel to enable cgroup support) and I can also discuss methods to make your Docker images as small as they can possibly get. My record smallest Docker image is 5 MB.

If either of these prospects interest you, please contact me so we can work something out.


Here's hoping that the immigration purgatory ends soon. I'm lucky enough to have enough cash built up that I can weather this jobless month. I've been using this time to work on personal projects (like mi and wasmcloud) and better myself. I've also done a little writing that I plan to release in the future after I clean it up.

In retrospect I probably should have done NaNoWriMo seeing that I basically will have the entire month of November jobless. I've had an idea for a while about someone that goes down the rabbit hole of mysticism and magick, but I may end up incorporating that into the visual novel project I mentioned in the Elm section.

Be well and stay safe out there. Wear a mask, stay at home.


Nixops Services on Your Home Network

Permalink - Posted on 2020-11-09 00:00

Nixops Services on Your Home Network

My homelab has a few NixOS machines. Right now they mostly run services inside Docker, because that has been what I have done for years. This works fine, but persistent state gets annoying*. NixOS has a tool called Nixops that allows you to push configurations to remote machines. I use this for managing my fleet of machines, and today I'm going to show you how to create service deployments with Nixops and push them to your servers.

Mara is hacker

Mara

Pedantically, Docker offers volumes to simplify this, but it is very easy to accidentally delete Docker volumes. Plain disk files like we are going to use today are a bit simpler than docker volumes, and thusly a bit harder to mess up.

Parts of a Service

For this example, let's deploy a chatbot. To make things easier, let's assume the following about this chatbot:

  • The chatbot has a git repo somewhere
  • The chatbot's git repo has a default.nix that builds the service and includes any supporting files it might need
  • The chatbot reads its configuration from environment variables which may contain secret values (API keys, etc.)
  • The chatbot stores any temporary files in its current working directory
  • The chatbot is "well-behaved" (for some definition of "well-behaved")

I will also need to assume that you have a git repo (or at least a folder) with all of your configuration similar to mine.

For this example I'm going to use withinbot as the service we will deploy via Nixops. withinbot is a chatbot that I use on my own Discord guild that does a number of vital functions including supplying amusing facts about printers:

     <Cadey~> ~printerfact
<Within[BOT]> @Cadey~ Printers, especially older printers, do get cancer. Many
              times this disease can be treated successfully

Mara is hacker

Mara

To get your own amusing facts about printers, see here or for using its API, call /fact. This API has no practical rate limits, but please don't test that.

Service Definition

We will need to do a few major things for defining this service:

  1. Add the bot code as a package
  2. Create a "services" folder for the service modules
  3. Create a user account for the service
  4. Set up a systemd unit for the service
  5. Configure the secrets using Nixops keys

Add the Code as a Package

In order for the program to be installed to the remote system, you need to tell the system how to import it. There's many ways to do this, but the cheezy way is to add the packages to nixpkgs.config.packageOverrides like this:

nixpkgs.config = {
  packageOverrides = pkgs: {
    within = {
      withinbot = import (builtins.fetchTarball 
        "https://github.com/Xe/withinbot/archive/main.tar.gz") { };
    };
  };
};

And now we can access it as pkgs.within.withinbot in the rest of our config.

Mara is hacker

Mara

In production circumstances you should probably use a fetcher that locks to a specific version using unique URLs and hashing, but this will work enough to get us off the ground in this example.

Create a "services" Folder

In your configuration folder, create a folder that you will use for these service definitions. I made mine in common/services. In that folder, create a default.nix with the following contents:

{ config, lib, ... }:

{
  imports = [ ./withinbot.nix ];

  users.groups.within = {};
}

The group listed here is optional, but I find that having a group like that can help you better share resources and files between services.

Now we need a folder for storing secrets. Let's create that under the services folder:

$ mkdir secrets

And let's also add a gitignore file so that we don't accidentally commit these secrets to the repo:

# common/services/secrets/.gitignore
*

Now we can put any secrets we want in the secrets folder without the risk of committing them to the git repo.

Service Manifest

Let's create withinbot.nix and set it up:

{ config, lib, pkgs, ... }:
with lib; {
  options.within.services.withinbot.enable =
    mkEnableOption "Activates Withinbot (the furryhole chatbot)";

  config = mkIf config.within.services.withinbot.enable {
    
  };
}

This sets up an option called within.services.withinbot.enable which will only add the service configuration if that option is set to true. This will allow us to define a lot of services that are available, but none of their config will be active unless they are explicitly enabled.

Now, let's create a user account for the service:

# ...
  config = ... {
    users.users.withinbot = {
      createHome = true;
      description = "github.com/Xe/withinbot";
      isSystemUser = true;
      group = "within";
      home = "/srv/within/withinbot";
      extraGroups = [ "keys" ];
    };
  };
# ...

This will create a user named withinbot with the home directory /srv/within/withinbot, the group within and also in the group keys so the withinbot user can read deployment secrets.

Now let's add the deployment secrets to the configuration:

# ...
  config = ... {
    users.users.withinbot = { ... };
    
    deployment.keys.withinbot = {
      text = builtins.readFile ./secrets/withinbot.env;
      user = "withinbot";
      group = "within";
      permissions = "0640";
    };
  };
# ...

Assuming you have the configuration at ./secrets/withinbot.env, this will register the secrets into /run/keys/withinbot and also create a systemd oneshot service named withinbot-key. This allows you to add the secret's existence as a condition for withinbot to run. However, Nixops puts these keys in /run, which by default is mounted using a temporary memory-only filesystem, meaning these keys will need to be re-added to machines when they are rebooted. Fortunately, nixops reboot will automatically add the keys back after the reboot succeeds.

Now that we have everything else we need, let's add the service configuration:

# ...
  config = ... {
    users.users.withinbot = { ... };
    deployment.keys.withinbot = { ... };
    
    systemd.services.withinbot = {
      wantedBy = [ "multi-user.target" ];
      after = [ "withinbot-key.service" ];
      wants = [ "withinbot-key.service" ];

      serviceConfig = {
        User = "withinbot";
        Group = "within";
        Restart = "on-failure"; # automatically restart the bot when it dies
        WorkingDirectory = "/srv/within/withinbot";
        RestartSec = "30s";
      };

      script = let withinbot = pkgs.within.withinbot;
      in ''
        # load the environment variables from /run/keys/withinbot
        export $(grep -v '^#' /run/keys/withinbot | xargs)
        # service-specific configuration
        export CAMPAIGN_FOLDER=${withinbot}/campaigns
        # kick off the chatbot
        exec ${withinbot}/bin/withinbot
      '';
    };
  };
# ...

This will create the systemd configuration for the service so that it starts on boot, waits to start until the secrets have been loaded into it, runs withinbot as its own user and in the within group, and throttles the service restart so that it doesn't incur Discord rate limits as easily. This will also put all withinbot logs in journald, meaning that you can manage and monitor this service like you would any other systemd service.

Deploying the Service

In your target server's configuration.nix file, add an import of your services directory:

{
  # ...
  imports = [
    # ...
    /home/cadey/code/nixos-configs/common/services
  ];
  # ...
}

And then enable the withinbot service:

{
  # ...
  within.services = {
    withinbot.enable = true;
  };
  # ...
}

Mara is hacker

Mara

Make that a block so you can enable multiple services at once like this!

Now you are free to deploy it to your network with nixops deploy:

$ nixops deploy -d hexagone

And then you can verify the service is up with systemctl status:

$ nixops ssh -d hexagone chrysalis -- systemctl status withinbot
● withinbot.service
     Loaded: loaded (/nix/store/7ab7jzycpcci4f5wjwhjx3al7xy85ka7-unit-withinbot.service/withinbot.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2020-11-09 09:51:51 EST; 2h 29min ago
   Main PID: 12295 (withinbot)
         IP: 0B in, 0B out
      Tasks: 13 (limit: 4915)
     Memory: 7.9M
        CPU: 4.456s
     CGroup: /system.slice/withinbot.service
             └─12295 /nix/store/qpq281hcb1grh4k5fm6ksky6w0981arp-withinbot-0.1.0/bin/withinbot

Nov 09 09:51:51 chrysalis systemd[1]: Started withinbot.service.

This basic template is enough to expand out to anything you would need and is what I am using for my own network. This should be generic enough for most of your needs. Check out the NixOS manual for more examples and things you can do with this. The Nixops manual is also a good read. It can also set up deployments with VirtualBox, libvirtd, AWS, Digital Ocean, and even Google Cloud.

The cloud is the limit! Be well.


ZSA Moonlander Review

Permalink - Posted on 2020-11-06 00:00

ZSA Moonlander Review

I am nowhere near qualified to review things objectively. Therefore this blogpost will mostly be about what I like about this keyboard. I plan to go into a fair bit of detail, however please do keep in mind that this is subjective as all hell. Also keep in mind that this is partially also going to be a review of my own keyboard layout too. I'm going to tackle this in a few parts that I will label with headings.

This review is NOT sponsored. I paid for this device with my own money. I have no influence pushing me either way on this keyboard.

a picture of the keyboard on my desk

Mara is hacker

Mara

That 3d printed brain is built from the 3D model that was made as a part of this blogpost.

tl;dr

I like the Moonlander. It gets out of my way and lets me focus on writing and code. I don't like how limited the Oryx configurator is, but the fact that I can build my own firmware from source and flash it to the keyboard on my own makes up for that. I think this was a purchase well worth making, but I can understand why others would disagree. I can easily see this device becoming a core part of my workflow for years to come.

Build Quality

The Moonlander is a solid keyboard. Once you set it up with the tenting legs and adjust the key cluster, the keyboard is rock solid. The only give I've noticed is because my desk mat is made of a rubber-like material. The construction of the keyboard is all plastic but there isn't any deck flex that I can tell. Compare this to cheaper laptops where the entire keyboard bends if you so much as touch the keys too hard.

The palmrests are detachable and when they are off it gives the keyboard a space-age vibe to it:

the left half of the keyboard without the palmrest attached

The palmrests feel very solid and fold up into the back of the keyboard for travel. However folding up the palmrest does mess up the tenting stability, so you can't fold in the palmrest and type very comfortably. This makes sense, though: the palmrest is made out of smooth plastic, so it feels nicer on the hands.

ZSA said that iPad compatibility is not guaranteed due to the fact that the iPad might not put out enough juice to run it, however in my testing with an iPad Pro 2018 (12", 512 GB storage) it works fine. The battery drains a little faster, but the Moonlander is a much more active keyboard than the smart keyboard so I can forgive this.

Switches

I've been using mechanical keyboards for years, but most of them have been clicky switches (such as cloned Cherry MX blues, actual legit Cherry MX blues and the awful Razer Green switches). This is my first real experience with Cherry MX brown switches. There are many other options when you are about to order a moonlander, but I figured Cherry MX browns would be a nice neutral choice.

The keyswitches are hot-swappable (no disassembly or soldering required), and changing out keyswitches DOES NOT void your warranty. I plan to look into Holy Pandas and Zilents V2 in the future. There is even a clever little tool in the box that makes it easy to change out keyswitches.

Overall, this has been one of the best typing experiences I have ever had. The noise is a little louder than I would have liked (please note that I tend to bottom out the keycaps as I type, so this may end up factoring into the noise I experience); but overall I really like it. It is far better than I have ever had with clicky switches.

Typing Feel

The Moonlander uses an ortholinear layout as opposed to the staggered layout that you find on most keyboards. This took some getting used to, but I have found that it is incredibly comfortable and natural to write on.

My Keymap

Each side of the keyboard has the following:

  • 20 alphanumeric keys (some are used for ;, ,, . and / like normal keyboards)
  • 12 freely assignable keys (useful for layer changes, arrow keys, symbols and modifiers)
  • 4 thumb keys

In total, this keyboard has 72 keys, making it about a 70% keyboard (assuming the math in my head is right).

My keymap uses all but two of these keys. The two keys I haven't figured out how to best use yet are the ones that I currently have the [ and ] keycaps on. Right now they are mapped to the left and right arrow keys. This was the default.

My keymap is organized into layers. In each of these subsections I will go into detail about what these layers are, what they do and how they help me. My keymap code is here and I have a limited view of it embedded below:

If you want to flash my layout to your Moonlander for some reason, you can find the firmware binary here. You can then flash this to your keyboard with Wally.

Base Layers

I have a few base layers that contain the main set of letters and numbers that I type. The main base layer is my Colemak layer. I have the keys arranged to a standard Colemak layout and it is currently the layer I type the fastest on. I have the RGB configured so that it is mostly pink with the homerow using a lighter shade of pink. The color codes come from my logo that you can see in the favicon or here for a larger version.

I also have a qwerty layer for gaming. Most games expect qwerty keyboards and this is an excellent stopgap to avoid having to rebind every game that I want to play. The left side of the keyboard is the active one with the controller board in it too, so I can unplug the other half of the keyboard and give my mouse a lot of room to roam.

Thanks to a friend of mine, I am also playing with Dvorak. I have not gotten far in Dvorak yet, but it is interesting to play with.

I'll cover the leader key in the section below dedicated to it, but the other major thing that I have is a colon key on my right hand thumb cluster. This has been a huge boon for programming. The colon key is typed a lot. Having it on the thumb cluster means that I can just reach down and hit it when I need to. This makes writing code in Go and Rust so much easier.

Symbol/Number Layer

If you look at the base layer keymap, you will see that I do not have square brackets mapped anywhere there. Yet I write code with it effortlessly. This is because of the symbol/number layer that I access with the lower right and lower left keys on the keyboard. I have it positioned there so I can roll my hand to the side and then unlock the symbols there. I have access to every major symbol needed for programming save < and > (which I can easily access on the base layer with the shift key). I also get a nav cluster and a number pad.

I also have dynamic macros on this layer which function kinda like vim macros. The only difference is that there's only two macros instead of many like vim. They are convenient though.

Media Layer

One of the cooler parts of the Moonlander is that it can act as a mouse. It is a very terrible mouse (understandably, mostly because the digital inputs of keypresses cannot match the analog precision of a mouse). This layer has an arrow key cluster too. I normally use the arrow keys along the bottom of the keyboard with my thumbs, but sometimes it can help to have a dedicated inverse T arrow cluster for things like old MS-DOS games.

I also have media control keys here. They aren't the most useful on my linux desktop, however when I plug it into my iPad they are amazing.

dwm Layer

I use dwm as my main window manager in Linux. dwm is entirely controlled using the keyboard. I have a dedicated keyboard layer to control dwm and send out its keyboard shortcuts. It's really nice and lets me get all of the advantages of my tiling setup without needing to hit weird keycombos.

Leader Macros

Leader macros are one of the killer features of my layout. I have a huge bank of them and use them to type out things that I type a lot. Most common git and Kubernetes commands are just a leader macro away.

The Go if err != nil macro that got me on /r/programmingcirclejerk twice is one of my leader macros, but I may end up promoting it to its own key if I keep getting so much use out of it (maybe one of the keys I don't use can become my if err != nil key). I'm sad that the threads got deleted (I love it when my content gets on there, it's one of my favorite subreddits), but such is life.

NixOS, the Moonlander and Colemak

When I got this keyboard, flashed the firmware and plugged it in, I noticed that my keyboard was sending weird inputs. It was rendering things that look like this:

The quick brown fox jumps over the lazy yellow dog.

into this:

Ghf qluce bpywk tyx nlm;r yvfp ghf iazj jfiiyw syd.

This is because I had configured my NixOS install to interpret the keyboard as if it was Colemak. However the keyboard is able to lie and sends out normal keycodes (even though I am typing them in Colemak) as if I was typing in qwerty. This double Colemak meant that a lot of messages and commands were completely unintelligible until I popped into my qwerty layer.

I quickly found the culprit in my config:

console.useXkbConfig = true;
services.xserver = {
  layout = "us";
  xkbVariant = "colemak";
  xkbOptions = "caps:escape";
};

This config told the X server to always interpret my keyboard as if it was Colemak, meaning that I needed to tell it not to. As a stopgap I commented this section of my config out and rebuilt my system.

X11 allows you to specify keyboard configuration for keyboards individually by device product/vendor names. The easiest way I know to get this information is to open a terminal, run dmesg -w to get a constant stream of kernel logs, unplug and plug the keyboard back in and see what the kernel reports:

[242718.024229] usb 1-2: USB disconnect, device number 8
[242948.272824] usb 1-2: new full-speed USB device number 9 using xhci_hcd
[242948.420895] usb 1-2: New USB device found, idVendor=3297, idProduct=1969, bcdDevice= 0.01
[242948.420896] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[242948.420897] usb 1-2: Product: Moonlander Mark I
[242948.420898] usb 1-2: Manufacturer: ZSA Technology Labs
[242948.420898] usb 1-2: SerialNumber: 0

The product is named Moonlander Mark I, which means we can match for it and tell X11 to not colemakify the keycodes using something like this:

Section "InputClass"
  Identifier "moonlander"
  MatchIsKeyboard "on"
  MatchProduct "Moonlander"
  Option "XkbLayout" "us"
  Option "XkbVariant" "basic"
EndSection

Mara is hacker

Mara

For more information on what you can do in an InputClass section, see here in the X11 documentation.

This configuration fragment can easily go in the normal X11 configuration folder, but doing it like this would mean that I would have to manually drop this file in on every system I want to colemakify. This does not scale and defeats the point of doing this in NixOS.

Thankfully NixOS has an option to solve this very problem. Using this module we can write something like this:

services.xserver = {
  layout = "us";
  xkbVariant = "colemak";
  xkbOptions = "caps:escape";

  inputClassSections = [
    ''
      Identifier "yubikey"
      MatchIsKeyboard "on"
      MatchProduct "Yubikey"
      Option "XkbLayout" "us"
      Option "XkbVariant" "basic"
    ''
    ''
      Identifier "moonlander"
      MatchIsKeyboard "on"
      MatchProduct "Moonlander"
      Option "XkbLayout" "us"
      Option "XkbVariant" "basic"
    ''
  ];
};

But this is NixOS, which allows us to go one step further and make the identifier and product matching strings configurable as well with our own NixOS options. Let's start by lifting all of the above config into its own module:

# Colemak.nix

{ config, lib, ... }: with lib; {
  options = {
    cadey.colemak = {
      enable = mkEnableOption "Enables colemak for the default X config";
    };
  };
  
  config = mkIf config.cadey.colemak.enable {
    services.xserver = {
      layout = "us";
      xkbVariant = "colemak";
      xkbOptions = "caps:escape";

      inputClassSections = [
        ''
          Identifier "yubikey"
          MatchIsKeyboard "on"
          MatchProduct "Yubikey"
          Option "XkbLayout" "us"
          Option "XkbVariant" "basic"

        ''
        ''
          Identifier "moonlander"
          MatchIsKeyboard "on"
          MatchProduct "Moonlander"
          Option "XkbLayout" "us"
          Option "XkbVariant" "basic"
        ''
      ];
    };
  };
}

Mara is hacker

Mara

This also has Yubikey inputs not get processed into Colemak so that Yubikey OTPs still work as expected. Keep in mind that a Yubikey in this mode pretends to be a keyboard, so without this configuration the OTP will be processed into Colemak. The Yubico verification service will not be able to understand OTPs that are typed out in Colemak.

Then we can turn the identifier and product values into options with mkOption and string interpolation:

# ...
    cadey.colemak = {
      enable = mkEnableOption "Enables Colemak for the default X config";
      ignore = {
        identifier = mkOption {
          type = types.str;
          description = "Keyboard input identifier to send raw keycodes for";
          default = "moonlander";
        };
        product = mkOption {
          type = types.str;
          description = "Keyboard input product to send raw keycodes for";
          default = "Moonlander";
        };
      };
    };
# ...
        ''
          Identifier "${config.cadey.colemak.ignore.identifier}"
          MatchIsKeyboard "on"
          MatchProduct "${config.cadey.colemak.ignore.product}"
          Option "XkbLayout" "us"
        ''
# ...

Adding this to the default load path and enabling it with cadey.colemak.enable = true; in my tower's configuration.nix gets all of this working.

This section was made possible thanks to help from Graham Christensen who seems to be in search of a job. If you are wanting someone on your team that is kind and more than willing to help make your team flourish, I highly suggest looking into putting him in your hiring pipeline. See here for contact information.

Oryx

Oryx is the configurator that ZSA created to allow people to create keymaps without needing to compile your own firmware or install the QMK toolchain.

Mara is hacker

Mara

QMK is the name of the firmware that the Moonlander (and a lot of other custom/split mechanical keyboards) use. It works on AVR and Arm processors.

For most people, Oryx should be sufficient. I actually started my keymap using Oryx and sorta outgrew it as I learned more about QMK. It would be nice if Oryx added leader key support, however this is more of an advanced feature so I understand why it doesn't have that.

Things I Don't Like

This keyboard isn't flawless, but it gets so many things right that this is mostly petty bickering at this point. I had to look hard to find these.

I would have liked having another thumb key for things like layer toggling. I can make do with what I have, but another key would have been nice. Maybe add a 1u key under the red shaped key?

At the point I ordered the Moonlander, I was unable to order a black keyboard with white keycaps. I am told that ZSA will be selling keycap sets as early as next year. When that happens I will be sure to order a white one so that I can have an orca vibe.

ZSA ships with UPS. Normally UPS is fine for me, but the driver that was slated to deliver it one day just didn't deliver it. I was able to get the keyboard eventually though. Contrary to their claims, the UPS website does NOT update instantly and is NOT the most up to date source of information about your package.

The cables aren't braided. I would have liked braided cables.

Like I said, these are really minor things, but it's all I can really come up with as far as downsides go.

Conclusion

Overall this keyboard is amazing. I would really suggest it to anyone that wants to be able to have control over their main tool and craft it towards their desires instead of making do with what some product manager somewhere decided what keys should do what. It's expensive at USD$350, but for the right kind of person this will be worth every penny. Your mileage may vary, but I like it.


Trisiel Progress: Rewritten in Rust

Permalink - Posted on 2020-10-31 00:00

Trisiel Progress: Rewritten in Rust

It's been a while since I had the last update for Trisiel. In that time I have gotten a lot done. As the title mentions I have completely rewritten Trisiel's entire stack in Rust. Part of the reason was for increased speed and the other part was to get better at Rust. I also wanted to experiment with running Rust in production and this has been an excellent way to do that.

Trisiel is going to have a few major parts:

  • The API (likely to be hosted at api.trisiel.com)
  • The Executor (likely to be hosted at run.trisiel.dev)
  • The Panel (likely to be hosted at panel.trisiel.com)
  • The command line tool trisiel
  • The Documentation site (likely to be hosted at docs.trisiel)

These parts will work together to implement a functions as a service platform.

Mara is hacker

Mara

The executor is on its own domain to prevent problems like this GitHub Pages vulnerability from 2013. It is on a .lgbt domain because LGBT rights are human rights.

I have also set up a landing page at trisiel.com and a twitter account at @trisielcloud. Right now these are placeholders. I wanted to register the domains before they were taken by anyone else.

Architecture

My previous attempt at Trisiel had more of a four tier webapp setup. The overall stack looked something like this:

  • Nginx in front of everything
  • The api server that did about everything
  • The executors that waited on message queues to run code and push results to the requester
  • Postgres
  • A message queue to communicate with the executors
  • IPFS to store WebAssembly modules

In simple testing, this works amazingly. The API server will send execution requests to the executors and everything will usually work out. However, the message queue I used was very "fire and forget" and had difficulties with multiple executors set up to listen on the queue. Additionally, the added indirection of needing to send the data around twice means that it would have difficulties scaling globally due to ingress and egress data costs. This model is solid and probably would have worked with some compression or other improvements like that, but overall I was not happy with it and decided to scrap it while I was porting the executor component to Rust. If you want to read the source code of this iteration of Trisiel, take a look here.

The new architecture of Trisiel looks something like this:

  • Nginx in front of everything
  • An API server that handles login with my gitea instance
  • The executor server that listens over https
  • Postgres
  • Backblaze B2 to store WebAssembly modules

The main change here is the fact that the executor listens over HTTPS, avoiding a lot of the overhead involved in running this on a message queue. It's also much simpler to implement and allows me to reuse a vast majority of the boilerplate that I developed for the Trisiel API server.

This new version of Trisiel is also built on top of Wasmer. Wasmer is a seriously fantastic library for this and getting up and running was absolutely trivial, even though I knew very little Rust when I was writing pa'i. I cannot recommend it enough if you ever want to execute WebAssembly on a server.

Roadmap

At this point, I can create new functions, upload them to the API server and then trigger them to be executed. The output of those functions is not yet returned to the user; I am working on ways to implement that. There is also very little accounting for what resources and system calls are used, however it does keep track of execution time. The executor also needs to have the request body of the client wired to the standard input of the underlying module, which will enable me to parse CGI replies from WebAssembly functions. This will allow you to host HTTP endpoints on Trisiel using the same code that powers this and this.
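
To make the CGI reply idea above concrete, here is a minimal sketch of what a guest program might emit, assuming the executor wires the HTTP request body to standard input and reads a CGI-style response (headers, a blank line, then the body) from standard output. This is my own illustration, not Trisiel's actual interface:

// A tiny guest program that emits a CGI-style reply on stdout.
// Illustrative only; the exact conventions Trisiel will use are still being designed.
use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    // The request body arrives on standard input.
    let mut body = String::new();
    io::stdin().read_to_string(&mut body)?;

    // A CGI reply is a block of headers, a blank line, then the response body.
    let stdout = io::stdout();
    let mut out = stdout.lock();
    writeln!(out, "Status: 200 OK")?;
    writeln!(out, "Content-Type: text/plain")?;
    writeln!(out)?;
    writeln!(out, "you sent {} bytes", body.len())?;
    Ok(())
}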

I also need to go in and completely refactor the olin crate and make the APIs much more ergonomic, not to mention make the HTTP client actually work again.

Then comes the documentation. Oh god there will be so much documentation. I will be drowning in documentation by the end of this.

I need to write the panel and command line tool for Trisiel. I want to write the panel in Elm and the command line tool in Rust.

There is basically zero validation for anything submitted to the Trisiel API. I will need to write validation in order to make it safer.

I may also explore enabling support for WASI in the future, but as I have stated before I do not believe that WASI works very well for the futuristic plan-9 inspired model I want to use on Trisiel.

Right now the executor shells out to pa'i, but I want to embed pa'i into the executor binary so there are fewer moving parts involved.

I also need to figure out what I should do with this project in general. It feels like it is close to being productizable, but I am in a very bad stage of my life to be able to jump in headfirst and build a company around this. Visa limitations also don't help here.

Things I Learned

Rocket is an absolutely fantastic web framework and I cannot recommend it enough. I am able to save so much time with Rocket and its slightly magic use of proc-macros. For an example, here is the entire source code of the /whoami route in the Trisiel API:

#[get("/whoami")]
#[instrument]
pub fn whoami(user: models::User) -> Json<models::User> {
    Json(user)
}

The FromRequest instance I have on my database user model allows me to inject the user associated with an API token purely based on the (validated against the database) claims associated with the JSON Web Token that the user uses for authentication. This then allows me to make API routes protected by simply putting the user model as an input to the handler function. It's magic and I love it.
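
For readers curious what such a guard looks like, here is a minimal sketch of a Rocket 0.4 FromRequest implementation. It is not the actual Trisiel code: the header name, the User shape and the lookup_token helper are stand-ins for the real JWT-and-database check:

use rocket::http::Status;
use rocket::request::{self, FromRequest, Request};
use rocket::Outcome;

pub struct User {
    pub id: i32,
}

// Hypothetical stand-in for validating the token and loading the user from Postgres.
fn lookup_token(token: &str) -> Option<User> {
    if token == "let-me-in" {
        Some(User { id: 1 })
    } else {
        None
    }
}

impl<'a, 'r> FromRequest<'a, 'r> for User {
    type Error = ();

    fn from_request(req: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
        match req.headers().get_one("Authorization").and_then(lookup_token) {
            Some(user) => Outcome::Success(user),
            None => Outcome::Failure((Status::Unauthorized, ())),
        }
    }
}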

Postgres lets you use triggers to automatically update updated_at fields for free. You just need a function that looks like this:

CREATE OR REPLACE FUNCTION trigger_set_timestamp()
  RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

And then you can make triggers for your tables like this:

CREATE TRIGGER set_timestamp_users
  BEFORE UPDATE ON users
  FOR EACH ROW
    EXECUTE PROCEDURE trigger_set_timestamp();

Every table in Trisiel uses this in order to make programming against the database easier.

The symbol/number layer on my Moonlander has been so good. It looks something like this:

And it makes using programming sigils so much easier. I don't have to stray far from the homerow to hit the most common ones. The only one that I still have to reach for is _, but I think I will bind that to the blank key under the ] key.

The best programming music is lofi hip hop radio - beats to study/relax to. Second best is Animal Crossing music. They both have this upbeat quality that makes the ideas melt into code and flow out of your hands.


Overall I'd say this is pretty good for a week of hacking while learning a new keyboard layout. I will do more in the future. I have plans. To read through the (admittedly kinda hacky/awful) code I've written this week, check out this git repo. If you have any feedback, please contact me. I will be happy to answer any questions.

As far as signups go, I am not accepting any signups at the moment. This is pre-alpha software. The abuse story will need to be figured out, but I am fairly sure it will end up being some kind of "pay or you can only run the precompiled example code in the documentation" with some kind of application process for the "free tier" of Trisiel. Of course, this is all theoretical and hinges on Trisiel actually being productizable; so who knows?

Be well.


Minicompiler: Lexing

Permalink - Posted on 2020-10-29 00:00

Minicompiler: Lexing

I've always wanted to make my own compiler. Compilers are an integral part of my day to day job and I use the fruits of them constantly. A while ago while I was browsing through the TempleOS source code I found MiniCompiler.HC in the ::/Demos/Lectures folder and I was a bit blown away. It implements a two-phase compiler from simple math expressions to AMD64 machine code (complete with bit-banging it to an array that the code later jumps to) and has a lot to teach about how compilers work. For those of you that don't have a TempleOS VM handy, here is a video of MiniCompiler.HC in action:

You put in a math expression, the compiler builds it and then spits out a bunch of assembly and runs it to return the result. In this series we are going to be creating an implementation of this compiler that targets WebAssembly. This compiler will be written in Rust and will use only the standard library for everything but the final bytecode compilation and execution phase. There is a lot going on here, so I expect this to be at least a three part series. The source code will be in Xe/minicompiler in case you want to read it in detail. Follow along and let's learn some Rust on the way!

Mara is hacker

Mara

Compilers for languages like C are built on top of the fundamentals here, but they are much more complicated.

Description of the Language

This language uses normal infix math expressions on whole numbers. Here are a few examples:

  • 2 + 2
  • 420 * 69
  • (34 + 23) / 38 - 42
  • (((34 + 21) / 5) - 12) * 348

Ideally we should be able to nest the parentheses as deep as we want without any issues.

Looking at these values we can notice a few patterns that will make parsing this a lot easier:

  • There seems to be only 4 major parts to this language:
    • numbers
    • math operators
    • open parentheses
    • close parentheses
  • All of the math operators act identically and take two arguments
  • Each program is one line long and ends at the end of the line

Let's turn this description into Rust code:

Bringing in Rust

Make a new project called minicompiler with a command that looks something like this:

$ cargo new minicompiler

This will create a folder called minicompiler and a file called src/main.rs. Open that file in your editor and copy the following into it:

// src/main.rs

/// Mathematical operations that our compiler can do.
#[derive(Debug, Eq, PartialEq)]
enum Op {
    Mul,
    Div,
    Add,
    Sub,
}

/// All of the possible tokens for the compiler, this limits the compiler
/// to simple math expressions.
#[derive(Debug, Eq, PartialEq)]
enum Token {
    EOF,
    Number(i32),
    Operation(Op),
    LeftParen,
    RightParen,
}

Mara is hacker

Mara

In compilers, "tokens" refer to the individual parts of the language you are working with. In this case every token represents every possible part of a program.

And then let's start a function that can turn a program string into a bunch of tokens:

// src/main.rs

fn lex(input: &str) -> Vec<Token> {
    todo!("implement this");
}

Mara is hmm

Mara

Wait, what do you do about bad input such as things that are not math expressions? Shouldn't this function be able to fail?

You're right! Let's make a little error type that represents bad input. For creativity's sake let's call it BadInput:

// src/main.rs

use std::error::Error;
use std::fmt;

/// The error that gets returned on bad input. This only tells the user that it's
/// wrong because debug information is out of scope here. Sorry.
#[derive(Debug, Eq, PartialEq)]
struct BadInput;

// Errors need to be displayable.
impl fmt::Display for BadInput {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "something in your input is bad, good luck")
    }
}

// The default Error implementation will do here.
impl Error for BadInput {}

And then let's adjust the type of lex() to compensate for this:

// src/main.rs

fn lex(input: &str) -> Result<Vec<Token>, BadInput> {
    todo!("implement this");
}

So now that we have the function type we want, let's start implementing lex() by setting up the result and a loop over the characters in the input string:

// src/main.rs

fn lex(input: &str) -> Result<Vec<Token>, BadInput> {
    let mut result: Vec<Token> = Vec::new();
    
    for character in input.chars() {
        todo!("implement this");
    }

    Ok(result)
}

Looking at the examples from earlier we can start writing some boilerplate to turn characters into tokens:

// src/main.rs

// ...

for character in input.chars() {
    match character {
        // Skip whitespace
        ' ' => continue,

        // Ending characters
        ';' | '\n' => {
            result.push(Token::EOF);
            break;
        }

        // Math operations
        '*' => result.push(Token::Operation(Op::Mul)),
        '/' => result.push(Token::Operation(Op::Div)),
        '+' => result.push(Token::Operation(Op::Add)),
        '-' => result.push(Token::Operation(Op::Sub)),

        // Parentheses
        '(' => result.push(Token::LeftParen),
        ')' => result.push(Token::RightParen),

        // Numbers
        '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' => {
            todo!("implement number parsing")
        }

        // Everything else is bad input
        _ => return Err(BadInput),
    }
}

// ...

Mara:

Ugh, you're writing Token:: and Op:: a lot. Is there a way to simplify that?

Yes! enum variants can be shortened to their names with a use statement like this:

// src/main.rs

// ...

use Op::*;
use Token::*;

match character {
    // ...

    // Math operations
    '*' => result.push(Operation(Mul)),
    '/' => result.push(Operation(Div)),
    '+' => result.push(Operation(Add)),
    '-' => result.push(Operation(Sub)),

    // Parentheses
    '(' => result.push(LeftParen),
    ')' => result.push(RightParen),

    // ...
}
    
// ...

Which looks a lot better.

Mara:

You can use the use statement just about anywhere in your program. However, to keep things flowing nicely, these examples put the use statement right next to where it is needed.

Now we can get into the fun that is parsing numbers. When he wrote MiniCompiler, Terry Davis used an approach that is something like this (spacing added for readability):

case '0'...'9':
  i = 0;
  do {
    i = i * 10 + *src - '0';
    src++;
  } while ('0' <= *src <= '9');
  *num=i;

This sets an intermediate variable i to 0 and then consumes characters from the input string as long as they are between '0' and '9'. As a neat side effect of the numbers being written in base 10, you can conceptualize 42 as (4 * 10) + 2. So each step multiplies the number accumulated so far by 10 and then adds the new digit. Our setup doesn't let us get that fancy as easily, however we can emulate it with a bit of stack manipulation according to these rules (there's a worked trace right after this list):

  • If result is empty, push this number to result and continue lexing the program
  • Pop the last item in result and save it as last
  • If last is a number, multiply that number by 10, add the current digit to it, and push the result back into result
  • Otherwise push last back into result and then push the current number to result as well
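
For example (my own trace, not from the original post), here is how lexing the digits of 560 plays out under these rules once the Rust version below is in place:

// '5' -> result is empty, push Number(5)           result: [Number(5)]
// '6' -> pop Number(5), push Number(5 * 10 + 6)    result: [Number(56)]
// '0' -> pop Number(56), push Number(56 * 10 + 0)  result: [Number(560)]
assert_eq!(lex("560"), Ok(vec![Number(560)]));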

Translating these rules to Rust, we get this:

// src/main.rs

// ...

// Numbers
'0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' => {
    let num: i32 = (character as u8 - '0' as u8) as i32;
    if result.len() == 0 {
        result.push(Number(num));
        continue;
    }

    let last = result.pop().unwrap();

    match last {
        Number(i) => {
            result.push(Number((i * 10) + num));
        }
        _ => {
            result.push(last);
            result.push(Number(num));
        }
    }
}
            
// ...

Mara:

This is not the most robust number parsing code in the world, however it will suffice for now. Extra credit if you can identify the edge cases!
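
One of those edge cases, in case you're curious (my own observation, not spelled out in the post): whitespace is skipped before the digit handling runs, so two numbers separated only by spaces collapse into a single token. Another is that a long enough run of digits will overflow the i32 and panic in a debug build. The first one is easy to demonstrate with a test:

// The space between "12" and "34" is skipped, so the previous Number
// token gets popped and extended as if the digits were contiguous.
assert_eq!(lex("12 34"), Ok(vec![Number(1234)]));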

This should cover the tokens for the language. Let's write some tests to be sure everything is working the way we think it is!

Testing

Rust has a robust testing framework built into the standard library. We can use it here to make sure we are generating tokens correctly. Let's add the following to the bottom of main.rs:

#[cfg(test)] // tells the compiler to only build this code when tests are being run
mod tests {
    use super::{Op::*, Token::*, *};

    // registers the following function as a test function
    #[test]
    fn basic_lexing() {
        assert!(lex("420 + 69").is_ok());
        assert!(lex("tacos are tasty").is_err());

        assert_eq!(
            lex("420 + 69"),
            Ok(vec![Number(420), Operation(Add), Number(69)])
        );
        assert_eq!(
            lex("(30 + 560) / 4"),
            Ok(vec![
                LeftParen,
                Number(30),
                Operation(Add),
                Number(560),
                RightParen,
                Operation(Div),
                Number(4)
            ])
        );
    }
}

This test can and probably should be expanded on, but for now let's run cargo test:

$ cargo test
   Compiling minicompiler v0.1.0 (/home/cadey/code/Xe/minicompiler)

    Finished test [unoptimized + debuginfo] target(s) in 0.22s
     Running target/debug/deps/minicompiler-03cad314858b0419

running 1 test
test tests::basic_lexing ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

And hey presto! We verified that the lexer is working correctly. Those test cases should be sufficient to cover every kind of token in the language.
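
If you do want to expand the tests later (my own suggestion, not from the post), one extra assertion that also exercises the line-ending handling could look like this:

assert_eq!(
    lex("2 + 2;"),
    Ok(vec![Number(2), Operation(Add), Number(2), EOF])
);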


This is it for part 1. We covered a lot today. Next time we are going to run a validation pass on the program, convert the infix expressions to reverse Polish notation, and then get started on compiling that to WebAssembly. This has been fun so far and I hope you were able to learn from it.

Special thanks to the following people for reviewing this post:

  • Steven Weeks
  • sirpros
  • Leonora Tindall
  • Chetan Conikee
  • Pablo
  • boopstrap
  • ash2x3