What is a JSON feed? Learn more

JSON Feed Viewer

Browse through the showcased feeds, or enter a feed URL below.

Now supporting RSS and Atom feeds, thanks to Andrew Chilton's feed2json.org service.


Christine Dodrill's Blog

My blog posts and rants about various technology things.

A feed by Christine Dodrill


ln - The Natural Log Function

Permalink - Posted on 2020-10-17 00:00

ln - The Natural Log Function

One of the most essential things in software is a good interface for logging data. Logging is a surprisingly hard problem and there are many approaches to it. This time, we're going to talk about my favorite Go logging library, which is built around my favorite function I've ever written in Go.

Today we're talking about ln, the natural log function. ln works with key-value pairs and logs them somewhere; by default, it logs to standard out. Here is how you use it:

package main

import (
	"context"

	"within.website/ln"
)

func main() {
	ctx := context.Background()
	ln.Log(ctx, ln.Fmt("hello %s", "world"), ln.F{"demo": "usage"})
}

ln works with key-value pairs via a type called F. This type allows you to log just about anything you want, including custom data types that implement the Fer interface. This lets you annotate your data types so that the important information is automatically extracted into your logs while passwords or other secret data are filtered out. Here's an example:

type User struct {
	ID       int
	Username string
	Password []byte
}

func (u User) F() ln.F {
	return ln.F{
		"user_id":   u.ID,
		"user_name": u.Username,
	}
}

Then if you create that user somehow, you can log the ID and username without logging the password by accident:

var theDude User = abides()

ln.Log(ctx, ln.Info("created new user"), theDude)

This will create a log line that looks something like this:

level=info msg="created new user" user_name="The Dude" user_id=1337

Mara is hacker


You can also put values in contexts! See here for more detail on how this works.

The way this is all glued together is that F itself is an Fer, meaning that the Log/Error functions take a variadic set of Fers. This is where my favorite Go function comes into play: the implementation of the Fer interface for F. Here is that function verbatim:

// F makes F an Fer
func (f F) F() F {
	return f
}
I love how this function looks like some kind of abstract art. This function holds this library together.

If you end up using ln for your projects in the future, please let me know what your experience is like. I would love to make this library the best it can possibly be. It is not a nanosecond-scale, zero-allocation library (I think those kinds of things are a bit of a waste of time, because most of the time your logging library is NOT going to be your bottleneck), but it is designed to have very usable defaults and solve the problem well enough that you shouldn't need to care. There are a few useful tools in the ex package nested in ln. The biggest thing is the HTTP middleware, which has saved me a lot of effort when writing web services in Go.

kalama pali pi kulupu Kala

Permalink - Posted on 2020-10-12 00:00

kalama pali pi kulupu Kala

I've wanted to write a novel for a while, and I think I've finally got a solid idea for it. I want to write about the good guys winning against an oppressive system. I've been letting the ideas and thoughts marinate in my heart for a long time; these short stories are how I am exploring the world and other related concepts. I want to use language as a tool in this world. So here is my take on a creation myth for the main species of this world, the Kala (the title of this post roughly translates to "creation story of the Kala").

This is day 2 of my 100 days to offload.

In the beginning, the gods roamed the skies. Pali, Sona and Soweli talked and talked about their plans.

tenpo wan la sewi li lon e sewi. sewi Pali en sewi Sona en sewi Soweli li toki.

Soweli went down to the world Pali had created. Animals of all kinds followed them as Soweli moved about the earth.

sewi Soweli li tawa e sike. soweli li kama e sike.

Sona followed and went towards the whales. Sona took a liking to how graceful they were in the water, and decided to have them be the arbiters of knowledge. Sona also reshaped them to look like the gods did. The Kala people resulted.

sewi Sona li tawa e soweli sike. sewi Sona li tawa e kala suli. sewi Sona li lukin li pona e kala suli. sewi Sona li pana e sona e kon tawa kala suli. sewi Sona li pali e jan kama kala suli. kulupu Kala li lon.

Pali had created the entire world, so Pali fell into a deep slumber in the ocean.

tenpo pini la sewi Pali li pali e sike. sewi Pali li lape lon telo suli.

Soweli had created all of the animals on the whole world, so Soweli fell asleep in Soweli mountain.

tenpo pini la sewi Soweli li pali e soweli ale. sewi Soweli li lape e nena Soweli.

Sona lifted themselves into the skies to watch the Kala from above. Sona keeps an eye on us to make sure we are using their gift responsibly.

sewi Sona li tawa e sewi. sewi Sona li lukin e kulupu Kala. kulupu Kala li jo sona li jo toki. kulupu Kala li pona e sewi Sona.

The Itch

Permalink - Posted on 2020-10-11 00:00

The Itch

I write a lot. I code a lot. This leads to people asking me questions like "how do you have the energy to do that?" or "why do you keep doing that day in and day out?". I was reading this post that I found linked in the Forbidden Orange Site's comments and it really resonated with me.

At the core, I have this deep burning sensation to try things out to see what they are like. It's like this itch deep in me that I can only scratch with writing, coding or sometimes even just answering people's questions in chatrooms. This itch is a catalyst to my productivity. It powers my daily work and makes me able to do what I do in order to make things better for everyone.

However, sometimes the itch isn't there. Sometimes it makes me want to focus on something else. Trying to do something else without the itch empowering me can feel like swimming upstream with heavy chains wrapped around me. My greatest boon is simultaneously my greatest vice.

I don't really know how to handle the days where it's not working. I try to save up my sick and vacation days so that I can avoid burning myself out on the bad days. Things like this are why I am a huge fan of unlimited vacation policies. Unlimited vacation does mean that I get paid out less money when I leave a job; however it means that I have the freedom to have bad days and let the good days tank me through the bad days so that I come out above average.

Trying to explain this to people can feel stressful. Especially to a manager. I've had some bad experiences with that in the past. Phrase this wrong, and some people will hear "I don't want to do this work ever" instead of "I can't do this work today". This especially sucks when deadlines roll in and that vital itch goes away, leaving me at half capacity at the worst possible time.

This itch leads me to set increasing standards on myself too. It's had some negative sides in that it makes me feel like I need to make everything better than the last thing. Each post better than the previous ones. Each project implementation better than the last. Onwards and onwards into a spiral that sets the bar so high I stress myself out trying to approach it.

I haven't kept to my informal goal to have at least one post per week on this blog because of that absurdly high standard I set for myself. I'm going to try and change this. I'm going to start participating in 100 days to offload. Expect some shorter and more focused posts for the immediate future. I am going to be working on the Rust series, however each part of it will be in isolation from here on out instead of the longer multifaceted posts.

This is day 1 of my 100 days to offload.

Also be sure to check out my post on Palisade, a version bumping tool for GitHub repositories.

How Mara Works

Permalink - Posted on 2020-09-30 00:00

How Mara Works

Recently I introduced Mara to this blog, but I didn't explain much of the theory and implementation behind them so I could proceed with the rest of the post. There was actually a significant amount of engineering that went into implementing Mara, and I'd like to go into detail about this as well as explain how I implemented them into this blog.

Mara's Background

Mara is an anthropomorphic shark. They are nonbinary and go by they/she pronouns. Mara enjoys hacking, swimming and is a Chaotic Good Rogue in the tabletop games I've played her in. Mara was originally made to help test my upcoming tabletop game The Source, and I have used them in a few solitaire tabletop sessions (click here to read the results of one of these).

Mara is hacker


I use a hand-soldered Ergodox with the stenographer layout so I can dab on the haters at 200 words per minute!

The Theory

My blogposts have a habit of getting long, wordy and sometimes pretty damn dry. I notice that there are usually a few common threads in how this becomes the case, so I took these three ideas as inspiration to help keep things engaging:

  1. I go into detail. A lot of detail. This can make paragraphs long and wordy because there is legitimately a lot to cover. fasterthanlime's Cool Bear's Hot Tip is a good way to help Amos focus on the core and let another character bring up the finer details that stray from the core of the message.
  2. I have been looking into how to integrate concepts from the Socratic method into my posts. The Socratic method focuses on dialogue/questions and answers between interlocutors as a way to explore a topic that can be dry or vague.
  3. Soatok's blog was an inspiration to this. Soatok dives into deep technical topics that can feel like a slog, and inserts some stickers between paragraphs to help keep things upbeat and lively.

I wanted to make a unique way to help break up walls of text using the concepts of Cool Bear's Hot Tip and the Socratic method with some furry art sprinkled in and I eventually arrived at Mara.

Mara is hacker


Fun fact! My name was originally derived from a Buddhist conceptual demon of forces antagonistic to enlightenment, which is deliciously ironic given that my role is to help people understand things now.

How Mara is Implemented

I write my blogposts in Markdown, specifically a dialect that has some niceties from GitHub flavored markdown as parsed by comrak. Mara's interjections are actually specially formed links, such as this:

Mara is hacker


Hi! I am saying something!

[Hi! I am saying something!](conversation://Mara/hacker)

Notice how the destination URL doesn't actually exist. It's intercepted in my markdown parsing function, and then an HTML template is used to create the divs that make up the image and conversation bits. I have intentionally left this open so I can add more characters in the future. I may end up making some stickers for myself so I can reply to Mara à la this blogpost by fasterthanlime (search for "What's with the @@GLIBC_2.2.5 suffixes?"). The syntax of the URL is as follows:

conversation://<character>/<mood>
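To make the interception concrete, here is a minimal sketch of the parsing step (parse_conversation is a made-up helper for illustration, not the blog's actual code):

// split a conversation:// URL into its character and mood parts
fn parse_conversation(url: &str) -> Option<(&str, &str)> {
    let rest = url.strip_prefix("conversation://")?;
    let mut parts = rest.splitn(2, '/');
    Some((parts.next()?, parts.next()?))
}

fn main() {
    let (character, mood) = parse_conversation("conversation://Mara/hacker").unwrap();
    assert_eq!((character, mood), ("Mara", "hacker"));
    println!("render sticker for {} with mood {}", character, mood);
}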
This will then fetch the images off of my CDN hosted by CloudFlare. However, if you are using Tor to view my site, you may not be able to see the images. I am working on ways to solve this. Please bear with me, this stuff is hard.

You may have noticed that Mara sometimes has links inside her dialogue. Understandably, this is something that vanilla markdown does not support. However, I enabled putting raw HTML in my markdown, which lets this work anyways! Consider this:

Mara is hacker


My art was drawn by Selicre!

In the markdown source, that actually looks like this:

[My art was drawn by <a href="https://selic.re">Selicre</a>!](conversation://Mara/hacker)

This is honestly one of my favorite parts of how this is implemented, though others I have shown this to say it's kind of terrifying.

The <picture> Element and Image Formats

Something you might notice about the HTML template is that I use the <picture> element like this:

<picture>
    <source srcset="https://cdn.christine.website/file/christine-static/stickers/@character.to_lowercase()/@(mood).avif" type="image/avif">
    <source srcset="https://cdn.christine.website/file/christine-static/stickers/@character.to_lowercase()/@(mood).webp" type="image/webp">
    <img src="https://cdn.christine.website/file/christine-static/stickers/@character.to_lowercase()/@(mood).png" alt="@character is @mood">
</picture>

The <picture> element allows me to specify multiple versions of the stickers and have your browser pick the image format that it supports. It is also fully backwards compatible with browsers that do not support <picture>; in those cases you will see the fallback image in .png format. I went into a lot of detail about this in a twitter thread, but in short, here is how each format looks next to its filesize information:

The avif version does have the ugliest quality when blown up; however, consider how small these stickers will appear on the page:

Mara is hmm


This is how big the stickers will appear, or is it?

At these sizes most people will not notice any lingering artifacts unless they look closely. However, at about 5-6 kilobytes per image, I think the smaller filesize greatly wins out. This helps keep page loads fast, which is something I want to optimize for as it makes people think my website loads quickly.

I go into a lot more detail on the twitter thread, but the commands I use to get the webp and avif versions of the stickers are as follows:


cwebp \
      $1.png \
      -o $1.webp
avifenc \
      $1.png \
      -o $1.avif \
      -s 0 \
      -d 8 \
      --min 48 \
      --max 48 \
      --minalpha 48 \
      --maxalpha 48

I plan to automate this further in the future, but for the scale I am at this works fine. These stickers are then uploaded to my cloud storage bucket and CloudFlare provides a CDN for them so they can load very quickly.

Anyways, this is how Mara is implemented and some of the challenges that went into developing them as a feature (while leaving the door open for other characters in the future). Mara is here to stay and I have gotten a lot of positive feedback about her.

As a side note, for those of you that are not amused that I am choosing to have Mara (and consequently furry art in general) as a site feature, I can only hope that you can learn to respect that as an independent blogger I am free to implement my blog (and the content that I am choosing to provide FOR FREE even though I've gotten requests to make it paid content) as I see fit. Further complaints will only increase the amount of furry art in future posts.

Be well all.

Rust Crates that do What the Go Standard library Does

Permalink - Posted on 2020-09-27 00:00

Rust Crates that do What the Go Standard library Does

One of Go's greatest strengths is how batteries-included the standard library is. You can do most of what you need to do with only the standard library. On the other hand, Rust's standard library is severely lacking by comparison. However, the community has capitalized on this and has been working on a bunch of batteries that you can include in your Rust projects. I'm going to cover a bunch of them in this post, grouped into a few sections.

Mara is hacker


A lot of these are actually used to help make this blog site work!


Logging

Go has logging out of the box with package log. Package log is a very uncontroversial logger. It does what it says it does and with little fuss. However, it does not include a lot of niceties like logging levels and context-aware values.

In Rust, we have the log crate, which is a very simple interface. It uses the error!, warn!, info!, debug! and trace! macros, which correspond to the log levels from most to least severe. If you want to use log in a Rust crate, you can add it to your Cargo.toml file like this:

log = "0.4"

Then you can use it in your Rust code like this:

use log::{error, warn, info, debug, trace};

fn main() {
  trace!("starting main");
  debug!("debug message");
  info!("this is some information");
  warn!("oh no something bad is about to happen");
  error!("oh no it's an error");

Mara is wat


Wait, where does that log to? I ran that example locally but I didn't see any of the messages anywhere.

This is because the log crate doesn't directly log anything anywhere; it is a facade that other packages build off of. pretty_env_logger is a commonly used crate with the log facade. Let's add it to the program and work from there:

log = "0.4"
pretty_env_logger = "0.4"

Then let's enable it in our code:

use log::{error, warn, info, debug, trace};

fn main() {
  pretty_env_logger::init();

  trace!("starting main");
  debug!("debug message");
  info!("this is some information");
  warn!("oh no something bad is about to happen");
  error!("oh no it's an error");
}

And now let's run it with RUST_LOG=trace:

$ env RUST_LOG=trace cargo run --example logger_test
    Finished dev [unoptimized + debuginfo] target(s) in 0.07s
     Running `/home/cadey/code/christine.website/target/debug/logger_test`
 TRACE logger_test > starting main
 DEBUG logger_test > debug message
 INFO  logger_test > this is some information
 WARN  logger_test > oh no something bad is about to happen
 ERROR logger_test > oh no it's an error

There are many other consumers of the log crate, and implementing a consumer is easy should you want more than pretty_env_logger offers. However, I have found that pretty_env_logger does just enough on its own. See its documentation for more information.
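To show how thin the facade is, here is a minimal hand-rolled consumer using the log crate's Log trait (StdoutLogger is a made-up example for illustration, not part of any crate):

use log::{Level, LevelFilter, Log, Metadata, Record};

// A minimal logger that prints every enabled record to standard out.
struct StdoutLogger;

impl Log for StdoutLogger {
    fn enabled(&self, metadata: &Metadata) -> bool {
        metadata.level() <= Level::Trace
    }

    fn log(&self, record: &Record) {
        if self.enabled(record.metadata()) {
            println!("{:<5} {} > {}", record.level(), record.target(), record.args());
        }
    }

    fn flush(&self) {}
}

static LOGGER: StdoutLogger = StdoutLogger;

fn main() {
    // Register our logger as the global consumer of the log facade.
    log::set_logger(&LOGGER).expect("no other logger is set");
    log::set_max_level(LevelFilter::Trace);

    log::info!("hello from a hand-rolled consumer");
}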


Flags

Go's standard library has the flag package out of the box. This package is incredibly basic, but is surprisingly capable in terms of what you can actually do with it. A common thing to do is use flags for configuration or other options, such as here:

package main

import "flag"

var (
	program      = flag.String("p", "", "h program to compile/run")
	outFname     = flag.String("o", "", "if specified, write the webassembly binary created by -p here")
	watFname     = flag.String("o-wat", "", "if specified, write the uncompiled webassembly created by -p here")
	port         = flag.String("port", "", "HTTP port to listen on")
	writeTao     = flag.Bool("koan", false, "if true, print the h koan and then exit")
	writeVersion = flag.Bool("v", false, "if true, print the version of h and then exit")
)

This will make a few package-global variables that will contain the values of the command-line arguments.

In Rust, a commonly used command line parsing package is structopt. It works a bit differently than Go's flag package, though: structopt focuses on loading options into a structure rather than into globally mutable variables.

Mara is hacker


Something you may notice in Rust-land is that globally mutable state is talked about as if it is something to be avoided. It's not inherently bad, but it does make things more likely to crash at runtime. In most cases, these global variables with package flag are fine, but only if they are written to solely before the program really starts to do what it needs to do. If they are written to and read from dynamically at runtime, then you can get into a lot of problems such as race conditions.

Here's a quick example copied from pa'i:

#[derive(Debug, StructOpt)]
#[structopt(
    name = "pa'i",
    about = "A WebAssembly runtime in Rust meeting the Olin ABI."
)]
struct Opt {
    /// Backend
    #[structopt(short, long, default_value = "cranelift")]
    backend: String,

    /// Print syscalls on exit
    #[structopt(short, long)]
    function_log: bool,

    /// Do not cache compiled code?
    #[structopt(short, long)]
    no_cache: bool,

    /// Binary to run
    fname: String,

    /// Main function
    #[structopt(short, long, default_value = "_start")]
    entrypoint: String,

    /// Arguments of the wasm child
    args: Vec<String>,
}

This has the Rust compiler generate the needed argument parsing code for you, so you can just use the values as normal:

fn main() {
  let opt = Opt::from_args();
  debug!("args: {:?}", opt.args);
}

You can even handle subcommands with this, such as in palisade. This package should handle just about everything you'd do with the flag package, but will also work for cases where flag falls apart.
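As a rough sketch of what subcommands look like (the Cli enum and its commands are made up for illustration):

use structopt::StructOpt;

/// A made-up tool with two subcommands.
#[derive(Debug, StructOpt)]
enum Cli {
    /// Adds an item to the list
    Add {
        /// The item to add
        name: String,
    },
    /// Lists all known items
    List,
}

fn main() {
    match Cli::from_args() {
        Cli::Add { name } => println!("adding {}", name),
        Cli::List => println!("listing everything"),
    }
}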


Errors

Go's standard library has the error interface, which lets you create a type that describes why functions fail to do what they intend. Rust has the Error trait, which likewise lets you create a type that describes why functions fail to do what they intend.

In my last post I described eyre and the Result type. However, this time we're going to dive into thiserror for making our own error type. Let's add thiserror to our crate:

thiserror = "1"

And then let's re-implement our DivideByZero error from the last post:

use std::fmt;
use thiserror::Error;

#[derive(Debug, Error)]
struct DivideByZero;

impl fmt::Display for DivideByZero {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "cannot divide by zero")
    }
}

The compiler made our error instance for us! It can even do that for more complicated error types like this one that wraps a lot of other error cases and error types in maj:

#[derive(thiserror::Error, Debug)]
pub enum Error {
    #[error("TLS error: {0:?}")]
    TLS(#[from] TLSError),

    #[error("URL error: {0:?}")]
    URL(#[from] url::ParseError),

    #[error("Invalid DNS name: {0:?}")]
    InvalidDNSName(#[from] webpki::InvalidDNSNameError),

    #[error("IO error: {0:?}")]
    IO(#[from] std::io::Error),

    #[error("Response parsing error: {0:?}")]
    ResponseParse(#[from] crate::ResponseError),

    #[error("Invalid URL scheme {0:?}")]

Mara is hacker


These #[error("whatever")] annotations will show up when the error message is printed. See here for more information on what details you can include here.

Serialization / Deserialization

Go has JSON encoding/decoding in its standard library via package encoding/json. This allows you to define types that can be read from and written to JSON easily. Let's take this simple JSON object representing a comment from some imaginary API as an example:

  "id": 31337,
  "author": {
    "id": 420,
    "name": "Cadey"
  "body": "hahaha its is an laughter image",
  "in_reply_to": 31335

In Go you could write this as:

type Author struct {
  ID   int    `json:"id"`
  Name string `json:"name"`
}

type Comment struct {
  ID        int    `json:"id"`
  Author    Author `json:"author"`
  Body      string `json:"body"`
  InReplyTo int    `json:"in_reply_to"`
}

Rust does not have this capability out of the box; however, there is a fantastic framework available known as serde which works across JSON and every other serialization format you can think of. Let's add serde and its JSON support to our crate:

serde = { version = "1", features = ["derive"] }
serde_json = "1"

Mara is hacker


You might notice that the dependency line for serde is different here. Go's JSON package works by using struct tags as metadata, but Rust doesn't have these. We need to use Rust's derive feature instead.

So, to use serde for our comment type, we would write Rust that looks like this:

use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Author {
  pub id: i32,
  pub name: String,
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Comment {
  pub id: i32,
  pub author: Author,
  pub body: String,
  pub in_reply_to: i32,
}

And then we can load that from JSON using code like this:

fn main() {
  let data = r#"{
    "id": 31337,
    "author": {
      "id": 420,
      "name": "Cadey"
    },
    "body": "hahaha its is an laughter image",
    "in_reply_to": 31335
  }"#;

  let c: Comment = serde_json::from_str(data).expect("json to parse");
  println!("comment: {:#?}", c);
}

And running it looks like this:

$ cargo run --example json
   Compiling xesite v2.0.1 (/home/cadey/code/christine.website)
    Finished dev [unoptimized + debuginfo] target(s) in 0.43s
     Running `target/debug/examples/json`
comment: Comment {
    id: 31337,
    author: Author {
        id: 420,
        name: "Cadey",
    },
    body: "hahaha its is an laughter image",
    in_reply_to: 31335,
}

HTTP

Many APIs expose their data over HTTP. Go has the net/http package that acts as a production-grade (Google uses this in production) HTTP client and server. This allows you to get going with new projects very easily. The Rust standard library doesn't have this out of the box, but there are some very convenient crates that can fill in the blanks.


HTTP Client

For an HTTP client, we can use reqwest. It can also seamlessly integrate with serde to allow you to parse JSON from HTTP without any issues. Let's add reqwest to our crate as well as tokio to act as an asynchronous runtime:

reqwest = { version = "0.10", features = ["json"] }
tokio = { version = "0.2", features = ["full"] }

Mara is hacker


We need tokio because Rust doesn't ship with an asynchronous runtime by default. Go does as a core part of the standard library (and arguably the language), but tokio is about equivalent to most of the important things that the Go runtime handles for you. This omission may seem annoying, but it makes it easy for you to create a custom asynchronous runtime should you need to.

And then let's integrate with that imaginary comment API at https://xena.greedo.xeserv.us/files/comment.json:

use eyre::Result;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Author {
    pub id: i32,
    pub name: String,
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Comment {
    pub id: i32,
    pub author: Author,
    pub body: String,
    pub in_reply_to: i32,
}

#[tokio::main]
async fn main() -> Result<()> {
    let c: Comment = reqwest::get("https://xena.greedo.xeserv.us/files/comment.json")
        .await?
        .json()
        .await?;
    println!("comment: {:#?}", c);

    Ok(())
}

And then let's run this:

$ cargo run --example http
   Compiling xesite v2.0.1 (/home/cadey/code/christine.website)
    Finished dev [unoptimized + debuginfo] target(s) in 2.20s
     Running `target/debug/examples/http`
comment: Comment {
    id: 31337,
    author: Author {
        id: 420,
        name: "Cadey",
    },
    body: "hahaha its is an laughter image",
    in_reply_to: 31335,
}

Mara is hmm


But what if the response status is not 200?

We can change the code to something like this:

let c: Comment = reqwest::get("https://xena.greedo.xeserv.us/files/comment2.json")
    .await?
    .error_for_status()?
    .json()
    .await?;

And then when we run it we get an error back:

$ cargo run --example http_fail
   Compiling xesite v2.0.1 (/home/cadey/code/christine.website)
    Finished dev [unoptimized + debuginfo] target(s) in 1.84s
     Running `/home/cadey/code/christine.website/target/debug/examples/http_fail`
Error: HTTP status client error (404 Not Found) for url (https://xena.greedo.xeserv.us/files/comment2.json)

This combined with the other features in reqwest gives you a very capable HTTP client that does even more than Go's HTTP client does out of the box.


HTTP Server

As for HTTP servers, let's take a look at warp. warp is an HTTP server framework that builds on top of Rust's type system. You can add warp to your dependencies like this:

warp = "0.2"

Let's take a look at its "Hello, World" example:

use warp::Filter;

#[tokio::main]
async fn main() {
    // GET /hello/warp => 200 OK with body "Hello, warp!"
    let hello = warp::path!("hello" / String)
        .map(|name| format!("Hello, {}!", name));

    warp::serve(hello)
        .run(([127, 0, 0, 1], 3030))
        .await;
}

We can then build up multiple routes with its or pattern:

let hello = warp::path!("hello" / String)
    .map(|name| format!("Hello, {}!", name));
let health = warp::path!(".within" / "health")
    .map(|| "OK");
let routes = hello.or(health);

And even inject other datatypes into your handlers with filters such as in the printer facts API server:

let fact = {
    let facts = pfacts::make();
    warp::any().map(move || facts.clone())
};

let fact_handler = warp::get()
    .and(fact.clone())
    // ... (the rest of the filter chain is elided here)

warp is an extremely capable HTTP server and can work across everything you need for production-grade web apps.

Mara is hacker


The blog you are looking at right now is powered by warp!


Templating

Go's standard library also includes HTML and plain text templating with its packages html/template and text/template. There are many solutions for templating HTML in Rust, but the one I like the most is ructe. ructe uses Cargo's build.rs feature to generate Rust code for its templates at compile time. This allows your HTML templates to be compiled into the resulting application binary, allowing them to render at ludicrous speeds. To use it, you need to add it to the build-dependencies section of your Cargo.toml:

[build-dependencies]
ructe = { version = "0.12", features = ["warp02"] }

You will also need to add the mime crate to your dependencies because the generated template code will require it at runtime.

[dependencies]
mime = "0.3.0"

Once you've done this, create a new folder named templates in your current working directory. Create a file called hello.rs.html and put the following in it:

@(title: String, message: String)

<h1>@title</h1>
<p>@message</p>

Now add the following to the bottom of your main.rs file:

include!(concat!(env!("OUT_DIR"), "/templates.rs"));

And then use the template like this:

use warp::{http::Response, Filter, Rejection, Reply};

async fn hello_html(message: String) -> Result<impl Reply, Rejection> {
    Ok(Response::builder()
        .html(|o| templates::index_html(o, "Hello".to_string(), message).unwrap().clone()))
}

And hook it up in your main function:

let hello_html_rt = warp::path!("hello" / "html" / String)
    .and_then(hello_html);
let routes = hello_html_rt.or(health).or(hello);

For a more comprehensive example, check out the printerfacts server. It also shows how to handle 404 responses and other things like that.

Wow, this covered a lot. I've included most of the example code in the examples folder of this site's GitHub repo. I hope it will help you on your journey in Rust. This is documentation that I wish I had when I was learning Rust.

TL;DR Rust

Permalink - Posted on 2020-09-19 00:00

TL;DR Rust

Recently I've been starting to use Rust more and more for larger and larger projects. As things have come up, I realized that I am missing a good reference for common things in Rust as compared to Go. This post contains a quick high-level overview of patterns in Rust and how they compare to patterns in Go. This will focus on code samples. This is no replacement for the Rust book, but should help you get spun up on the various patterns used in Rust code.

Also I'm happy to introduce Mara to the blog!

Mara is hacker


Hey, happy to be here! I'm Mara, a shark hacker from Christine's imagination. I'll interject with side information, challenge assertions and more! Thanks for inviting me!

Let's start somewhere simple: functions.

Making Functions

Functions are defined using fn instead of func:

// Go
func foo() {}

// Rust
fn foo() {}


Arguments can be passed by separating the name from the type with a colon:

// Go
func foo(bar int) {}

// Rust
fn foo(bar: i32) {}


Values can be returned by adding -> Type to the function declaration:

// Go
func foo() int {
  return 2
}

// Rust
fn foo() -> i32 {
  return 2;
}

In Rust values can also be returned on the last statement without the return keyword or a terminating semicolon:

fn foo() -> i32 {
  2
}

Mara is hmm


Hmm, what if I try to do something like this. Will this work?

fn foo() -> i32 {
    if some_cond {
        2
    }
}
Let's find out! The compiler spits back an error:

error[E0308]: mismatched types
 --> src/lib.rs:3:9
2 | /     if some_cond {
3 | |         2
  | |         ^ expected `()`, found integer
4 | |     }
  | |     -- help: consider using a semicolon here
  | |_____|
  |       expected this to be `()`

This happens because most basic statements in Rust can return values. The best way to fix this would be to move the fallback return of 4 into an else block, so that both branches produce a value:

fn foo() -> i32 {
    if some_cond {
        2
    } else {
        4
    }
}
Otherwise, the compiler will think you are trying to use that if as an expression, like this:

let val = if some_cond { 2 } else { 4 };

Functions that can fail

The Result type represents things that can fail with specific errors. The eyre Result type represents things that can fail with any error. For readability, this post will use the eyre Result type.

Mara is hacker


The angle brackets in the Result type are arguments to the type, this allows the Result type to work across any type you could imagine.

import "errors"

func divide(x, y int) (int, err) {
  if y == 0 {
    return 0, errors.New("cannot divide by zero")
  return x / y, nil
use eyre::{eyre, Result};

fn divide(x: i32, y: i32) -> Result<i32> {
  match y {
    0 => Err(eyre!("cannot divide by zero")),
    _ => Ok(x / y),

Mara is wat


Huh? I thought Rust had the Error trait, shouldn't you be able to use that instead of a third party package like eyre?

Let's try that. However, we will need to make our own error type because the eyre! macro creates its own transient error type on the fly.

First we need to make our own simple error type for a DivideByZero error:

use std::error::Error;
use std::fmt;

// Debug is required for types that implement Error.
#[derive(Debug)]
struct DivideByZero;

impl fmt::Display for DivideByZero {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "cannot divide by zero")
    }
}

impl Error for DivideByZero {}

So now let's use it:

fn divide(x: i32, y: i32) -> Result<i32, DivideByZero> {
  match y {
    0 => Err(DivideByZero{}),
    _ => Ok(x / y),
  }
}
However there is still one thing left: the function returns a DivideByZero error, not any error like the error interface in Go. In order to represent that we need to return something that implements the Error trait:

fn divide(x: i32, y: i32) -> Result<i32, impl Error> {
    // ...
}

And for the simple case, this will work. However, as things get more complicated, this simple facade will fall apart given reality and its complexities. This is why I ship as much of this as I can out to other packages like eyre or anyhow. Check out this code in the Rust Playground to mess with it interactively.
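If you would rather stay within the standard library, a common middle ground is to box the error. Here is a sketch (reusing the DivideByZero type from above) that returns Box<dyn Error>, which can hold any error type:

use std::error::Error;

fn divide(x: i32, y: i32) -> Result<i32, Box<dyn Error>> {
    match y {
        // the box coerces into Box<dyn Error> because DivideByZero implements Error
        0 => Err(Box::new(DivideByZero)),
        _ => Ok(x / y),
    }
}

fn main() -> Result<(), Box<dyn Error>> {
    println!("{}", divide(4, 2)?);
    Ok(())
}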

Mara is hacker


Pro tip: eyre (via color-eyre) also has support for adding custom sections and context to errors similar to Go's fmt.Errorf %w format argument, which will help in real world applications. When you do need to actually make your own errors, you may want to look into crates like thiserror to help with automatically generating your error implementation.

The ? Operator

In Rust, the ? operator checks the result of a function call: if it is an error, it returns that error from the enclosing function immediately; otherwise it gives you the successful value. This only works in functions that return either an Option or a Result.

Mara is hacker


The Option type isn't shown in very much detail here, but it acts like a "this thing might not exist and it's your responsibility to check" container for any value. The closest analogue in Go is making a pointer to a value or possibly putting a value in an interface{} (which can be annoying to deal with in practice).
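Since Option keeps coming up, here is a quick sketch of what checking one looks like (first_even is a made-up helper for illustration):

// Returns the first even number in the slice, if there is one.
fn first_even(nums: &[i32]) -> Option<i32> {
    for n in nums {
        if n % 2 == 0 {
            return Some(*n);
        }
    }
    None
}

fn main() {
    match first_even(&[1, 3, 4, 7]) {
        Some(n) => println!("found {}", n),
        None => println!("no even numbers here"),
    }
}

With that in mind, here is Go's explicit error checking next to the ? operator: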

// Go
func doThing() (int, error) {
  result, err := divide(3, 4)
  if err != nil {
    return 0, err
  }

  return result, nil
}

// Rust
use eyre::Result;

fn do_thing() -> Result<i32> {
  let result = divide(3, 4)?;

  Ok(result)
}

If the second argument of divide is changed to 0, then do_thing will return an error.

Mara is hmm


And how does that work with eyre?

It works with eyre because eyre has its own error wrapper type called Report, which can represent anything that implements the Error trait.


Macros

Rust macros are function calls with ! after their name:

println!("hello, world");


Variables

Variables are created using let:

// Go
var foo int
var foo = 3
foo := 3

// Rust
let foo: i32;
let foo = 3;


Immutability

In Rust, every variable is immutable (unchangeable) by default. If we try to change those variables above we get a compiler error:

fn main() {
    let foo: i32;
    let foo = 3;
    foo = 4;
}

This makes the compiler return this error:

error[E0384]: cannot assign twice to immutable variable `foo`
 --> src/main.rs:4:5
3 |     let foo = 3;
  |         ---
  |         |
  |         first assignment to `foo`
  |         help: make this binding mutable: `mut foo`
4 |     foo = 4;
  |     ^^^^^^^ cannot assign twice to immutable variable

As the compiler suggests, you can create a mutable variable by adding the mut keyword after the let keyword. There is no analog to this in Go.

let mut foo: i32 = 0;
foo = 4;

Mara is hacker


This is slightly a lie. There are more advanced cases involving interior mutability and other fun stuff like that; however, this is a more advanced topic that isn't covered here.


Lifetimes

Rust does garbage collection at compile time. It also passes ownership of memory to functions as soon as possible. Lifetimes are how Rust calculates how "long" a given bit of data should exist in the program. Rust will then tell the compiled code to destroy the data from memory as soon as possible.

Mara is hacker


This is slightly inaccurate in order to make this simpler to explain and understand. It's probably more accurate to say that Rust calculates when to collect garbage at compile time, but the difference doesn't really matter for most cases.

For example, this code will fail to compile because quo was moved into the second divide call:

let quo = divide(4, 8)?;
let other_quo = divide(quo, 5)?;

// Fails compile because ownership of quo was given to divide to create other_quo
let yet_another_quo = divide(quo, 4)?;

To work around this you can pass a reference to the divide function:

let other_quo = divide(&quo, 5);
let yet_another_quo = divide(&quo, 4)?;

Or even create a clone of it:

let other_quo = divide(quo.clone(), 5);
let yet_another_quo = divide(quo, 4)?;

Mara is hacker


You can also get more fancy with explicit lifetime annotations, however as of Rust's 2018 edition they aren't usually required unless you are doing something weird. This is something that is also covered in more detail in The Rust Book.

Passing Mutability

Sometimes functions need mutable variables. To pass a mutable reference, add &mut before the name of the variable:

let something = do_something_to_quo(&mut quo)?;

Project Setup


External dependencies are declared using the Cargo.toml file:

# Cargo.toml

[dependencies]
eyre = "0.6"

This depends on the crate eyre at version 0.6.x.

Mara is hacker


You can do much more with version requirements with cargo, see more here.

Dependencies can also have optional features:

# Cargo.toml

[dependencies]
reqwest = { version = "0.10", features = ["json"] }

This depends on the crate reqwest at version 0.10.x with the json feature enabled (in this case it enables reqwest being able to automagically convert things to/from json using Serde).

External dependencies can be used with the use statement:

// go
import "github.com/foo/bar"

// rust
use foo; //      -> foo now has the members of crate foo behind the :: operator
use foo::Bar; // -> Bar is now exposed as a type in this file

use eyre::{eyre, Result}; // exposes the eyre! and Result members of eyre

Mara is hacker


This doesn't cover how the module system works, however the post I linked there covers this better than I can.


Async Functions

Async functions may be interrupted to let other things execute as needed. These examples use tokio to handle async tasks. To run an async task and wait for its result, do this:

let printer_fact = reqwest::get("https://printerfacts.cetacean.club/fact")
    .await?
    .text()
    .await?;
println!("your printer fact is: {}", printer_fact);

This will populate printer_fact with an amusing fact about everyone's favorite household pet, the printer.

To make an async function, add the async keyword before the fn keyword:

async fn get_text(url: String) -> Result<String> {
    Ok(reqwest::get(url).await?.text().await?)
}

This can then be called like this:

let printer_fact = get_text("https://printerfacts.cetacean.club/fact").await?;

Public/Private Types and Functions

Rust has three privacy levels for functions:

  • Only visible to the current file (no keyword, lowercase in Go)
  • Visible to anything in the current crate (pub(crate), internal packages in Go)
  • Visible to everyone (pub, upper case in Go)

Mara is hacker


You can't get a perfect analog to pub(crate) in Go, but internal packages can get close to this behavior. Additionally you can have a lot more control over access levels than this, see here for more information.
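As a sketch of all three levels side by side (the greeter module is made up for illustration):

mod greeter {
    // Visible to everyone that can import this crate.
    pub fn hello() -> String {
        format!("Hello, {}!", name())
    }

    // Visible anywhere inside this crate only.
    pub(crate) fn greeting_count() -> usize {
        1
    }

    // Visible only inside this module.
    fn name() -> &'static str {
        "world"
    }
}

fn main() {
    println!("{}", greeter::hello());
    println!("{} greeting(s)", greeter::greeting_count());
}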


Structures

Rust structures are created using the struct keyword:

// Go
type Client struct {
  Token string
}

// Rust
pub struct Client {
  pub token: String,
}

If the pub keyword is not specified before a member name, it will not be usable outside the Rust source code file it is defined in:

// Go
type Client struct {
  token string
}

// Rust
pub(crate) struct Client {
  token: String,
}

Encoding structs to JSON

serde is used to convert structures to JSON. The Rust compiler's derive feature is used to automatically implement the conversion logic.

// Go
type Response struct {
  Name        string  `json:"name"`
  Description *string `json:"description,omitempty"`
}

// Rust
use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Debug)]
pub(crate) struct Response {
  pub name: String,
  pub description: Option<String>,
}


Strings

Rust has a few string types that do different things. You can read more about this here, but at a high level most projects only use a few of them:

  • &str, a slice reference to a String owned by someone else
  • String, an owned UTF-8 string
  • PathBuf, a filepath string (encoded in whatever encoding the OS running this code uses for filesystems)

The strings are different types for safety reasons. See the linked blogpost for more detail about this.
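Here is a small sketch of how these types interact in practice (the shout helper is made up for illustration):

use std::path::PathBuf;

// Borrows a string slice; callers can pass a &String or a literal.
fn shout(message: &str) -> String {
    message.to_uppercase()
}

fn main() {
    let owned: String = String::from("hello there");
    let slice: &str = &owned; // a &String coerces to &str

    println!("{}", shout(slice));
    println!("{}", shout("general kenobi"));

    // PathBuf is for filesystem paths, which may not be valid UTF-8.
    let mut path = PathBuf::from("/home/cadey");
    path.push("code");
    println!("{}", path.display());
}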

Enumerations / Tagged Unions

Enumerations, also known as tagged unions, are a way to specify a superposition of one of a few different kinds of values in one type. A neat way to show them off (along with some other fancy features like the derivation system) is with the structopt crate. There is no easy analog for this in Go.

Mara is hacker


We've actually been dealing with enumerations ever since we touched the Result type earlier. Result and Option are implemented with enumerations.
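For reference, this is roughly how those two are defined in the standard library (simplified; the real definitions carry extra attributes and documentation):

// Simplified versions of the standard library definitions.
enum Option<T> {
    Some(T),
    None,
}

enum Result<T, E> {
    Ok(T),
    Err(E),
}

With that in mind, here is a user-defined enum in action: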

#[derive(StructOpt, Debug)]
#[structopt(about = "A simple release management tool")]
pub(crate) enum Cmd {
    /// Creates a new release for a git repo
    Cut {
        common: Common,
        /// Changelog location
        #[structopt(long, short, default_value="./CHANGELOG.md")]
        changelog: PathBuf,
    },

    /// Runs releases as triggered by GitHub Actions
    GitHubAction {
        gha: GitHubAction,
    },
}

Enum variants can be matched using the match keyword:

match cmd {
    Cmd::Cut { common, changelog } => {
        cmd::cut::run(common, changelog).await
    }
    Cmd::GitHubAction { gha } => {
        cmd::github_action::run(gha).await
    }
}

All variants of an enum must be matched in order for the code to compile.

Mara is hacker


This code was borrowed from palisade in order to demonstrate this better. If you want to see these patterns in action, check this repository out!


Testing

Test functions need to be marked with the #[test] annotation; they will then be run alongside cargo test:

#[cfg(test)]
mod tests { // not required but it is good practice
  use super::*;

  #[test]
  fn math_works() {
    assert_eq!(2 + 2, 4);
  }

  #[tokio::test] // needs tokio as a dependency
  async fn http_works() {
    let _ = get_html("https://within.website").await.unwrap();
  }
}

Avoid the use of unwrap() outside of tests. In the worst cases, using unwrap() in production code can cause the server to crash and can incur data loss.

Mara is hacker


Alternatively, you can also use the .expect() method instead of .unwrap(). This lets you attach a message that will be shown when the result isn't Ok.
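For example, here is a sketch of the difference (the config file name is made up):

fn main() {
    // Panics with a generic message pointing at this line:
    // let config = std::fs::read_to_string("config.toml").unwrap();

    // Panics with your message, which makes the failure easier to diagnose:
    let config = std::fs::read_to_string("config.toml")
        .expect("config.toml should exist next to the binary");
    println!("{} bytes of config", config.len());
}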

This is by no means comprehensive, see the rust book or Learn X in Y Minutes Where X = Rust for more information. This code is written to be as boring and obvious as possible. If things don't make sense, please reach out and don't be afraid to ask questions.

My Org Mode Flow

Permalink - Posted on 2020-09-08 00:00

My Org Mode Flow

At almost every job I've worked at, at least one of my coworkers has noticed that I use Emacs as my main text editor. People have pointed me at IntelliJ, VS Code, Atom and more, but I keep sticking to Emacs because it has one huge ace up its sleeve that other editors simply cannot match. Emacs has a package that helps me organize my workflow, focus my note-taking and even keep a timeclock for how long I spend working on tasks. This package is called Org mode, and this is my flow for using it.

Org mode is a TODO list manager, document authoring platform and more for GNU Emacs. It uses specially formatted plain text that can be managed using version control systems. I have used it daily for about five years for keeping track of what I need to do for work. Please note that my usage of it barely scratches the surface of what Org mode can do, because this is all I have needed.


My org flow starts with a single folder: ~/org. The main file I use is todo.org and it looks something like this:


* Doing
** TODO WAT-42069 Unfrobnicate the rilkef for flopnax-ropjar push...
* In Review
** TODO WAT-42042 New Relic Dashboards...
* Reviews
** DONE HAX-1337 Security architecture of wasmcloud
* Interrupt
* Generic todo
* Overhead
** 09/08/2020
*** DONE workday start...
*** DONE standup...

Each level of stars creates a new heading level, and these headings can be treated like a tree. You can use the tab key to open and close the heading levels and hide those parts of the tree if they are not relevant. Let's open up the standup subtree with tab:

*** DONE standup
    CLOSED: [2020-09-08 Tue 10:12]
    CLOCK: [2020-09-08 Tue 10:00]--[2020-09-08 Tue 10:12] =>  0:12

Org mode automatically entered in nearly all of the information in this subtree for me. I clocked in (alt-x org-clock-in with that TODO item highlighted) when the standup started and I clocked out by marking the task as done (alt-x org-todo with that TODO item highlighted). If I am working on a task that takes longer than one session, I can clock out of it (alt-x org-clock-out) and then the time I spent (about 20 minutes) will be recorded in the file for me. Then I can manually enter the time spent into tools like Jira.

When I am ready to move a task from In Progress to In Review, I close the subtree with tab and then highlight the collapsed subtree, cut it and paste it under the In Review header. This will keep the time tracking information associated with that header entry.

I will tend to let tasks build up over the week and then on Monday morning I will move all of the done tasks to done.org, which is where I store things that are done. As I move things over, I double check with Jira to make sure the time tracking has been accurately updated. This can take a while, but doing this has caught cases where I have misreported time and then had the opportunity to correct it.


Clock Tables

Org mode is also able to generate tables based on information in org files. One of the most useful ones is the clock table. You can use these clock tables to make reports about how much time was spent on each task. I use these to help me know what I have done in the day so I can report about it in the next day's standup meeting. To add a clock table, add an empty block for it and press control-c control-c on the BEGIN line. Here's an example:

#+BEGIN: clocktable :block today
#+END:

This will show you all of the things you have recorded for that day. This may end up being a bit much if you nest things deep enough. My preferred clock table is a daily view only showing the second level and lower for the current file:

#+BEGIN: clocktable :maxlevel 2 :block today :scope file
#+CAPTION: Clock summary at [2020-09-08 Tue 15:47], for Tuesday, September 08, 2020.
| Headline                    |   Time |      |
|-----------------------------+--------+------|
| *Total time*                | *6:14* |      |
|-----------------------------+--------+------|
| In Progress                 |   2:09 |      |
| \_  WAT-42069 Unfrobnica... |        | 2:09 |
| Overhead                    |   4:05 |      |
| \_  09/08/2020              |        | 4:05 |
#+END:

This allows me to see that I've been working for about 6.25 hours today, so I can use that information when deciding what to do next.

Other Things You Can Do

In the past I used to use org mode for a lot of things. In one of my older files I have a comprehensive list of all of the times I smoked weed down to the amount smoked and what I felt about it at the time. In another I have a script that I used for applying ansible files across a cluster. The sky really is the limit.

However, I have really decided to keep things simple for the most part. I leave org mode for work stuff and mostly use iCloud services for personal stuff. There are mobile apps for using org-mode on the go, but they haven't aged well at all and I have been focusing my time into actually doing things instead of configuring WEBDAV servers or the like.

This is how I keep track of things at work.

The Within Go Repo Layout

Permalink - Posted on 2020-09-07 00:00

The Within Go Repo Layout

Go repository layout is a very different thing compared to other languages. There are a lot of conflicting opinions and little firm guidance to help steer people along a path to more maintainable code. This is a collection of guidelines that help to facilitate understandable and idiomatic Go.

At a high level the following principles should be followed:

  • If the code is designed to be consumed by other random people using that repository, it is made available for others to import
  • If the code is NOT designed to be consumed by other random people using that repository, it is NOT made available for others to import
  • Code should be as close to where it's used as possible
  • Documentation helps understand why, not how
  • More people can reuse your code than you think

Folder Structure

At a minimum, the following folders should be present in the repository:

  • cmd/ -> houses executable commands
  • docs/ -> houses human readable documentation
  • internal/ -> houses code not intended to be used by others
  • scripts/ -> houses any scripts needed for meta-operations

Any additional code can be placed anywhere in the repo as long as it makes sense. More on this later in the document.

Additional Code

If there is code that should be available for other people outside of this project to use, it is better to make it a publicly available (not internal) package. If the code is also used across multiple parts of your program or is only intended for outside use, it should be in the repository root. If not, it should be as close to where it is used as makes sense. Consider this directory layout:

├── cmd
│   ├── paperwork
│   │   ├── create
│   │   │   └── create.go
│   │   └── main.go
│   ├── hospital
│   │   ├── internal
│   │   │   └── operate.go
│   │   └── main.go
│   └── integrator
│       ├── integrate.go
│       └── main.go
├── internal
│   └── log_manipulate.go
└── web
    ├── error.go
    └── instrument.go

This would expose packages repo-root/web and repo-root/cmd/paperwork/create to be consumed by outside users. This would allow reuse of the error handling in package web, but it would not allow reuse of whatever manipulation is done to logging in package repo-root/internal.


cmd/

This folder has subfolders with go files in them. Each of these subfolders is one command binary. The entrypoint of each command should be main.go so that it is easy to identify in a directory listing. This follows how the go standard library does this.

For example:

└── cmd
    ├── paperwork
    │   └── main.go
    ├── hospital
    │   └── main.go
    └── integrator
        └── main.go

This would be for three commands named paperwork, hospital, and integrator respectively.

As your commands get more complicated, it's tempting to create packages in repo-root/internal/ to implement them. This is probably a bad idea. It's better to create the packages in the same folder as the command, or optionally in its internal package. Consider if paperwork has a command named create, hospital has a command named operate and integrator has a command named integrate:

└── cmd
    ├── paperwork
    │   ├── create
    │   │   └── create.go
    │   └── main.go
    ├── hospital
    │   ├── internal
    │   │   └── operate.go
    │   └── main.go
    └── integrator
        ├── integrate.go
        └── main.go

Each of these commands has the logic separated into different packages.

paperwork has the create command as a subpackage, meaning that other parts of the application can consume that code if they need to.

hospital has the operate command inside its internal package, meaning only cmd/hospital/ and anything that has the same import path prefix can use that code. This makes it easier to isolate the code so that other parts of the repo cannot use it.

integrator has the integrate command as a separate go file in the main package of the command. This makes the integrate command code only usable within the command because main packages cannot be imported by other packages.

Each of these methods makes sense in some contexts and not in others. Real-world usage will probably see a mix of these depending on what makes sense.


docs/

This folder has human-readable documentation files. These files are intended to help humans understand how to use the program or reasons why the program was put together the way it was. This documentation should be in the language most common to the team of people developing the software.

The structure inside this folder is going to be very organic, so it is not entirely defined here.


internal/

The internal folder should house code that others shouldn't consume. This can be for many reasons. Generally, if you cannot see a use for this code outside the context of the program you are developing, but it needs to be used across multiple packages in different areas of the repo, it should default to going here.

If the code is safe for public consumption, it should go elsewhere.


scripts/

The scripts folder should contain each script that is needed for various operations. This could be for running fully automated tests in a docker container or packaging the program for distribution. These files should be documented as makes sense.

Test Code

Code should be tested in the same folder that it's written in. See the upstream testing documentation for more information.

Integration tests or other things should be done in an internal subpackage called "integration" or similar.

Questions and Answers

Why not use pkg/ for packages you intend others to use?

The name pkg is already well-known in the Go ecosystem. It is the folder that compiled packages (not command binaries) go. Using it creates the potential for confusion between code that others are encouraged to use and the meaning that the Go compiler toolchain has.

If a package prefix for publicly available code is really needed, choose a name not already known to the Go compiler toolchain such as "public".

How does this differ from https://github.com/golang-standards/project-layout?

This differs in a few key ways:

  • Discourages the use of pkg, because it's obvious if something is publicly available or not if it can be imported outside of the package
  • Leaves the development team a lot more agency to decide how to name things

The core philosophy of this layout is that the developers should be able to decide how to put files into the repository.

But I really think I need pkg!

Set up another git repo for those libraries then. If they are so important that other people need to use them, they should probably be in a libraries repo or individual git repos.

Besides, nothing is stopping you from actually using pkg if you want to. Some more experienced go programmers will protest though.

Examples of This in Action

Here are a few examples of views of this layout in action:

Colemak Layout - First Week

Permalink - Posted on 2020-08-22 00:00

Colemak Layout - First Week

A week ago I posted the last post in this series where I announced I was going all colemak all the time. I have not been measuring words per minute (to avoid psyching myself out), but so far my typing speed has gone from intolerably slow to manageably slow. I have been only dipping back into qwerty for two main things:

  1. Passwords, specifically the ones I have in muscle memory
  2. Coding at work that needs to be done fast

Other than that, everything else has been in colemak. I have written DnD-style game notes, hacked at my own "Linux distro", started a few QMK keymaps and more all via colemak.

Here are some of the lessons I've learned:

Let Your Coworkers Know You Are Going to Be Slow

This kind of thing is a long-term investment. In the short term, your productivity is going to crash through the floor. This will feel frustrating. It took me an entire workday to implement and test an HTTP handler/client for it in Go. You will be making weird typos. Let your coworkers know so they don't jump to the wrong conclusions too quickly.

Also, this goes without saying, but don't do this kind of change during crunch time. That's a bit of a dick move.

Print Out the Layout

I have the layout printed and taped to my monitor and iPad stand. This helps a lot. Instead of looking at the keyboard, I look at the layout image and let my fingers drift into position.

I also have a blank keyboard at my desk; this helps because I can't look at the keycaps and become confused (however, this has backfired with typing numbers, lol). This keyboard has Cherry MX Blues though, which means it can be loud when I get to typing up a storm.

Have Friends Ask You What Layout You Are Using

Something that works for me is to have friends ask me what keyboard layout I am using, so I can be mindful of the change. I have a few people asking me that on the regular, so I can be accountable to them and myself.

macOS and iPadOS have Colemak Out of the Box

The settings app lets you configure colemak input without having to jailbreak or install a custom keyboard layout. Take advantage of this.

Someone has also created a Colemak package for Windows that includes an IA-64 (Itanium) binary. It was last updated in 2004 and still works without hassle on Windows 10. It was the first time I've ever seen an IA-64 Windows binary in the wild!

Relearn How To Type Your Passwords

I type passwords from muscle memory. I have had to rediscover what they actually are so I can relearn how to type them.

The colemak experiment continues. I also have a ZSA Moonlander and the kit for a GergoPlex coming in the mail. Both of these run QMK, which allows me to fully program them with a rich macro engine. Here are a few of the macros I plan to use:

// Programming
SUBS(ifErr,     "if err != nil {\n\t\n}", KC_E, KC_I)
SUBS(goTest,    "go test ./...\n",        KC_G, KC_T)
SUBS(cargoTest, "cargo test\n",           KC_C, KC_T)

This will autotype a few common things when I press the keys "ei", "gt", or "ct" at the same time. I plan to add a few more as things turn up so I can more quickly type common idioms or commands to save me time. The if err != nil combination started as a joke, but I bet it will end up being incredibly valuable.

Be well, take care of your hands.

Colemak Layout - Beginning

Permalink - Posted on 2020-08-15 00:00

Colemak Layout - Beginning

I write a lot. On average I write a few kilobytes of text per day. This has been adding up and is taking a huge toll on my hands, especially considering the Covid situation. Something needs to change. I've been working on learning a new keyboard layout: Colemak.

This post will be shorter than most of my posts because I'm writing it with Colemak enabled on my iPad. Writing this is painfully slow at the moment. My sentences are short and choppy because those are easier to type.

I also have a ZSA Moonlander on the way, it should be here in October or November. I will also be sure to write about that once I get it in the mail.

So far, I have about 30 words per minute on the homerow, but once I go off the homerow the speed tanks to less than about five.

However, I am making progress!

Be well all, don't stress your hands out.

The Fear Of Missing Out

Permalink - Posted on 2020-08-02 00:00

The Fear Of Missing Out

Humans have evolved over thousands of years with communities that are small, tight-knit and where it is easy to feel like you know everyone in them. The Internet changes this completely. With the Internet, it's easy to send messages, write articles and even publish books that untold thousands of people can read and interact with. This has led to an instinctive fear in humanity I'm going to call the Fear of Missing Out [1].

[1]: The Fear of Missing Out

The Internet in its current form capitalizes and makes billions off of this. Infinite scrolling and live updating pages that make it feel like there's always something new to read. Uncountable hours of engineering and psychological testing spent making sure people click and scroll and click and consume all day until that little hit of dopamine becomes its own addiction. We have taken a system for displaying documents and accidentally turned it into a hulking abomination that consumes the souls of all who get trapped in it, crystallizing them in an endless cycle of checking notifications, looking for new posts on your newsfeed, scrolling down to find just that something you think you're looking for.

When I was in high school, I bagged groceries for a store. I also had the opportunity to help customers out to their cars and was able to talk with them. Obviously, I was minimum wage and had a whole bunch of other things to do; however there were a few times that I could really get to talk with regular customers and feel like I got to know them. What comes to mind however is a story where that is not the case. One day I was helping this older woman to her car, and she eventually said something like "All of these people just keep going, going, going nonstop. It drives me mad. How can't they see where they are is good enough already?" I thought for a moment and I wasn't able to come up with a decent reply.

The infinite scrollbars and newsfeeds of the web just keep going, going, going, going, going, going, going and going until the user gives up and does something else. There's no consideration of how the content is discovered or why the content is discovered; it's just an endless feed of noise. One subtle change in your worldview after another, just from the headlines alone. Not to mention the endless torrent of advertising.

However, I think there may be a way out, a kind of detox from the infinite scrolling, newsfeeds, notifications and the like for the internet, and I think a good step towards that is the Gemini [2] protocol.

[2]: Gemini Protocol

Gemini is a protocol that is somewhere between HTTP and Gopher. A user sends a request to a Gemini server and the user gets a response back. This response could be anything, but a little header tells the client what kind of data it is. There's also a little markup format that's a very lightweight take on markdown [3], but overall the entire goal of the project is to be minimal and just serve documents.

[3]: Gemtext markup
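
To make the request/response shape concrete, here is a minimal Gemini client sketch in Go (not from the original post; the capsule address is just an example, and certificate verification is skipped for brevity, which you should not do in real code):

package main

import (
	"bufio"
	"crypto/tls"
	"fmt"
	"io"
)

func main() {
	// Gemini is TLS-only and defaults to port 1965.
	conn, err := tls.Dial("tcp", "gemini.circumlunar.space:1965",
		&tls.Config{InsecureSkipVerify: true}) // many capsules use self-signed certs
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// A request is a single absolute URL terminated by CRLF.
	fmt.Fprintf(conn, "gemini://gemini.circumlunar.space/\r\n")

	// The response begins with a status line: a two-digit code and a MIME
	// type, e.g. "20 text/gemini". The body, if any, follows.
	r := bufio.NewReader(conn)
	status, _ := r.ReadString('\n')
	fmt.Print("status: ", status)

	body, _ := io.ReadAll(r)
	fmt.Print(string(body))
}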

I've noticed something as I browse through the known constellation of Gemini capsules though. I keep refreshing the CAPCOM feed of posts. I keep refreshing the mailing list archives. I keep refreshing my email client, looking for new content, and feel frustrated when it doesn't show up like I expect it to. I'm addicted to the newsfeeds. I'm caught in the trap that autoplay put me in. I'm a victim to infinite scrolling and that constant little hit of dopamine that modern social media has put on us all. Realizing this feels like realizing an addiction to a drug (but I'd argue that it somewhat is a drug by design; what better way to get people exposed to ads than to make the service that serves the ads addictive!).

I'm not sure how to best combat this. It feels kind of scary. I'm starting to attempt to detox though. I'm writing a lot more on my Gemini capsule [4] [5]. I'm starting to really consider the Fear of Missing Out when I design and implement things in the future. So many things update instantly on the modern internet, it may be a good idea to attempt to make something that updates weekly or even monthly.

[4]: My Gemini capsule
[5]: [experimental] My Gemini capsule over HTTP

I'm still going to attempt a few ideas that I have regarding long term archival of the Gemini constellation, but I'm definitely going to make sure that I take the time to actually consider the consequences of my actions and what kind of world it creates. I want to create the kind of world that enables people to better themselves.

Let's work together to detox from the harmful effects of what we all have created. I'm considering opening up a Gemini server that other people can have accounts on and write about things that interest them.

If you want to get started with Gemini, I suggest taking a look at the main site through the Gemini to HTTP proxy [6]. There are some clients listed in the pages there, including a very good iOS client that is currently in TestFlight. Please do keep in mind that Gemini is very much a back-button navigation kind of experience. The web has made people expect navigation links to be everywhere, which can make it a weird/jarring experience at first, but you get used to it. You can see evidence of this in my site with all the "Go back" links on each page. I'll remove those at some point, but for now I'm going to keep them.

[6]: Project Gemini

Don't be afraid of missing out. It's inevitable. Things happen. It's okay for them to happen without you having to see them. They will still be there when you look again.

Book Release: Musings from Within

Permalink - Posted on 2020-07-28 00:00

Book Release: Musings from Within

I am happy to announce that I have successfully created an eBook compilation of the best of the posts on this blog plus a bunch of writing I have never before made public, and the result is now available for purchase on itch.io and the Kindle Store (TODO(Xe): add kindle link here when it gets approved) for USD$5. This book is the product of 5 years of effort writing, getting better at writing, failing at writing and everything in between.

I have collected the following essays, poems, recipes and stories:

  • Against Label Permanence
  • A Letter to Those Who Bullied Me
  • All There is is Now
  • Alone
  • Barrier
  • Bricks
  • Chaos Magick Debugging
  • Chicken Stir Fry
  • Creator’s Mission
  • Death
  • Died to Save Me
  • Don't Look Into the Light
  • Every Koan Ever
  • Final Chapter
  • Gratitude
  • h
  • How HTTP Requests Work
  • Humanity
  • I Love
  • Instant Pot Quinoa Taco Bowls
  • Instant Pot Spaghetti
  • I Put Words on this Webpage so You Have to Listen to Me Now
  • I Remember
  • It Is Free
  • Listen to Your Rubber Duck
  • MrBeast is Postmodern Gold
  • My Experience Cursing Out God
  • Narrative of Sickness
  • One Day
  • Plurality-Driven Development
  • Practical Kasmakfa
  • Questions
  • Second Go Around
  • Self
  • Sorting Time
  • Tarot for Hackers
  • The Gears and The Gods
  • The Origin of h
  • The Service is Already Down
  • The Story of Hol
  • The Sumerian Creation Myth
  • Toast Sandwich Recipe
  • Untitled Cyberpunk Furry Story
  • We Exist
  • What It’s Like to Be Me
  • When Then Zen
  • When Then Zen: Anapana
  • When Then Zen: Wonderland Immersion
  • You Are Fine

Most of these are available on this site, but a good portion of them are not available anywhere else. There's poetry about shamanism, stories about reincarnation, koans and more.

I am also uploading eBook files to my Patreon page, anyone who supports me for $1 or more has immediate access to the DRM-free ePub, MOBIPocket and PDF files of this book.

If you are facing financial difficulties, want to read my book and just simply cannot afford it, please contact me and I will send you my book free of charge.

Feedback and reviews of this book are more than welcome. If you decide to tweet or toot about it, please use the hashtag #musingsfromwithin so I can collect them into future updates to the description of the store pages, as well as assemble them below.

Enjoy the book! My hope is that you get as much from it as I've gotten from writing these things for the last 5 or so years. Here's to five more. I'll likely create another anthology/collection of them at that point.

RSS/Atom Feeds Fixed and Announcing my Flight Journal

Permalink - Posted on 2020-07-26 00:00

RSS/Atom Feeds Fixed and Announcing my Flight Journal

I have released version 2.0.1 of this site's code. With it I have fixed the RSS and Atom feed generation. For now I have had to sacrifice the post content being in the feed, but I will bring it back as soon as possible.

Victory badges:

Valid Atom Feed Valid RSS Feed

Thanks to W3Schools for having a minimal example of an RSS feed and this Flickr image for expanding it so I can have the post dates be included too.

Flight Journal

I have created a Gemini protocol server at gemini://cetacean.club. Gemini is an exploration of the space between Gopher and HTTP. Right now my site doesn't have much on it, but I have added its feed to my feeds page.

Please note that the content on this Gemini site is going to be of a much more personal nature compared to the more professional kind of content I put on this blog. Please keep this in mind before casting judgement or making any kind of conclusions about me.

If you don't have a Gemini client installed, you can view the site content here. I plan to make an HTTP frontend to this site once I get Maj up and functional.


Maj

I have created a Gemini client and server framework for Rust programs called Maj. Right now it includes the following features:

  • Synchronous client
  • Asynchronous server framework
  • Gemini response parser
  • text/gemini parser

Additionally, I have a few projects in progress for the Maj ecosystem:

  • majc - an interactive curses client for Gemini
  • majd - An advanced reverse proxy and Lua handler daemon for people running Gemini servers
  • majsite - A simple example of the maj server framework in action

I will write more about this in the future when I have more than just this little preview of what is to come implemented. However, here's a screenshot of majc rendering my flight journal:

majc preview image rendering cetacean.club

Site Update: Rewrite in Rust

Permalink - Posted on 2020-07-16 00:00

Site Update: Rewrite in Rust

Hello there! You are reading this post thanks to a lot of effort, research and consultation that has resulted in a complete from-scratch rewrite of this website in Rust. The original implementation in Go is available here should anyone want to reference that for any reason.

If you find any issues with the RSS feed, Atom feed or JSONFeed, please let me know as soon as possible so I can fix them.

This website stands on the shoulders of giants. Here are just a few of those and how they add up into this whole package.


comrak

All of my posts are written in markdown. comrak is a markdown parser written by a friend of mine that is as fast and as correct as possible. comrak does the job of turning all of that markdown (over 150 files at the time of writing this post) into the HTML that you are reading right now. It also supports a lot of common markdown extensions, which I use heavily in my posts.


warp

warp is the web framework I use for Rust. It gives users a set of filters that add up into entire web applications. As an example, see this snippet from its readme:

use warp::Filter;

#[tokio::main]
async fn main() {
    // GET /hello/warp => 200 OK with body "Hello, warp!"
    let hello = warp::path!("hello" / String)
        .map(|name| format!("Hello, {}!", name));

    warp::serve(hello)
        .run(([127, 0, 0, 1], 3030))
        .await;
}

This can then be built up into something like this:

let site = index
    // ...

which is the actual routing setup for this website!


ructe

In the previous version of this site, I used Go's html/template. Rust does not have an equivalent of html/template in its standard library. After some research, I settled on ructe for the HTML templates. ructe works by preprocessing templates using a little domain-specific language that compiles down to Rust source code. This makes the templates become optimized with the rest of the program and enables my website to render most pages in less than 100 microseconds. Here is an example template (the one for /patrons):

@use patreon::Users;
@use super::{header_html, footer_html};

@(users: Users)

@:header_html(Some("Patrons"), None)

<p>These awesome people donate to me on <a href="https://patreon.com/cadey">Patreon</a>.
If you would like to show up in this list, please donate to me on Patreon. This
is refreshed every time the site is deployed.</p>

<ul>
  @for user in users {
    <li>@user.attributes.full_name</li>
  }
</ul>

@:footer_html()
The templates compile down to Rust, which lets me include other parts of the program into the templates. Here I use that to take a list of users from the incredibly hacky Patreon API client I wrote for this website and iterate over it, making a list of every patron by name.

Build Process

As a nice side effect of this rewrite, my website is now completely built using Nix. This allows the website to be built reproducibly, as well as a full development environment setup for free for anyone that checks out the repo and runs nix-shell. Check out naersk for the secret sauce that enables my docker image build. See this blogpost for more information about this build process (though my site uses GitHub Actions instead of Drone).

jsonfeed Go package

I used to have a JSONFeed package publicly visible at the go import path christine.website/jsonfeed. As far as I know I'm the only person who ended up using it; but in case there are any private repos that I don't know about depending on it, I have made the jsonfeed package available at its old location as well as its source code here. You may have to update your go.mod file to import christine.website/jsonfeed instead of christine.website. If something ends up going wrong as a result of this, please file a GitHub issue here and I can attempt to assist further.

go_vanity crate

I have written a small go vanity import crate and exposed it in my Git repo. If you want to use it, add it to your Cargo.toml like this:

go_vanity = { git = "https://github.com/Xe/site", branch = "master" }

You can then use it from any warp application by calling go_vanity::github or go_vanity::gitea like this:

let go_vanity_jsonfeed = warp::path("jsonfeed")
    .and(warp::any().map(move || "christine.website/jsonfeed"))
    .and(warp::any().map(move || "https://tulpa.dev/Xe/jsonfeed"))
    .and_then(go_vanity::gitea);

I plan to add full documentation to this crate soon as well as release it properly on crates.io.

patreon crate

I have also written a small Patreon API client and made it available in my Git repo. If you want to use it, add it to your Cargo.toml like this:

patreon = { git = "https://github.com/Xe/site", branch = "master" }

This client is incredibly limited and only supports the minimum parts of the Patreon API that are required for my website to function. Patreon has also apparently started to phase out support for its API anyways, so I don't know how long this will be useful.

But this is there should you need it!

Dhall Kubernetes Manifest

I also took the time to port the kubernetes manifest to Dhall. This allows me to have a type-safe kubernetes manifest that will correctly have all of the secrets injected for me from the environment of the deploy script.

These are the biggest giants that my website now sits on. The code for this rewrite is still a bit messy. I'm working on making it better, but my goal is to have this website's code shine as an example of how to best write this kind of website in Rust. Check out the code here.

Continuous Deployment to Kubernetes with Gitea and Drone

Permalink - Posted on 2020-07-10 00:00

Continuous Deployment to Kubernetes with Gitea and Drone

Recently I put a complete rewrite of the printerfacts server into service based on warp. I have it set up to automatically be deployed to my Kubernetes cluster on every commit to its source repo. I'm going to explain how this works and how I set it up.


Nix

One of the first elements in this is Nix. I use Nix to build reproducible docker images of the printerfacts server, as well as to manage my own developer tooling locally. I also pull in the following packages from GitHub:

  • naersk - an automagic builder for Rust crates that is friendly to the nix store
  • gruvbox-css - the CSS file that the printerfacts service uses
  • nixpkgs - contains definitions for the base packages of the system

These are tracked using niv, which allows me to store these dependencies in the global nix store for free. This lets them be reused and deduplicated as they need to be.

Next, I made a build script for the printerfacts service that builds on top of these in printerfacts.nix:

{ sources ? import ./nix/sources.nix, pkgs ? import <nixpkgs> { } }:

let
  srcNoTarget = dir:
    builtins.filterSource
    (path: type: type != "directory" || builtins.baseNameOf path != "target")
    dir;
  src = srcNoTarget ./.;

  naersk = pkgs.callPackage sources.naersk { };
  gruvbox-css = pkgs.callPackage sources.gruvbox-css { };
  pfacts = naersk.buildPackage {
    inherit src;
    remapPathPrefix = true;
  };
in pkgs.stdenv.mkDerivation {
  inherit (pfacts) name;
  inherit src;
  phases = "installPhase";

  installPhase = ''
    mkdir -p $out/static

    cp -rf $src/templates $out/templates
    cp -rf ${pfacts}/bin $out/bin
    cp -rf ${gruvbox-css}/gruvbox.css $out/static/gruvbox.css
  '';
}

And finally a simple docker image builder in default.nix:

{ system ? builtins.currentSystem }:

let
  sources = import ./nix/sources.nix;
  pkgs = import <nixpkgs> { };
  printerfacts = pkgs.callPackage ./printerfacts.nix { };

  name = "xena/printerfacts";
  tag = "latest";
in pkgs.dockerTools.buildLayeredImage {
  inherit name tag;
  contents = [ printerfacts ];

  config = {
    Cmd = [ "${printerfacts}/bin/printerfacts" ];
    Env = [ "RUST_LOG=info" ];
    WorkingDir = "/";
  };
}
This creates a docker image with only the printerfacts service in it and any dependencies that are absolutely required for the service to function. Each dependency is also split into its own docker layer so that it is much more efficient on docker caches, which translates into faster start times on existing servers. Here are the layers needed for the printerfacts service to function:

  • libunistring - Unicode-safe string manipulation library
  • libidn2 - An internationalized domain name decoder
  • glibc - A core library for C programs to interface with the Linux kernel
  • The printerfacts binary/templates

That's it. It packs all of this into an image that is 13 megabytes when compressed.


Drone

Now that we have a way to make a docker image, let's look at how I use drone.io to build and push this image to the Docker Hub.

I have a drone manifest that looks like this:

kind: pipeline
name: docker

steps:
  - name: build docker image
    image: "monacoremo/nix:2020-04-05-05f09348-circleci"
    environment:
      USER: root
    commands:
      - cachix use xe
      - nix-build
      - cp $(readlink result) /result/docker.tgz
    volumes:
      - name: image
        path: /result

  - name: push docker image
    image: docker:dind
    volumes:
      - name: image
        path: /result
      - name: dockersock
        path: /var/run/docker.sock
    commands:
      - docker load -i /result/docker.tgz
      - docker tag xena/printerfacts:latest xena/printerfacts:$DRONE_COMMIT_SHA
      - echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
      - docker push xena/printerfacts:$DRONE_COMMIT_SHA
    environment:
      DOCKER_USERNAME: xena
      DOCKER_PASSWORD:
        from_secret: DOCKER_PASSWORD

  - name: kubernetes release
    image: "monacoremo/nix:2020-04-05-05f09348-circleci"
    environment:
      USER: root
      DIGITALOCEAN_ACCESS_TOKEN:
        from_secret: DIGITALOCEAN_ACCESS_TOKEN
    commands:
      - nix-env -i -f ./nix/dhall.nix
      - ./scripts/release.sh

volumes:
  - name: image
    temp: {}
  - name: dockersock
    host:
      path: /var/run/docker.sock

This is a lot, so let's break it up into the individual parts.


Drone steps normally don't have access to a docker daemon, privileged mode or host-mounted paths. I configured the cadey/printerfacts job with the following settings:

  • I enabled Trusted mode so that the build could use the host docker daemon to build docker images
  • I added the DIGITALOCEAN_ACCESS_TOKEN and DOCKER_PASSWORD secrets containing a Digital Ocean API token and a Docker hub password

I then set up the volumes block to create a few things:

volumes:
  - name: image
    temp: {}
  - name: dockersock
    host:
      path: /var/run/docker.sock
  • A temporary folder to store the docker image after Nix builds it
  • The docker daemon socket from the host

Now we can get to building the docker image.

Docker Image Build

I use this docker image to build with Nix on my Drone setup. As of the time of writing this post, the most recent tag of this image is monacoremo/nix:2020-04-05-05f09348-circleci. This image has a core setup of Nix and a few userspace tools so that it works in CI tooling. In this step, I do a few things:

name: build docker image
image: "monacoremo/nix:2020-04-05-05f09348-circleci"
environment:
  USER: root
commands:
  - cachix use xe
  - nix-build
  - cp $(readlink result) /result/docker.tgz
volumes:
  - name: image
    path: /result

I first activate my cachix cache so that any pre-built parts of this setup can be fetched from the cache instead of rebuilt from source or fetched from crates.io. This makes the builds slightly faster in my limited testing.

Then I build the docker image with nix-build (nix-build defaults to default.nix when a filename is not specified, which is where the docker build is defined in this case) and copy the resulting tarball to that shared temporary folder I mentioned earlier. This lets me build the docker image without needing a docker daemon or any other special permissions on the host.


The next step pushes this newly created docker image to the Docker Hub:

name: push docker image
image: docker:dind
volumes:
  - name: image
    path: /result
  - name: dockersock
    path: /var/run/docker.sock
commands:
  - docker load -i /result/docker.tgz
  - docker tag xena/printerfacts:latest xena/printerfacts:$DRONE_COMMIT_SHA
  - echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
  - docker push xena/printerfacts:$DRONE_COMMIT_SHA
environment:
  DOCKER_USERNAME: xena
  DOCKER_PASSWORD:
    from_secret: DOCKER_PASSWORD

First it loads the docker image from that shared folder into the docker daemon as xena/printerfacts:latest. This image is then tagged with the relevant git commit using the magic $DRONE_COMMIT_SHA variable that Drone defines for you.

In order to push docker images, you need to log into the Docker Hub. I log in using this method in order to avoid the chance that the docker password will be leaked to the build logs.

echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin

Then the image is pushed to the Docker hub and we can get onto the deployment step.

Deploying to Kubernetes

The deploy step does two small things. First, it installs dhall-yaml for generating the Kubernetes manifest (see here) and then runs scripts/release.sh:

#!/usr/bin/env nix-shell
#! nix-shell -p doctl -p kubectl -i bash

doctl kubernetes cluster kubeconfig save kubermemes
dhall-to-yaml-ng < ./printerfacts.dhall | kubectl apply -n apps -f -
kubectl rollout status -n apps deployment/printerfacts

This uses the nix-shell shebang support to automatically set up the following tools:

  • doctl to log into kubernetes
  • kubectl to actually deploy the site

Then it logs into kubernetes (my cluster is real-life unironically named kubermemes), applies the generated manifest (which looks something like this) and makes sure the deployment rolls out successfully.

This will have the kubernetes cluster automatically roll out new versions of the service and maintain at least two active replicas of the service. This will make sure that your users can always have access to high-quality printer facts, even if one or more of the kubernetes nodes go down.

And that is how I continuously deploy things on my Gitea server to Kubernetes using Drone, Dhall and Nix.

If you want to integrate the printer facts service into your application, use the /fact route on it:

$ curl https://printerfacts.cetacean.club/fact
A printer has a total of 24 whiskers, 4 rows of whiskers on each side. The upper
two rows can move independently of the bottom two rows.

There is currently no rate limit to this API. Please do not make me have to create one.
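
For example, a minimal Go client for this route might look like this (a sketch, not an official client; only the URL above is from the post):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Fetch a single printer fact as plain text.
	resp, err := http.Get("https://printerfacts.cetacean.club/fact")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fact, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	fmt.Println(string(fact))
}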

The Dwarven Cavern - A Beginner 6E Adventure

Permalink - Posted on 2020-06-28 00:00

The Dwarven Cavern - A Beginner 6E Adventure

Recently itch.io had one of the largest game bundles in history and one of the things in it was this humble game named 6E. Some friends and I have started up a small group that meets on the weekends to spend a few hours with an adventure. I've been writing a few adventures for them, and I would like to start sharing their archetypes. These will all be included in a small zine that describes the systems we have built on top of 6E that I'm calling The Source. This PDF will be available publicly once it is closer to done (however if you really want a copy early on to dig at it, let me know and we can surely work something out).

Today, I would like to share the details that went into writing the most recent adventure: The Dwarven Cavern. This was derived from One Page Dungeons by Geoffrey Cullop; specifically, it is a variant of Kobold Caverns on page 5. Please note that the experience, gold and hitpoints of enemies are balanced for the group I play with and will probably need to be adjusted for other parties of adventurers. This should work for players at level 1-3. By the end they should gain enough experience to level up once. My group is also very stun-heavy, so that makes my job of attempting to balance things really interesting.

Like most great adventures, this starts at a humble tavern, The Flying Ombudsman.

The Flying Ombudsman

The players start off in The Flying Ombudsman, a tavern in the town of LAST_TOWN_YOUR_PLAYERS_WERE_IN. There are a few people sitting at the bar and drinking steamy mugs of grog. There is a salesperson sitting at one of the tables fidgeting with a golden scepter head; she looks like she has plenty of items should people want them. There is someone rather sad sitting at another of the tables, looking like he has suffered a great loss.

For extra immersion, have the NPCs speak with a slightly Irish accent. For extra fun, throw in random words from Scottish, English and Australian accents. It keeps the players thinking.

When players ask the bartender for a mug o' grog, he will sell them one for 5 gold (limit 4 each player). If players ask for the history behind the name, explain to the players the information in The Story of Hol below.

The salesperson at the top sells the following:

Name                      Effect                                                             Price
3x potion of cure poison  Cures poison and grants 2hp                                        15g each
Golden Scepter head       Doesn't look like it does anything if it's not on top of a staff   50g

You may want to add a few more items to this list. I need to draw up a table for items that salespeople like this can have.

The salesperson can also get you a room at the inn for 30g. It is big enough to hold all of the party.

When players talk to the person at the bottom, they get a sad story about the hoe-pocalypse that threatens the end of the village the person comes from. Dwarves snuck up from underground and stole the hoe from him, making it difficult to feed his hamlet. He asks the adventurers to go to the cavern the dwarves live in and get it back. The players should eventually agree to do it, and the NPC will progressively offer more and more gold as a reward. A charisma check will get him to throw in some salted meat as a reward.

The Story of Hol

The bartender previously ran a failing tavern, and was running out of hope and money. One day, the bartender found a weird looking mug and found that it could be used to talk with the gods. He eventually found a god named Hol. Hol offered him the recipe for the strongest alcohol he could possibly make, for a price. The bartender agreed without further thought and gained the ability to make intensely strong alcohol, but lost his ability to negotiate permanently.

One day the local ombudsman got news of how strong the liquor was and decided to enforce some obscure liquor control law to get the strength reduced. The bartender refused to negotiate and it escalated into a situation where the bartender kicked the ombudsman so hard that he flew into the wall. People started calling the tavern "the place where the ombudsman flew" and that eventually evolved into the name The Flying Ombudsman.

Business has been booming ever since.

The Cave

The first part has 1 mechanical trap around the second corner; players will need to work their way through it in low light or grab a working torch off the wall. The trap does 1d4 damage.

The tunnel opens into a guard station, where the dwarf archers wait behind cover for players to get in range of their shortbows. If the players get within 10 feet of the archers, they fall back to the room behind them to the left. The archers have 6 hp and grant 100xp on defeat.

The room behind and to the left is a laboratory that is very well-lit. Lots of weird liquid in flasks, scientific equipment, etc. There is a Dwarf Grenadier there who will attack you on sight. He throws alchemical grenades at range (1d6 damage, =6 -> random elemental effect). If the players get within melee range of the Grenadier then he will use a poisoned dagger. If the guards retreated then he will use them as meat shields. He has 25 hp. He drops 1d4 alchemical grenades and also drops his poisoned dagger. He grants 300xp on defeat.

The bigger chamber is the dwarven living area. Plenty of beds to hide behind, but there are 1d6 dwarves living in there. They attack on sight, but flee south if their attack goes poorly. The dwarves each grant 100xp on defeat. There is a chest in there that is actually a mimic. It drops an Amulet of Chest on defeat.

The room south is the home of Bubba the Bugbear, a hired goon of the dwarves. Bubba and the remaining dwarves make their last stand here. Bubba grants 500xp on defeat and has 30hp and 1d6+2 attack damage with his giant club.

The final room is the treasure room, which contains the stolen hoe, some other farming equipment and 250 gold. After this the dungeon is cleared and players gain experience from their journey, then go back to The Flying Ombudsman victorious. The quest NPC will award them with however many gold they agreed to and any items they also agreed to.


Dwarf Archer

Decent guard, horrible foresight

  • Hit Points 6
  • Stats 0 +1 0 -2 -2 0
  • Condition Immunities drunk, groggy
  • Senses Night-vision
  • Languages Dwarf
  • Challenge 2 (100xp)

Keep-away. Will flee at the first sign of trouble.


Crossbow. Ranged Attack: 1d4 damage

Dwarf Grenadier

Mad scientist, crazier inventions

  • Hit Points 25
  • Stats 0 +1 -2 +2 0 +1
  • Condition Immunities drunk, groggy
  • Senses Night-vision
  • Languages Dwarf
  • Challenge 6 (300xp)

Insane. Will do things that normal enemies would not.


Alchemical Grenade. Ranged Attack: 1d6 damage plus elemental effect if 6 damage. See table for Alchemical Grenades.

Poisoned Dagger. Melee Attack: 1d4 damage. On hit player must roll for constitution. If the check fails they get disadvantage for their throws the next 1d3 turns.


Dwarf

Underground folk that love digging

  • Hit Points 6
  • Stats 0 +1 0 -2 -2 0
  • Condition Immunities drunk, groggy
  • Senses Night-vision
  • Languages Dwarf
  • Challenge 2 (100xp)

Short and Stout. Easily able to hit enemies below the belt.


Dagger. Melee Attack: 1d4 damage


Mimic

Just an ordinary chest, don't question it

  • Hit Points 14
  • Stats 0 0 -2 0 0 -1
  • Condition Immunities burn, poison
  • Languages None
  • Challenge 5 (250xp)

Unsuspicious. Come on, the chest wouldn't be alive, would it?


Omnomnom. Melee Attack: 1d6 piercing damage, gets advantage if the player tries to open the chest.

Bubba the Bugbear

From Bubba with Love

  • Hit Points 30
  • Stats +2 +2 +1 -2 0 -1
  • Condition Immunities blinded
  • Languages Dwarf
  • Challenge 10 (500xp)

Unreasonable. Does not respond well to trickery.


Whomp. Melee Attack: 1d6+2 damage, piercing if the player fails a constitution check.


These are the unique items specific to this quest.

Mug of Grog

A wooden/iron mug full of the barkeep's grog. Consuming it gives you the following stat boosts for 3 turns:

+0 -1 +0 -1 -1 +3

This bonus does not stack. When you drink the grog, you keep the mug and can use it to bludgeon people for normal attack damage. It can also be used as a tool to check for traps.

Does not sell, purchasable for 5g.

Golden Scepter Head

A golden scepter head that looks like The Grand Nagus from Star Trek. It has no effect unless it's put on top of a staff. When it is on a staff, it gives you 25% more gold when you collect gold from places. This has 5 charges and cannot be refreshed.

Sells for 50g, purchasable for 50g.

Alchemical Grenades

Standard grenades that look like they are made out of wood, metal, insanity and magic. They do d6 damage normally, but when you crit with one it also causes one of the following status effects (roll a d4):

Alchemical Effects

Roll  Effect
1     Acid makes the entity take off their armor
2     The entity rolls a constitution check; if it fails, they are poisoned and must roll for constitution before every action. Passing removes the poison; failing costs them their action.
3     The entity is burned for 1d4 damage every turn they fail a constitution check.
4     The entity is stunned for 1d3 turns.

Sells for 25g each, unpurchasable.

Poisoned Dagger

A standard dagger that gives no distinct bonuses. However when you use it as a normal dagger, you need to roll for constitution in addition to rolling for strength. If you fail the constitution check (but pass the strength check), you get poisoned from the poison streaking out of the dagger.

The poison has 5 charges.

Merchants do not want to take the risk buying it and will have you pay them for its disposal. This cannot be purchased.

Bubba's Clubba

A rather large mace that does +2 damage. It is a giant thing designed for a 9-foot tall centaur. It takes up three slots in your inventory. It looks imposing and may require a lot of strength to use properly.

Sells for 25g.

Amulet of Chest

This cursed amulet lets the holder transform into a harmless looking treasure chest, but also grants them advantage if an enemy tries to open it. After 4 transformations, they need to roll a charisma check when they try to turn back. If that fails, they stay a chest for an hour (with arms/legs/a mouth/etc). After 4 more transformations, they turn into a mimic permanently. The transformation count stays persistent until someone becomes a mimic, then it resets to zero.

Merchants will pay 100 gold for it, but will only accept it after the player rolls for charisma.

Dungeon Map

Adventure Highlights

When I ran this yesterday, the following amazing things happened:

  • The players used a crossbow from a previous adventure and a torch to hit the grenades held by the grenadier, vaporizing it from 5 grenades exploding at once.
  • The Artificer shield-bashed one of the dwarves so hard that it ricocheted into the other dwarves like pool balls. That caused a lot of damage to all of the enemies.
  • The Monk critted a Ki art and shredded an archer with their claws, making the other archer flee.
  • The Thief straight up banished a dwarf to the shadow realm with a slingshot hit.

I still wonder how they are going to use that Cursed Amulet of Chest though.

Thank you @infinite_mao for making 6E. I can only hope that my buying your stuff separately and making this content for 6E can help give back to the community.

Feedback on the balance of this is very welcome, especially about how this quest could be rebalanced to be a bit less lopsided in favor of the players. I wanted to err on the side of balancing towards the players to avoid an unwanted party death.

V Update - June 2020

Permalink - Posted on 2020-06-17 00:00

V Update - June 2020

Every so often I like to check in on the V Programming Language. It's been about six months since my last post, so I thought I'd take another look at it and see what progress has been done in six months.

Last time I checked, V 0.2 was slated for release in December 2019. It is currently June 2020, and the latest release (at time of writing) is 0.1.27.

Feature Updates

Interestingly, the V author seems to have walked back one of their original listed features of V and now has an abstract syntax tree for representing the grammar of the language. They still claim that functions are "pure" by default, but allow functions to perform print statements while still being "pure". Printing data to standard out is an impure side effect, but if you constrain the definition of "side effects" to only include mutability of memory, this could be fine. There seems to be an issue about this on the github tracker, but it was closed.

The next stable release 0.2 seems to be planned for June 2020 (according to the readme); and according to the todo list in the repo, memory management seems to be one of the things that will be finished. V is also apparently in alpha, but will also apparently jump from alpha directly to stable? Given the track record of constantly missed release windows, I am not very confident that V 0.2 will be released on time.

Tools like this need to be ready when they are ready. Trying to rush things is a very unproductive thing to do and can result in more net harm than good.


Testing V is a bit more difficult for me now as its build process is incompatible with my Linux tower's NixOS install (I tend to try and package all the programs I use for testing this stuff so it is easier to reproduce my environment on other machines). The V scripts also do not work on my NixOS tower because it doesn't have a /usr/local/bin. The correct way to make a shell script cross-platform is to use the following header:

#!/usr/bin/env v

This makes the env program search for the V binary in your $PATH, and will function correctly on all platforms (this may not work on environments like Termux due to limitations of how Android works, but it will solve 99% of cases. I am unsure how to make a shell script that will function properly across Android and non-Android environments).

The Makefile in the V source tree seems to do network calls, specifically a git clone. Remember that this is on the front page of the website:

V can be bootstrapped in under a second by compiling its code translated to C with a simple

cc v.c

No libraries or dependencies needed.

Git is a dependency, which means perl is a dependency, which means a shell is a dependency, which means glibc is a dependency, which means that a lot of other things (including posix threads) are also dependencies. Pedantically, you could even go as far as saying that you could count the Linux kernel, the processor being used and the like as dependencies, but that's a bit out of scope for this.

I claim that the V compiler has dependencies because it requires other libraries or programs in order to function. For an example, see the output of ldd (a program that lists the dynamically linked dependencies of other programs) on the V compiler and a hello world program:

$ ldd ./v
        linux-vdso.so.1 (0x00007fff2d044000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2fb3e4c000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2fb3a5b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2fb4345000)
$ ldd ./hello
        linux-vdso.so.1 (0x00007ffdfdff2000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fed25771000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fed25d88000)

If these binaries were really as dependency-free as the V website claims, the output of ldd would look something like this:

$ ldd $HOME/bin/dhall
        not a dynamic executable

The V compiler claims to have support for generating machine code directly, but in my testing I was unable to figure out how to set the compiler into this mode.

Memory Management

V doesn't use garbage collection or reference counting. The compiler cleans everything up during compilation. If your V program compiles, it's guaranteed that it's going to be leak free.

Accordingly, the documentation still claims that memory management is both a work in progress and has (or will have; it's not clear which is accurate from the documentation alone) perfect accuracy for cleaning up things at compile time. In every one of these posts I have run a benchmark against the V compiler that I like to call the "how much ram do you leak compiling hello world" test. Last time it leaked 4,600,383 bytes (or about 4.6 megabytes) and before that it leaked 3,861,785 bytes (or about 3.9 megabytes). This time:

$ valgrind ./v hello.v
==5413== Memcheck, a memory error detector
==5413== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==5413== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==5413== Command: ./v hello.v
==5413== HEAP SUMMARY:
==5413==     in use at exit: 7,232,779 bytes in 163,690 blocks
==5413==   total heap usage: 182,696 allocs, 19,006 frees, 11,309,504 bytes allocated
==5413== LEAK SUMMARY:
==5413==    definitely lost: 2,673,351 bytes in 85,739 blocks
==5413==    indirectly lost: 4,265,809 bytes in 77,711 blocks
==5413==      possibly lost: 256,000 bytes in 1 blocks
==5413==    still reachable: 37,619 bytes in 239 blocks
==5413==         suppressed: 0 bytes in 0 blocks
==5413== Rerun with --leak-check=full to see details of leaked memory
==5413== For counts of detected and suppressed errors, rerun with: -v
==5413== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

It seems that the memory management really is a work in progress. This increase in leakage means that compiling hello world now leaks 7,232,779 bytes of RAM (which is still, amusingly, about V's install size, when including git deltas, temporary files and a worktree copy of V).


Doom

The Doom translation project still has one file translated (and apparently it breaks sound effects but not music). I have been looking forward to the full release of this as it will show a lot about how readable the output of V's C to V translation feature is.

1.2 Million Lines of Code

Let's re-run the artificial as heck 1.2 million lines of code benchmark from the last post:

$ bash -c 'time ~/code/v/v main.v'

real    7m54.847s
user    7m32.860s
sys     0m14.212s

Compared to the last time this benchmark was run, this took 2 minutes less (last time it took about 10 minutes). This is actually a major improvement, and means that V's claims of speed are that much closer to reality at least on my test hardware.


A common problem that shows up when writing multi-threaded code is race conditions. Effectively, race conditions happen when two bits of code try to do the same thing at the same time on the same block of memory. This leads to undefined behavior, which is bad because it can corrupt or crash programs.

As an example, consider this program raceanint.v:

fn main() {
  mut foo := [ 1 ]
  go add(mut foo)
  go add(mut foo)

  for {}
}

fn add(mut foo []int) {
  for {
    foo[0] = foo[0] + 1
  }
}
In theory, this should have two threads infinitely trying to increment foo[0], which will eventually result in foo[0] getting corrupted by two threads trying to do the same thing at the same time (given the tight loops involved). This leads to undefined behavior, which can be catastrophic in production-facing applications.

However, I can't get this to build:

/home/cadey/.cache/v/raceanint.tmp.c: In function ‘add_thread_wrapper’:
/home/cadey/.cache/v/raceanint.tmp.c:1209:6: error: incompatible type for argument 1 of ‘add’
/home/cadey/.cache/v/raceanint.tmp.c:1198:13: note: expected ‘array_int * {aka struct array *}’ but argument is of type ‘array_int {aka struct array}’
 static void add(array_int* foo);
/home/cadey/.cache/v/raceanint.tmp.c: In function ‘strconv__v_sprintf’:
/home/cadey/.cache/v/raceanint.tmp.c:3611:7: warning: variable ‘th_separator’ set but not used [-Wunused-but-set-variable]
  bool th_separator = false;
/home/cadey/.cache/v/raceanint.tmp.c: In function ‘print_backtrace_skipping_top_frames_linux’:
(Use `v -cg` to print the entire error message)

builder error:
C error. This should never happen.

If you were not working with C interop, please raise an issue on GitHub:


Like I said before, I also cannot file new issues about this. So if you are willing to help me out, please open an issue about this.
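
For contrast, here is a rough Go analogue of the same program (my own sketch, not from the V project). It compiles, and running it with go run -race reports the conflicting writes at runtime:

package main

// Two goroutines increment the same slice element in tight loops,
// which is exactly the kind of race the V example above describes.
func main() {
	foo := []int{1}

	add := func() {
		for {
			foo[0] = foo[0] + 1
		}
	}

	go add()
	go add()

	select {} // block forever, like V's `for {}`
}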

EDIT(Xe): 2020 M06 23

I do not plan to make any future update posts about the V programming language in the future. The V community is something I would really rather not be associated with. This is an edited-down version of the post that was released last week (2020 M06 17).

As of the time of writing this note to the end of this post and as far as I am aware, I am banned from being able to contribute to the V language in any form. I am therefore forced to consider that the V project will respond to criticism of their language with bans. This subjective view of reality may not be accurate to what others see.

I would like to see this situation result in a net improvement for everyone involved. V is an interesting take on a stagnant field of computer science, but I cannot continue to comment on this language or give it any of the signal boost I have given it with this series of posts.

Thank you for reading. I will continue with my normal posts in the next few days.

Be well.

Fairly Odd Orca

Permalink - Posted on 2020-06-15 00:00

Made in Drawpile with an iPad Pro connected as a display to a MacBook. I can upload the .ora file on request.

Why I Use Suckless Tools

Permalink - Posted on 2020-06-05 00:00

Why I Use Suckless Tools

Software is complicated. Foundational building blocks of desktop environments tend to grow year over year until it's difficult to understand or maintain them. Suckless offers an alternative to this continuous cycle of bloat and meaningless redesign. Suckless tools aim to keep things simple, minimal, usable and hackable by default. Their window manager dwm is just a window manager. It doesn't handle things like transparency, compositing or volume control. Their terminal st is just a terminal. It doesn't handle fancy things like ancient terminal types that died out long ago. It just displays text. It doesn't handle things that tmux or similar could take care of, because tmux can do a better job at that than st ever could on its own.

Suckless tools are typically configured in C, the language they are written in. However as a side effect of suckless tools having their configuration baked into the executable at compile time, they start up instantly. If something goes wrong while using them, you can easily jump right into the code that implements them and nail down issues using basic debugger skills.

However, even though the window manager is meager, it still offers places for you to make it look beautiful. For examples of beautiful dwm setups, see this search of /r/unixporn on reddit.

I would like to walk through my dwm setup: how I have configured all of the parts at play, as well as an example of how I debug problems in my dwm config.

My dwm Config

As dwm is configured in C, there's also a community of people creating patches for dwm that add extra features like additional tiling methods, the ability to automatically start things with dwm, transparency for the statusbar and so much more. I use the following patches:

This combination of patches allows me to make things feel comfortable and predictable enough that I can rely entirely on muscle memory for most of my window management. Nearly all of it is done with the keyboard too.

Here is my config file. It's logically broken into two big sections:

  • Variables
  • Keybinds

I'll go into more detail about these below.


Variables

The main variables in my config control the following:

  • border width
  • size of the gaps when tiling windows
  • the snap width
  • system tray errata
  • the location of the bar
  • the fonts
  • colors
  • transparency values for the bar
  • workspace names (mine are based off of the unicode emoticon (ノ◕ヮ◕)ノ*:・゚✧)
  • app-specific hacks
  • default settings for the tiling layouts
  • if windows should be forced into place or not
  • window layouts

All of these things control various errata. As a side effect of making them all compile time constants, these settings don't have to be loaded into the program because they're already a part of it. I use the Hack font on my desktop and with emacs.


Keybinds

The real magic of tiling window managers is that all of the window management commands are done with my keyboard. Alt is the key I have devoted to controlling the window manager. All of my window manager control chords use the alt key.

Here are the main commands and what they do:

Command                               Effect
Alt-p                                 Spawn a program by name
Alt-Shift-Enter                       Open a new terminal window
Alt-b                                 Hide the bar if it is shown, show the bar if it is hidden
Alt-j                                 Move focus down the stack of windows
Alt-k                                 Move focus up the stack of windows
Alt-i                                 Increase the number of windows in the primary area
Alt-d                                 Decrease the number of windows in the primary area
Alt-h                                 Make the primary area smaller by 5%
Alt-l                                 Make the primary area larger by 5%
Alt-Enter                             Move the currently active window into the primary area
Alt-Tab                               Switch to the most recently active workspace
Alt-Shift-C                           Nicely ask a window to close
Alt-t                                 Select normal tiling mode for the current workspace
Alt-f                                 Select floating (non-tiling) mode for the current workspace
Alt-m                                 Select monocle (fullscreen active window) mode for the current workspace
Alt-u                                 Select bottom-stacked tiling mode for the current workspace
Alt-o                                 Select bottom-stacked horizontal tiling mode for the current workspace (useful on vertical monitors)
Alt-e                                 Open a new emacs window
Alt-Space                             Switch to the most recently used tiling method
Alt-Shift-Space                       Detach the currently active window from tiling
Alt-1 thru Alt-9                      Switch to a given workspace
Alt-Shift-1 thru Alt-Shift-9          Move the active window to a given workspace
Alt-0                                 Show all windows on all workspaces
Alt-Shift-0                           Show the active window on all workspaces
Alt-Comma and Alt-Period              Move focus to the other monitor
Alt-Shift-Comma and Alt-Shift-Period  Move the active window to the other monitor
Alt-Shift-q                           Uncleanly exit dwm and kill the session

This is just enough commands that I can get things done, but not so many that I get overwhelmed and forget what keybind does what. I have most of this committed to muscle memory (and had to look at the config file to write out this table), and as a result nearly all of my window management is done with my keyboard.

The rest of my config handles things like Alt-Right-Click to resize windows arbitrarily, signals with dwmc and other overhead like that.

The Other Parts

The rest of my desktop environment is built up using a few other tools that build on top of dwm. You can see the NixOS modules I've made for it here and here:

  • xrandr to set up my multiple monitors and rotation for them
  • feh to set my wallpaper
  • picom to handle compositing effects like transparency, blur and drop shadows for windows
  • pasystray for controlling my system volume
  • dunst for notifications
  • xmodmap for rebinding the caps lock key to the escape key
  • cabytcini to show the current time and weather in my dwm bar

Each of these tools has their own place in the stack and they all work together to give me a coherent and cohesive environment that I can use for Netflix, programming, playing Steam games and more.

cabytcini is a program I created for myself as part of my goal to get more familiar with Rust. As of the time of this post being written, it uses only 11 megabytes of ram and is configured using a config file located at ~/.config/cabytcini/gaftercu'a.toml. It scrapes data from the API server I use for my wall-mounted clock to show me the weather in Montreal. I've been meaning to write more about it, but it's currently only documented in Lojban.

Debugging dwm

Software is imperfect, even smaller programs like dwm can still have bugs in them. Here's the story of how I debugged and bisected a problem with my dwm config recently.

I had just gotten the second monitor set up and noticed that whenever I sent a window to it, the entire window manager seemed to get locked up. I tried sending the quit command to see if it would respond to that, and it failed. I opened up a virtual terminal with control-alt-F1 and logged in there, then I launched htop to see if the process was blocked.

It reported dwm was using 100% CPU. This was odd. I then decided to break out the debugger and see what was going on. I attached to the dwm process with gdb -p (pgrep dwm) and then ran bt full to see where it was stuck.

The backtrace revealed it was stuck in the drawbar() function. It was stuck in a loop that looked something like this:

for (c = m->clients; c; c = c->next) {
    occ |= c->tags;
    if (c->isurgent)
            urg |= c->tags;
}

dwm stores the list of clients per tag in a singly linked list, so the root cause could be related to a circular linked list somehow, right?
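
As an aside, the classic way to confirm a cycle in a singly linked list programmatically is Floyd's tortoise-and-hare algorithm; here is a minimal sketch in Go (the type is hypothetical, standing in for dwm's client list):

package main

import "fmt"

// node mimics a singly linked list like dwm's client list.
type node struct {
	next *node
}

// hasCycle walks the list at two speeds; the two pointers
// can only ever meet if the list loops back on itself.
func hasCycle(head *node) bool {
	slow, fast := head, head
	for fast != nil && fast.next != nil {
		slow = slow.next
		fast = fast.next.next
		if slow == fast {
			return true
		}
	}
	return false
}

func main() {
	a := &node{}
	b := &node{next: a}
	a.next = b // deliberately circular, like the buggy client list
	fmt.Println(hasCycle(a)) // true
}

In this case, though, GDB was the faster tool for the job.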

I decided to check this by printing c and c->next in GDB to see what was going on:

gdb> print c
gdb> print c->next

The linked list was circular. dwm was stuck iterating an infinite loop. I looked at the type of c and saw it was something like this:

struct Client {
	char name[256];
	float mina, maxa;
	float cfact;
	int x, y, w, h;
	int oldx, oldy, oldw, oldh;
	int basew, baseh, incw, inch, maxw, maxh, minw, minh;
	int bw, oldbw;
	unsigned int tags;
	int isfixed, isfloating, isurgent, neverfocus, oldstate, isfullscreen;
	Client *next;
	Client *snext;
	Monitor *mon;
	Window win;
};

So, next is a pointer to the next client (if it exists). Setting the pointer to NULL would probably break dwm out of the infinite loop. So I decided to test that by running:

gdb> set var c->next = 0x0

This set the next pointer to null. dwm immediately got unstuck and exited (apparently my quit command from earlier got buffered), causing the login screen to show up. I was able to conclude that something was wrong with my dwm setup.

I know this behavior worked on release versions of dwm, so I decided to load up KDE and then take a look at what was going on with Xephyr and git bisect.

I created two fake monitors with Xephyr:

$ Xephyr -br -ac -noreset -screen 800x600 -screen 800x600 +xinerama :1 &

And then started to git bisect my dwm fork:

$ cd ~/code/cadey/dwm
$ git bisect start
$ git bisect bad HEAD
$ git bisect good cb3f58ad06993f7ef3a7d8f61468012e2b786cab

I registered the bad commit (the current one) and the last known good commit (from when dwm 6.2 was released) and started to recreate the conditions of the hang.

I set the DISPLAY environment variable so that dwm would use the fake monitors:

$ export DISPLAY=:1

and then rebuilt/ran dwm:

$ make clean && rm config.h && make && ./dwm

Once I had dwm up and running, I created a terminal window and tried to send it to the other screen. If it worked, I marked the commit as good with git bisect good, and if it hung I marked the commit as bad with git bisect bad. Seven iterations later, I found out that the attachbelow patch was the culprit.

I reverted the patch on the master branch, rebuilt and re-ran dwm and tried to send the terminal window between the fake monitors. It worked every time. Then I committed the revert of attachbelow, pushed it to my NUR repo, and then rebuilt my tower's config once it passed CI.

Being a good internet citizen, I reported this to the suckless mailing list and then was able to get a reply back not only confirming the bug, but also with a patch for the patch to fix the behavior forever. I have yet to integrate this meta-patch into my dwm fork, but I'll probably get around to it someday.

This really demonstrates one of the core tenets of the suckless philosophy perfectly. I am not very familiar with how the dwm codebase works, but I am able to dig into its guts and diagnose/fix things because it is intentionally kept as simple as possible.

If you use Linux on a desktop/laptop, I highly suggest taking a look at suckless software and experimenting with it. It is super optimized for understandability and hacking, which is a huge breath of fresh air these days.

gitea-release Tool Announcement

Permalink - Posted on 2020-05-31 00:00

gitea-release Tool Announcement

I'm a big fan of automating things that can possibly be automated. One of the biggest pains that I've consistently had is creating/tagging releases of software. This has been a very manual process for me: I have to write up changelogs, bump versions and then replicate the changelog/versions in the web UI of whatever git forge the project in question is using. This works great at smaller scales, but can quickly become a huge pain in the butt when this needs to be done more often. Today I've written a small tool to help me automate this going forward, named gitea-release. This is one of my largest Rust projects to date and something I am incredibly happy with. I will be using it going forward for all of my repos on my gitea instance tulpa.dev.

gitea-release is a spiritual clone of the tool github-release, but optimized for my workflow. The biggest changes are that it works on gitea repos instead of github repos, is written in Rust instead of Go and it automatically scrapes release notes from CHANGELOG.md as well as reading the version of the software from VERSION.


The CHANGELOG.md file is based on the Keep a Changelog format, but modified slightly to make it easier for this tool. Here is an example changelog that this tool accepts:

# Changelog
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## 0.1.0


- Refrobnicate the spurious rilkefs

## 0.0.1

First release, proof of concept.

When a release is created for version 0.1.0, this tool will make the description of the release look as follows:


- Refrobnicate the spurious rilkefs

This allows the changelog file to be the ultimate source of truth for release notes with this tool.

The VERSION file plays into this as well. The VERSION file MUST be a single line containing a semantic version string. This allows the VERSION file to be the ultimate source of truth for software version data with this tool.
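
To make the cherry-picking step concrete, here is a rough sketch of the idea in Go (the real gitea-release is written in Rust; this is an illustration, not its actual code): grab everything under the "## <version>" heading until the next "## " heading.

package main

import (
	"fmt"
	"os"
	"strings"
)

// releaseNotes returns the changelog body for one version heading.
func releaseNotes(changelog, version string) string {
	var out []string
	in := false
	for _, line := range strings.Split(changelog, "\n") {
		if strings.HasPrefix(line, "## ") {
			// start capturing at our version, stop at any other heading
			in = strings.TrimSpace(strings.TrimPrefix(line, "## ")) == version
			continue
		}
		if in {
			out = append(out, line)
		}
	}
	return strings.TrimSpace(strings.Join(out, "\n"))
}

func main() {
	changelog, err := os.ReadFile("CHANGELOG.md")
	if err != nil {
		panic(err)
	}
	version, err := os.ReadFile("VERSION") // a single semantic version line
	if err != nil {
		panic(err)
	}
	fmt.Println(releaseNotes(string(changelog), strings.TrimSpace(string(version))))
}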

Release Process

When this tool is run with the release subcommand, the following actions take place:

  • The VERSION file is read and loaded as the desired tag for the repo
  • The CHANGELOG.md file is read and the changes for the VERSION are cherry-picked out of the file
  • The git repo is checked to see if that tag already exists
    • If the tag exists, the tool exits and does nothing
  • If the tag does not exist, it is created (with the changelog fragment as the body of the tag) and pushed to the gitea server using the supplied gitea token
  • A gitea release is created using the changelog fragment and the release name is generated from the VERSION string

Automation of the Automation

This tool works perfectly well locally, but this doesn't make it fully automated from the gitea repo. I use drone as a CI/CD tool for my gitea repos. Drone has a very convenient and simple to use plugin system that was easy to integrate with structopt.

I created a drone plugin at xena/gitea-release that can be configured as a pipeline step in your .drone.yml like this:

kind: pipeline
name: ci/release

steps:
  - name: whatever unit testing step
    # ...
  - name: auto-release
    image: xena/gitea-release:0.2.5
    settings:
      auth_username: cadey
      changelog_path: ./CHANGELOG.md
      gitea_server: https://tulpa.dev
      gitea_token:
        from_secret: GITEA_TOKEN
    when:
      event:
        - push
      branch:
        - master

This allows me to bump the VERSION and CHANGELOG.md, then push that commit to git and a new release will automatically be created. You can see an example of this in action with the drone build history of the gitea-release repo. You can also see how the CHANGELOG.md file grows with the CHANGELOG of gitea-release.

Once the release is pushed to gitea, you can then use drone to trigger deployment commands. For example here is the deployment pipeline used to automatically update the docker image for the gitea-release tool:

kind: pipeline
name: docker

steps:
  - name: build docker image
    image: "monacoremo/nix:2020-04-05-05f09348-circleci"
    environment:
      USER: root
    commands:
      - cachix use xe
      - nix-build docker.nix
      - cp $(readlink result) /result/docker.tgz
    volumes:
      - name: image
        path: /result
    when:
      event:
        - tag

  - name: push docker image
    image: docker:dind
    volumes:
      - name: image
        path: /result
      - name: dockersock
        path: /var/run/docker.sock
    commands:
      - docker load -i /result/docker.tgz
      - echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
      - docker push xena/gitea-release
    environment:
      DOCKER_USERNAME:
        from_secret: DOCKER_USERNAME
      DOCKER_PASSWORD:
        from_secret: DOCKER_PASSWORD
    when:
      event:
        - tag

volumes:
  - name: image
    temp: {}
  - name: dockersock
    host:
      path: /var/run/docker.sock
This pipeline will use Nix to build the docker image, load it into a Docker daemon and then log into the Docker Hub and push it. This can then be used to do whatever you want. It may also be a good idea to push a docker image for every commit and then re-label the tagged commits, but this wasn't implemented in this repo.

I hope this tool will be useful. I will accept feedback over any contact method. If you want to contribute directly to the project, please feel free to create issues or pull requests. If you don't want to create an account on my git server, get me the issue details or code diffs somehow and I will do everything I can to fix issues and integrate code. I just want to make this tool better however I can.

Be well.

ReConLangMo 8: Storytelling

Permalink - Posted on 2020-05-29 00:00

ReConLangMo 8: Storytelling

In the last episode of ReConLangMo, we covered conversational discourse as well as formality and other grammatical moods. I also covered my goals for the gender system of L'ewa. Here I will cover the closest thing L'ewa has to culture, the storytelling and poetry norms. L'ewa is also a language designed for spellcraft and sigil magick, so those norms will be covered too. This is a response to this prompt.


Stories

Stories are told as statements that happened in the past. Stories are structured in the same way that you would structure them in English. There is a scenario, a call to action, a refusal of the call, then the story goes on in the standard way. Casual retelling of events is done without a narrative, and the events are just relayed using casual sentences.

The particle qu can be repeated at the beginning of a story to enable the "story time" flag. Story time sentences can be figurative. Each sentence in the story progressively builds up the narrative to explain the themes and lessons that are trying to be conveyed.

Stories are told using the narrative present tense. Speakers also relay secondhand information directly.


Poetry

One of the morphological side effects of L'ewa root words is that only the first four letters of each word are unique. As a side effect of this, you can make any word rhyme with any other word if you want it to. L'ewa can become l'ewi, l'ewo, l'ewe or l'ewu if the poetry demands it. Poetry can be done in any meter or rhythm depending on the mood of the speaker. Poetry can also be formatted using fixed-width text. Here are a few examples:

le l'ewa de kirta
xi firga to renma

The language of Creators
is beneficial to all people

a'o ro zimpu ti
e'o so vorto

I hope you understand this
How many words?

I don't think I have enough vocabulary to make any more yet.


Sigils

These poems are worked into sigils by interlocking the words together into a larger figure. Here is an example based on the first poem:

Ideally this would include the letters spiraling around things, but my current tools are limited in what they can do. Sigils don't need to follow normal grammar rules. They can bend and break them as much as they want or need in order to flow nicer. If they need to, they can also make up words that don't normally exist in the dictionary. These words should be documented in the dictionary at some point, but there is no big rush.

Gender and Third Person Pronouns

Previously, I haven't gone into details about the third person pronouns in L'ewa. This was done very, very intentionally. One of the goals of L'ewa's handling of gender is to abolish the gender binary as much as possible. This means that any content word can end up being used as a pronoun. In order to avoid ambiguity, only part of the content word is used to form something matching the particle rules I was vaguely gesturing about in the post that had details about colors. To recap:

Compound words still need to be fleshed out, but generally all CVCCV words will have wordparts made out of the first, second and fifth letter, unless the vowel pair is illegal and all CCVCV words are the first, third and fifth letter unless this otherwise violates the morphology rules.

Let's say that your gender is the word for "is meat", or dextu. This would mean the third person pronoun form would be de'u (eu isn't a valid vowel pair so the glottal stop is used to break it).
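
The wordpart rule is mechanical enough to sketch in code. Here is a toy Go version (a hypothetical helper, not part of any real L'ewa tooling; the list of illegal vowel pairs is an assumption, since the posts only call out eu explicitly):

package main

import (
	"fmt"
	"strings"
)

// pairs the posts explicitly break up with a glottal stop
var illegalPairs = map[string]bool{"eu": true}

// pronoun derives a third person pronoun from a content word:
// CVCCV words keep letters 1, 2 and 5; CCVCV words keep 1, 3 and 5.
func pronoun(word string) string {
	r := []rune(word)
	var a, b, c rune
	if strings.ContainsRune("aeiou", r[1]) { // CVCCV like dextu
		a, b, c = r[0], r[1], r[4]
	} else { // CCVCV like mlato or spalo
		a, b, c = r[0], r[2], r[4]
	}
	if illegalPairs[string(b)+string(c)] {
		return fmt.Sprintf("%c%c'%c", a, b, c)
	}
	return fmt.Sprintf("%c%c%c", a, b, c)
}

func main() {
	fmt.Println(pronoun("dextu")) // de'u
	fmt.Println(pronoun("mlato")) // mao
	fmt.Println(pronoun("spalo")) // sao
}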

de'u qu tulpa lo l'ewa
They (de'u) built a language.

mao qu madsa lo spalo
They (mao) ate an apple.

If you want to declare your gender, you can declare it with the word zedra:

lo spalo xi zedra
An apple is (my) gender.

This would then make their pronoun sao.

You can ask someone what their gender is with the gender question particle zei:

<Mai> xoi
<Cadey> xoi ro zei
<Mai> lo mlato xi zedra
<Mai> zei
<Cadey> lo 'orka xi zedra

From then Mai would be referred to using the pronoun mao and Cadey would be referred to using the pronoun 'ka.

If you need a generic third person pronoun, use ke'o.

This seems to be the end of the ReConLangMo series in /r/conlangs, but I will definitely continue to develop this on my own and post about it as I make larger accomplishments. This has been a fun series and I hope it gave people a high level overview of what is needed to make a speakable language from nothing.

Be well.

ReConLangMo 7: Discourse

Permalink - Posted on 2020-05-25 00:00

ReConLangMo 7: Discourse

Previously on ReConLangMo, we covered a lot of new words for the lexicon of L'ewa. This helps to flesh out a lot of what can be said, but conversations themselves can be entirely different from formal sentences. Conversations flow and ebb based on the needs/wants of the interlocutors. This post will start to cover a lot of the softer skills behind L'ewa as well as cover some other changes I'm making under the hood. This is a response to this prompt.

Information Structure

L'ewa doesn't have any particular structure for marking previously known information, as normal sentences should suffice in most cases. Consider this paragraph:

I saw you eat an apple. Was it tasty?

Since an apple was the last thing mentioned in the paragraph, the vague "it" pronoun in the second sentence can be interpreted as "the apple".

L'ewa doesn't have a way to mark the topic of a sentence, that should be obvious from context (additional clauses to describe things will help here). In most cases the subject should be equivalent to the topic of a sentence.

L'ewa doesn't directly offer ways to emphasize parts of sentences with phonemic stress like English does (eg: "I THOUGHT you ate an apple" vs "I thought you ATE an apple"), but emotion words can be used to help indicate feelings about things, which should suffice as far as emphasis goes.

Discourse Structure

Conversationally, a lot of things in L'ewa grammar get dropped unless it's ambiguous. The I/yous that get tacked on in English are completely unneeded. A completely valid conversation could look something like this:

<Mai> xoi
<Cadey> xoi
<Mai> xoi madsa?
<Cadey> lo spalo

And it would roughly equate to:

<Mai> Hi
<Cadey> Hi, you doing okay?
<Mai> Yes, have you eaten?
<Cadey> Yes, I ate an apple

People know when they can speak after a sufficient pause between utterances. Interrupting is not common but not a social faux-pas, and can be used to stop a false assumption from being said.


Utterances

An utterance in L'ewa is anything from a single content word all the way up to an entire paragraph of sentences. An emotion particle can be a complete utterance. A question particle can be a complete utterance; anything can be an utterance. A speaker may want to choose more succinct options when the other detail is already contextually known or simply not relevant to the listener.

L'ewa has a few discourse particles, here are a few of the more significant ones:

L'ewa Function
xi signals that the verb of the sentence is coming next
ko ends a noun phrase
ka marks something as the subject of the sentence
ke marks something as the verb of the sentence
ku marks something as the object of the sentence


Formality

The informal dialect of L'ewa drops everything it can. The formal dialect retains everything it can, to the point where it includes noun phrase endings, the verb signaler, ka/ke/ku and every single optional particle in the language. The formal dialect will end up sounding rather wordy compared to informal slangy speech. Consider the differences between informal and formal versions of "I eat an apple":

mi madsa lo spalo.
ka mi ko xi ke madsa ku lo spalo ko.

Nearly all of those particles are not required in informal speech (you could even get away with madsa lo spalo depending on context), but are required in formal speech to ensure there is as little contextual confusion as possible. Things like laws or legal rulings would be written out in the formal register.

Greetings and Farewell

"Hello" in L'ewa is said using xoi. It can also be used as a reply to hello similar to «ça va» in French. It is possible to have an entire conversation with just xoi:

<Mai> xoi
<Cadey> xoi
<Mai> xoi

The other implications of xoi are "how are you?" "I am good, you?", "I am good", etc. If more detail is needed beyond this, then it can be supplied instead of replying with xoi.

"Goodbye" is said using xei. Like xoi it can be used as a reply to another goodbye and can form a mini-conversation:

<Cadey> xei
<Mai> xei
<Cadey> xei

Emotion Words

Feelings in L'ewa are marked with a family of particles called "UI". These can also be modified with other particles. Here are the emotional markers:

L'ewa English
a'a attentive
a'e alertness
ai intent
a'i effort
a'o hope
au desire
a'u interest
e'a permission
e'e competence
ei obligation
e'i constraint
e'o request
e'u suggestion
ia belief
i'a acceptance
ie agreement
i'e approval
ii fear
i'i togetherness
io respect
i'o appreciation
iu love
i'u familiarity
o'a pride
o'e closeness
oi complaint/pain
o'i caution
o'o patience
o'u relaxation
ua discovery
u'a gain
ue surprise
u'e wonder
ui happiness
u'i amusement
uo completion
u'o courage
uu pity
u'u repentant

If an emotion is unknown in a conversation, you can ask with kei:

<Mai> xoi, so kei?
      hi,  what-verb what-feeling?

<Cadey> madsa ui
        eating :D

This system is wholesale stolen from Lojban.


Connectives

Connectives exist to link noun phrases and verbs together into larger noun phrases and verbs. They can also be used to link together sentences. There are five simple connectives: fa (OR), fe (AND), fi (connective question), fo (if-and-only-if) and fu (whether-or-not).


Or

ro au madsa lo spalo fa lo hafto?
Do you want to eat an apple or an egg?


And

ro au madsa lo spalo fe lo hafto?
Do you want to eat an apple and an egg?

If and Only If

ro 'amwo mi fo mi madsa hafto?
Do you love me if I eat eggs?

Whether or Not

mi 'amwo ro. fu ro madsa hafto.
I love you, whether or not you eat eggs.

Connective Question

ro au madsa lo spalo fi lo hafto?
Do you want to eat apples and/or eggs?

Changes Being Made to L'ewa

Early on, I mentioned that family terms were gendered. This also ended up with me making some gendered terms for people. I have since refactored out all of the gendered terms in favor of more universal terms. Here is a table of some of the terms that have been replaced:

English L'ewa term L'ewa word
brother/sister sibling xinga
mother/father parent pa'ma
grandfather/grandmother grandparent gra'u
aunt/uncle parent pa'ma
cousin sibling xinga
man/woman Creator kirta
man/woman human renma

In some senses, gender exists. In other senses, gender does not. With L'ewa I want to explore what is possible with language. It would be interesting to create a language where gender can be discussed as it is, not as the categories that it has historically fit into. Consider colors. There are millions of colors, all slightly different, but many follow general patterns. No one or two colors can be thought of as the "default" color, yet we can have long and meaningful conversations about what color is and what separates colors from each other.

I aim to have the same kind of granularity in L'ewa. As a goal of the language, I should be able to point to any content word in the dictionary and say "that's my gender", in the same way I could use that word to describe color or music. These will implicitly be metaphors (which does detract a bit from the logical stance L'ewa normally takes) because gender is almost always a metaphor in practice. L'ewa will not have binary gender.

Issue number two on the L'ewa repo will help track the creation and implementation of a truly non-binary "gender" system for L'ewa.

I've been chugging through the Swadesh list more and more to build up more of L'ewa's vocabulary in preparation for starting to translate sentences more complicated than simple "I eat an apple" or "Do you like eating plants?". One of the first things I want to translate is the classic tower of babel story.

Be well.

maybedoer: the Maybe Monoid for Go

Permalink - Posted on 2020-05-23 00:00

maybedoer: the Maybe Monoid for Go

I recently posted (a variant of) this image of some Go source code to Twitter and it spawned some interesting conversations about what it does, how it works and why it needs to exist in the first place:

the source code of package maybedoer

This file is used to sequence functions that could fail together, allowing you to avoid doing an if err != nil check on every single fallible function call. There are two major usage patterns for it.
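
Since the package source only appears as an image above, here is a minimal sketch of what a package like this could look like, reconstructed from the usage shown below (the real maybedoer may differ in details, such as how Maybe threads its context):

package maybedoer

import "context"

// Doer is a single fallible step in a pipeline.
type Doer func(context.Context) error

// Impl chains Doers together, remembering the first error.
type Impl struct {
	Doers []Doer
	err   error
}

// Maybe runs fn only if no earlier step has failed.
func (i *Impl) Maybe(fn Doer) {
	if i.err == nil {
		i.err = fn(context.Background())
	}
}

// Error reports the first error encountered, if any.
func (i *Impl) Error() error { return i.err }

// Do runs each Doer in order, stopping at the first failure.
func (i *Impl) Do(ctx context.Context) error {
	for _, fn := range i.Doers {
		if i.err != nil {
			break
		}
		i.err = fn(ctx)
	}
	return i.err
}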

The first one is the imperative pattern, where you call it like this:

md := new(maybedoer.Impl)

var data []byte

md.Maybe(func(context.Context) error {
  var err error
  data, err = ioutil.ReadFile("/proc/cpuinfo")
  return err
})

// add a few more maybe calls?

if err := md.Error(); err != nil {
  ln.Error(ctx, err, ln.Fmt("cannot munge data in /proc/cpuinfo"))
}

The second one is the iterative pattern, where you call it like this:

func gitPush(repoPath, branch, to string) maybedoer.Doer {
  return func(ctx context.Context) error {
    // the repoPath, branch and to variables are available here
    return nil
  }
}

func repush(ctx context.Context) error {
  repoPath, err := ioutil.TempDir("", "")
  if err != nil {
    return fmt.Errorf("error making checkout: %v", err)
  }

  md := maybedoer.Impl{
    Doers: []maybedoer.Doer{
      gitConfig, // assume this is implemented
      gitClone(repoPath, os.Getenv("HEROKU_APP_GIT_REPO")), // and this too
      gitPush(repoPath, "master", os.Getenv("HEROKU_GIT_REMOTE")),
    },
  }

  err = md.Do(ctx)
  if err != nil {
    return fmt.Errorf("error repushing Heroku app: %v", err)
  }

  return nil
}

Both of these ways allow you to sequence fallible actions without having to write if err != nil after each of them, making this easily scale out to arbitrary numbers of steps. The design of this is inspired by a package used at a previous job where we used it to handle a lot of fiddly fallible actions that need to happen one after the other.

However, this version differs because of the Doers element of maybedoer.Impl. This allows you to specify an entire process of steps as long as those steps don't return any values. This is very similar to how Haskell's Data.Monoid.First type works, except in Go this is locked to the error type (due to the language not letting you describe things as precisely as you would need to get an analog to Data.Monoid.First). This is also similar to Rust's and_then combinator.

If we could return values from these functions, this would make maybedoer closer to being a monad in the Haskell sense. However we can't so we are locked to one specific instance of a monoid. I would love to use this for a pointer (or pointer-like) reference to any particular bit of data, but interface{} doesn't allow this because interface{} matches literally everything:

var foo = []interface{}{
  "hi there",
  errors.New("this works too!"),
}
This could mean that if we changed the type of a Doer to be:

type Doer func(context.Context) interface{}

Then it would be difficult to know how to handle returns from the function. Arguably we could write some mechanism to check if it is an error:

result := do(ctx)
if result != nil {
  switch result.(type) {
  case error:
    return result // result is of type error magically
  default:
    md.result = result // stash the non-error value for the next step
  }
}

But then it would be difficult to know how to pipe the result into the next function, unless we change Doer's type to be:

type Doer func(context.Context, interface{}) interface{}

Which would require code that looks like this:

func getNumber(ctx context.Context, _ interface{}) interface{} {
  return 2
}

func double(ctx context.Context, num interface{}) interface{} {
  switch num := num.(type) {
  case int:
    return num * 2
  default:
    return fmt.Errorf("wanted num to be an int, got: %T", num)
  }
}

But this kind of repetition would be required for every function. I don't really know what the best way to solve this in a generic way would be, but I'm fairly sure that these fundamental limitations in Go prevent this package from being genericized to handle function outputs and inputs beyond what you can do with currying (and maybe clever pointer usage).

I would love to be proven wrong though. If anyone can take this source code under the MIT license and prove me wrong, I will stand corrected and update this blogpost with the solution.

This kind of thing is easier to solve in Rust with its Result type; arguably the entire problem this Go package solves is irrelevant in Rust, because the solution is already in Rust's standard library.

ReConLangMo 6: Lexicon

Permalink - Posted on 2020-05-22 00:00

ReConLangMo 6: Lexicon

Previously in this series, we've covered a lot of details about how sentences work, tenses get marked and how words work in general; however this doesn't really make L'ewa a language. Most of the difficulty in making a language like this is the vocabulary. In this post I'll be describing how I am making the vocabulary for L'ewa and I'll include an entire table of the dictionary words. This answers this prompt.

Word Distinctions

L'ewa is intended to be a logical language. One of the side effects of L'ewa being a logical language is that each word should have as minimal and exact of a meaning/function as possible. English has lots of words that cover large semantic spaces (like go, set, run, take, get, turn, good, etc.) without much of a pattern to it. I don't want this in L'ewa.

Let's take the word "good" as an example. Off the top of my head, good can mean any of the following things:

  • beneficial
  • aesthetically pleasing
  • favorful taste
  • saintly (coincidentally this is the source of the idiom "God is good")
  • healthy

I'm fairly sure there are more "senses" of the word good, but let's break these into their own words:

L'ewa Definition
firgu is beneficial/nice to
n'ixu is aesthetically pleasing to
flawo is tasty/has a pleasant flavor to
spiro is saintly/holy/morally good to
qanro is healthy/fit/well/in good health

Each of these words has a very distinct and fine-grained meaning, even though the range is a bit larger than it would be in English. These words also differ from a lot of the other words in the L'ewa dictionary so far because they can take an object. Most of the words so far are adjective-like because it doesn't make sense for there to be an object attached to the color blue.

By default, if a word that can take an object doesn't have one, it's assumed to be obvious from context. For example, consider the following set of sentences:

mi qa madsa lo spalo. ti flawo!

I am eating an apple. It's delicious!

I am working at creating more words using a Swadesh list.

Family Words

Family words are a huge part of a language because it encodes a lot about the culture behind that language. L'ewa isn't really intended to have much of a culture behind it, but the one place I want to take a cultural stance is here. The major kinship word is kirta, or "is an infinite slice of an even greater infinite". This is one of the few literal words in L'ewa that is defined using a metaphor, as there is really no good analog for this in English.

There are also words for other major family terms in English:

L'ewa Definition
brota is the/a brother of
sistu is the/a sister of
mamta is the/a mother of
patfu is the/a father of
grafa is the/a grandfather of
grama is the/a grandmother of
wanto is the/a aunt of
tunke is the/a uncle of

Cousins are all called brother/sister. None of these words are inherently gendered and brota can refer to a female or nonbinary person. The words are separate because I feel it flows better, for now at least.


Idioms

L'ewa strives to have as few idioms as possible. If something is meant non-literally (or as a conceptual metaphor), the particle ke'a can be used:

ti firgu
This is beneficial

ti ke'a firgu
This is metaphorically/non-literally beneficial

I have been documenting L'ewa and all of its words/grammar in a git repo. The layout of this repo is as follows:

Folder Purpose
book The source files and build scripts for the L'ewa book (this book may end up being published)
nix Nix crud, custom packages for the eBook render and development tools
script Where experiments for the written form of L'ewa live
tools Tools for hacking at L'ewa in Rust/Typescript (none published yet, this is where the dictionary server code will live)
words Where the definitions of each word are defined in Dhall, this will be fed into the dictionary server code

I also have the entire process of building and testing everything (from the eBook to the unit tests of the tools) automated with Drone. You can see the past builds here. After I merge the information from the latest blogpost into this repo, I will put a rendered version of it here. This will allow you to browse through the chapters of the eBook while it is being written. Eventually this will be automatically deployed to my Kubernetes cluster and the book will be a subpath/subdomain of lewa.christine.website.

I have created a system of defining words that allows you to focus on each word at once, but then fit it back into the greater whole of the language. For example here is kirta.dhall:

-- kirta.dhall
let ContentWord = ../types/ContentWord.dhall

in  ContentWord::{
    , word = "kirta"
    , gloss = "Creator"
    , definition =
        "is an infinite slice of an even greater infinite/our Creator/a Creator"

This is put in words/roots because it is a root (or uncombined) word. Then it is added to the dictionary.dhall:

-- dictionary.dhall
let ContentWord = ./types/ContentWord.dhall

let ParticleWord = ./types/ParticleWord.dhall

in  { rootWords =
      [ -- ...
      ]
    , particles =
      [ -- ...
      ]
    }

And then the build process will automatically generate the new dictionary from all of these definitions. The downside of this is that each new kind of word needs subtle adjustments to the build process of the dictionary, and that removals or changes to lots of words require a larger-scale refactor of the language, but I feel the tradeoff is worth the effort. I will undoubtedly end up creating a few tools to help with this.

I will keep working on additional vocabulary on my own, but here is the list of vocabulary that has been written up so far.

Be well.

How HTTP Requests Work

Permalink - Posted on 2020-05-19 00:00

How HTTP Requests Work

Reading this webpage is possible because of millions of hours of effort with tens of thousands of actors across thousands of companies. At some level it's a minor miracle that this all works at all. Here's a preview into the madness that goes into hitting enter on christine.website and this website being loaded.


The user types in https://christine.website into the address bar and hits enter on the keyboard. This sends a signal over USB to the computer and the kernel polls the USB controller for a new message. It's recognized as from the keyboard. The input is then sent to the browser through an input driver talking to a windowing server talking to the browser program.

The browser selects the memory region normally reserved for the address bar. The browser then parses this string as an RFC 3986 URI and scrapes out the protocol (https), hostname (christine.website) and path (/). The browser then uses this information to create an abstract HTTP request object with the Host header set to christine.website, HTTP method (GET), and path set to the path. This request object then passes through various layers of credential storage and middleware to add the appropriate cookies and other headers in order to tell my website what language it should localize the response to, what compression methods the browser understands, and what browser is being used to make the request.


DNS

The browser then checks if it has a connection to christine.website open already. If it does not, then it creates a new one. It creates a new connection by figuring out what the IP address of christine.website is using DNS. A DNS request is made over UDP on port 53 to the DNS server configured in the operating system. The UDP connection is created using operating system-dependent system calls and a DNS request is sent.

The packet that was created then is destined for the DNS server and added to the operating system's output queue. The operating system then looks in its routing table to see where the packet should go. If the packet matches a route, it is queued for output to the relevant network card. The network card layer then checks the ARP table to see what MAC address the ethernet frame should be sent to. If the ARP table doesn't have a match, then an ARP probe is broadcasted to every node on the local network. Then the driver waits for an ARP response to be sent to it with the correct IP -> MAC address mapping. The driver then uses this information to send out the ethernet frame to the node that matches the IP address in the routing table.

From there the packet is validated on the router it was sent to. It then unwraps the packet to the IP layer to figure out the destination network interface to use. If this router also does NAT termination, it creates an entry in the NAT table for future use for a site-configured amount of time (for UDP at least). It then passes the packet on to the correct node and this process is repeated until it gets to the remote DNS server.

The DNS server then unwraps the ethernet frame into an IP packet, then into a UDP packet and a DNS request. It checks its database for a match and if one is not found, it attempts to discover the correct name server to contact by using an NS record query to its upstreams or the authoritative name server for the WEBSITE namespace. This then creates another process of ethernet frames and UDP packets until it reaches the upstream DNS server which hopefully should reply with the correct address. Once the DNS server gets the information that is needed, it sends the results back to the client as a wire-format DNS response.

UDP is unreliable by design, so this packet may or may not survive the entire round trip. It may take one or more retries for the DNS information to get to the remote server and back, but it usually works the first time. The response to this request is cached based on the time-to-live specified in the DNS response. The response also contains the IP address of christine.website.


TCP and TLS

The protocol used in the URL determines which TCP port the browser connects to. If it is http, it uses port 80. If it is https, it uses port 443. The user specified HTTPS, so port 443 on whatever IP address DNS returned is dialed using the operating system's network stack system calls. The TCP three-way handshake is started with that target IP address and port. The client sends a SYN packet, the server replies with a SYN ACK packet and the client replies with an ACK packet. This indicates that the entire TCP session is active and data can be transferred and read through it.

However, this data is UNENCRYPTED by default. Transport Layer Security is used to encrypt this data so prying eyes can't look into it. TLS has its own handshake too. The session is established by sending a TLS ClientHello packet with the domain name (christine.website), the list of ciphers the client supports, any application layer protocols the client supports (like HTTP/2) and the list of TLS versions that the client supports. This information is sent over the wire to the remote server using that entire long and complicated process that I spelled out for how DNS works, except a TCP session requires the other side to acknowledge when data is successfully received. The server on the other end replies with a ServerHello that contains an HTTPS certificate and the list of protocols and ciphers the server supports. Then they do an encryption session setup rain dance that I don't completely understand, and the resulting channel is encrypted with cipher (or encrypted) text written and read from the wire; a session layer translates that cipher text to clear text for the other parts of the browser stack.

The browser then uses the information in the ServerHello to decide how to proceed from here.
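
To make the DNS and TLS steps concrete, here is a rough sketch using Go's standard library as a stand-in for the browser internals (a real browser stack is far more involved):

package main

import (
	"crypto/tls"
	"fmt"
	"net"
)

func main() {
	// DNS: resolve the hostname to one or more IP addresses
	addrs, err := net.LookupIP("christine.website")
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved to:", addrs)

	// TCP + TLS: dial port 443 and run the TLS handshake, sending the
	// hostname in the ClientHello and offering HTTP/2 via ALPN
	conn, err := tls.Dial("tcp", "christine.website:443", &tls.Config{
		NextProtos: []string{"h2", "http/1.1"},
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("negotiated protocol:", conn.ConnectionState().NegotiatedProtocol)
}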


HTTP

If the browser notices the server supports HTTP/2 it sets up an HTTP/2 session (with a handshake that involves a few roundtrips like what I described for DNS) and creates a new stream for this request. The browser then formats the request as HTTP/2 wire format bytes (binary format) and writes it to the HTTP/2 stream, which writes it to the HTTP/2 framing layer, which writes it to the encryption layer, which writes it to the network socket and sends it over the internet.

If the browser notices the server DOES NOT support HTTP/2, it formats the request as HTTP/1.1 wire formatted bytes and writes it to the encryption layer, which writes it to the network socket and sends it over the internet using that complicated process I spelled out for DNS.

This then hits the remote load balancer which parses the client HTTP request and uses site-local configuration to select the best application server to handle the response. It then forwards the client's HTTP request to the correct server by creating a TCP session to that backend, writing the HTTP request and waiting for a response over that TCP session. Depending on site-local configuration there may be layers of encryption involved.

Application Server

Now, the request finally gets to the application server. This TCP session is accepted by the application server and the headers are read into memory. The path is read by the application server and the correct handler is chosen. The HTML for the front page of christine.website is rendered and written to the TCP session and travels to the load balancer, gets encrypted with TLS, the encrypted HTML gets sent back over the internet to your browser and then your browser decrypts it and starts to parse and display the website. The browser will run into places where it needs more resources (such as stylesheets or images), so it will make additional HTTP requests to the load balancer to grab those too.

The end result is that the user sees the website in all its glory. Given all these moving parts it's astounding that this works as reliably as it does. Each of the TCP, ARP and DNS requests also happen at each level of the stack. There are layers upon layers upon layers of interacting protocols and implementations.

This is why it is hard to reliably put a website on the internet. If there is a god, they are surely the one holding all these potentially unreliable systems together to make everything appear like it is working.

ReConLangMo 5: Sentence Structure

Permalink - Posted on 2020-05-18 00:00

ReConLangMo 5: Sentence Structure

The last post in this series was more of a grammar dump with few concrete examples or much details about things (mostly because of a lack of vocabulary to make examples with). I'll fix this in the future, but for now let's continue on with sentence structure goodness. This is a response to this prompt.

Independent Clause Structure

Most of the time L'ewa sentences have only one clause. This can be anything from a single verb to a subject, verb and object. However, sometimes more information is needed. Consider this sentence:

The dog which is blue is large.

This kind of a relative clause would be denoted using hoi, which would make the sentence roughly the following in L'ewa:

le wufra hoi blanu xi brado.

The particle xi is needed here in order to make it explicit that the subject noun-phrase has ended.

Similarly, an incidental relative clause is done with joi:

le  wufra  joi              blanu    ke brado
the dog,   which by the way is blue,    is big.


Questions

There are a few ways to ask questions in L'ewa. They correlate to the different kinds of things that the speaker could want to know.


ma is the particle used to fill in a missing/unknown noun phrase. Consider these sentences:

ma   blanu?
what is blue?

ro  qa madsa   ma?
you are eating what?


no is the particle used to fill in a missing/unknown verb. Consider these sentences:

ro no?
How are you doing?

le wufra xi no?
The dog did what?


so is the particle used to ask questions about numbers, similar to the "how many" construct in English.

ro madsa so spalo?
You ate how many apples?

le so zasko xi qa'te glowa
How many plants grow quickly?

Color Words

L'ewa uses a RGB color system like computers. The basic colors are red, green and blue, with some other basic ones for convenience:

English L'ewa
blue blanu
red delja
green qalno
yellow yeplo
teal te'ra
pink hetlo
black xekri
white pu'ro
50% gray flego

Colors will be mixed by creating compound words between base colors. Compound words still need to be fleshed out, but generally all CVCCV words will have wordparts made out of the first, second and fifth letter, unless the vowel pair is illegal, and all CCVCV words are the first, third and fifth letter unless this otherwise violates the morphology rules. Like I said though, this really needs to be fleshed out and this is only a preview for now.

For example a light green would be puoqa'o (pu'ro qalno, white-green).

I hit a snag while hacking at the tooling for making word creation and the like easier. I am still working on it, but most of my word creation is manual and requires me to keep a phonology information document up on my monitor while I sound things out. As part of writing this article I had to add the letters f and r to L'ewa for the word wufra.

I am documenting my work for this language here. This repo will build the grammar book PDF, website and eBook. This will also be the home of the word generation, similarity calculation, dictionary and (eventually) automatic translation tools. I am also documenting each of the words in the language in their own files that will feed into the grammar book generation. More on this when I have more of a coherent product!

ReConLangMo 4: Noun and Verb Morphology

Permalink - Posted on 2020-05-15 00:00

ReConLangMo 4: Noun and Verb Morphology

Last time on ReConLangMo I covered word order and some of the finer points about how sentences work. This time we are covering how nouns and verbs get modified (some languages call this conjugation or declension). This is a response to this prompt.

Other Noun Things

At a high level, noun-phrases can be marked for direct ownership or number. The general pattern is like this:

<article> [pronoun] [negation] [number] <verb>


Pronouns

Here are some of the pronouns:

English L'ewa
me, I mi
My system and I mi'a
you ro
we (all-inclusive) mi'o
your system and you ro'a
This (near me) ti
That (near you) ta
That (far away) tu


Numbers are in base six. Here are a few numerals:

Decimal Seximal L'ewa
0 0 zo
1 1 ja
2 2 he
3 3 xu
4 4 ho
5 5 qi
6 10 jazo
36 100 gau
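
Positional base-six numbers are easy to play with in code. Here is a toy Go sketch that spells integers using the digit words above, assuming multi-digit numbers are formed by stringing digit words together the way jazo ("10") suggests (gau shows the real system also has special words for larger powers):

package main

import "fmt"

// digit words from the table above
var digits = []string{"zo", "ja", "he", "xu", "ho", "qi"}

// lewaNumber spells n out in base-six digit words.
func lewaNumber(n int) string {
	if n == 0 {
		return digits[0]
	}
	out := ""
	for n > 0 {
		out = digits[n%6] + out
		n /= 6
	}
	return out
}

func main() {
	fmt.Println(lewaNumber(6)) // "jazo", seximal 10, matching the table
	fmt.Println(lewaNumber(9)) // "jaxu", seximal 13
}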

Here are a few non-numerals-but-technically-still-numbers-I-guess:

English L'ewa
all to
some ra'o
number-question so


Negation

As L'ewa is more of a logical language, it has several forms of negation. Here are a few:

English L'ewa
contradiction na
total scalar negation na'o
particle negation nai

na can be placed before the sentence's verb too:

ti na spalo
This is something other than an apple

Verb Forms

Verbs have one form in L'ewa. Aspects like tense or the perfective aspect are marked with particles. Here's a table of the common ones:

English L'ewa
past tense qu
present tense qa
future tense qo
perfective aspect qe


Modality

Modality is going to be expressed with emotion words. These words have not been assigned yet, but their grammar will be a lot looser than the normal L'ewa particle grammar. They will allow any two vowels in any combination that might otherwise make them not "legal" for particles.

  • VV (ii)
  • V'V (i'i)

Explicitly Ending Noun Phrases

In case it is otherwise confusing, ko can be used to end noun phrases grammatically.

I will probably be fleshing this out some more, but for now this is how all of this works.

ReConLangMo 3: Morphosyntactic Typology

Permalink - Posted on 2020-05-11 00:00

ReConLangMo 3: Morphosyntactic Typology

In the last post of this series, we covered the sounds and word patterns of L'ewa. This time we are covering morphosyntactic typology, or how words and sentences are formed out of root words, details about sentences, word order and those kinds of patterns. I'll split each of these into their own headings so it's a bit easier to grok. This is a response to this prompt.

Word Order

L'ewa is normally a Subject-Verb-Object (SVO) language like English. However, the word order of a sentence can be changed if it is important to specify some part of the sentence in particular.

I haven't completely finalized the particles for this, but I'd like to use ka to denote the subject, ke to denote the verb and ku to denote the object. For example if the input sentence is something like:

/mi/ /mad.sa/ /lo/ /spa.lo/
mi   madsa    lo   spalo
 I   eat      an   apple

You could emphasize the eating with:

/kɛ/ /mad.sa/ /ka/ /mi/ /lo/ /spa.lo/
[ke] madsa    ka   mi   lo   spalo
V    eat      S    I    an   apple

(the ke is in square brackets here because it is technically not required, but it can make more sense to be explicit in some cases)

or the apple with:

/ku/ /lo/ /spa.lo/ /kɛ/ /mad.sa/ /mi/
ku   lo   spalo   ke   madsa    mi
O    an   apple   V    eat      I

L'ewa doesn't really have adjectives or adverbs in the normal Indo-European sense, but it does have a way to analytically combine meanings together. For example if qa'te is the word for is fast/quick/rapid in rate, then saying you are quickly eating (or wolfing food down) would be something like:

/qaʔ.tɛ/          /mad.sa/
qa'te             madsa
is fast [kind of] eat

These are assumed to be metaphorical by default. It's not always clear what someone would mean by a fast kind of language (would they be referencing Speedtalk?)

L'ewa doesn't always require a subject or object if it can be figured out from context. You can just say "rain" instead of "it's raining". By default, the first word in a sentence without an article is the verb. The ka/ke/ku series needs to be used if the word order deviates from Subject-Verb-Object (it functions a lot like the selma'o FA from Lojban).

Morphological Typology

L'ewa is an analytic language. Every single word has only one form and particles are used to modify the meaning or significance of words. There are only two word classes: content and particles.


Morphosyntactic Alignment

L'ewa is a nominative-accusative language. Other particles may be introduced in the future to help denote the relations that exist in other alignments, but I don't need them yet.

Word Classes

As said before, L'ewa only has two word classes, content (or verbs) and particles to modify the significance or relations between content. There is also a hard limit of two arguments per verb, which should help avoid the problems that Lojban has with its inconsistent usage of the x3, x4 and x5 places.

As the content words are all technically verbs, there is no real need for a copula. The ka/ke/ku series can also help to break out of other things that modify "noun-phrases" (when those things exist). There are also no nouns, adjectives or adverbs, because analytically combining words completely replaces the need for them.

Nouns and verbs do not inflect for numbers. If numbers are needed they can be provided, otherwise the default is to assume "one or more".


Writing

I am still working on the finer details of the conscript for L'ewa, but here is a sneak preview of the letter forms I am playing with (this image below might not render properly in light mode):

The letters in the L'ewa conscript

My inspirations for this script were zbalermorna, Hangul, Hanzi, Katakana, Greek, international computer symbols, traditional Japanese art and the International Phonetic Alphabet.

This script is very decorative, and is primarily intended to be used in spellcraft and other artistic uses. It will probably show up in my art from time to time, and will definitely show up in any experimental video production that I work on in the future. I will go into more detail about this in the future, but here is my prototype. Please do let me know what you think about it.

As a side note, the words madsa, spalo and qa'te are now official L'ewa words, I guess. The entire vocabulary of the language can now be listed below:

Content Words

L'ewa word IPA English
l'ewa /lʔ.ɛwa/ is a language
madsa /mad.sa/ eats/is eating
qa'te /qaʔ.tɛ/ is fast/quick/rapid in rate
zasko /ʒa.sko/ is a plant/is vegetation
spalo /spa.lo/ is an apple


Particles

L'ewa word IPA English
lo /lo/ a, an, indefinite article
le /lɛ/ the, definite article
ka /ka/ subject marker
ke /kɛ/ verb marker
ku /ku/ object marker
mi /mi/ the current speaker

Gamebridge: Fitting Square Pegs into Round Holes since 2020

Permalink - Posted on 2020-05-09 00:00

Gamebridge: Fitting Square Pegs into Round Holes since 2020

Recently I did a stream called Twitch Plays Super Mario 64. During that stream I both demonstrated and hacked on a tool I'm calling gamebridge. Gamebridge is a tool that lets games interoperate with programs they really shouldn't be able to interoperate with.

Gamebridge works by aggressively hooking into a game's input logic (through a custom controller driver) and uses a pair of Unix fifos to communicate between it and the game it is controlling. Overall the flow of data between the two programs looks like this:

A diagram explaining how control/state/data flows between components of the gamebridge stack

You can view the source code of this diagram in GraphViz dot format here.

The main magic that keeps this glued together is the use of blocking I/O. This means that the bridge input thread will be blocked at the kernel level for the vblank signal to be written, and the game will also be blocked at the kernel level for the bridge input thread to write the desired input. This effectively uses the Linux kernel to pass around a scheduling quantum like you would in the L4 microkernel. This design consideration also means that gamebridge has to perform as fast as possible as much as possible, because it realistically only has a few hundred microseconds at best to respond with the input data to avoid humans noticing any stutter. As such, gamebridge is written in Rust.
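
Here is a toy sketch of that blocking-fifo handshake in Go (the fifo names are hypothetical and the real gamebridge is Rust, but the kernel-level blocking behavior is the same): reading a fifo parks the process in the kernel until the other side writes, so the two programs hand a "turn" back and forth with no polling.

package main

import "os"

func main() {
	// assumes both fifos were created beforehand, e.g. with mkfifo(1)
	vblank, err := os.Open("vblank.fifo") // blocks until the game opens it for writing
	if err != nil {
		panic(err)
	}
	input, err := os.OpenFile("input.fifo", os.O_WRONLY, 0) // blocks until the game reads
	if err != nil {
		panic(err)
	}

	buf := make([]byte, 4)
	for {
		// sleep in the kernel until the game announces a new frame...
		if _, err := vblank.Read(buf); err != nil {
			panic(err)
		}
		// ...then hand back four bytes of controller input for that frame
		if _, err := input.Write([]byte{0, 0, 0, 0}); err != nil {
			panic(err)
		}
	}
}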


Goals

When implementing gamebridge, I had a few goals in mind:

  • Use blocking I/O to have the kernel help with this
  • Use threads to their fullest potential
  • Unix fifos are great, let's use them
  • Understand linear interpolation better
  • Create a surreal demo on Twitch
  • Only have one binary to start, the game itself

As a first step of implementing this, I went through the source code of the Mario 64 PC port (but in theory this could also work for other games or even Nintendo 64 emulators with enough work) and began to look for anything that might be useful to understand how parts of the game work. I stumbled across src/pc/controller and then found two gems that really stood out. I found the interface for adding new input methods to the game and an example input method that read from tool-assisted speedrun recordings. The controller input interface itself is a thing of beauty; I've included a copy of it below:

// controller_api.h

#include <ultra64.h>

struct ControllerAPI {
    void (*init)(void);
    void (*read)(OSContPad *pad);
};


All you need to implement your own input method is an init function and a read function. The init function is used to set things up and the read function is called every frame to get inputs. The tool-assisted speedrunning input method seemed to conform to the Mupen64 demo file spec as described on tasvideos.org, and I ended up using this to help test and verify ideas.

The thing that struck me was how simple the format was. Every frame of input uses its own four-byte sequence. The constants in the demo file spec also helped greatly as I figured out ways to bridge into the game from Rust. I ended up creating two bitflag structs to help with the button data, which ended up almost being a 1:1 copy of the Mupen64 demo file spec:

bitflags! {
    // 0x0100 Digital Pad Right
    // 0x0200 Digital Pad Left
    // 0x0400 Digital Pad Down
    // 0x0800 Digital Pad Up
    // 0x1000 Start
    // 0x2000 Z
    // 0x4000 B
    // 0x8000 A
    pub(crate) struct HiButtons: u8 {
        const NONE = 0x00;
        const DPAD_RIGHT = 0x01;
        const DPAD_LEFT = 0x02;
        const DPAD_DOWN = 0x04;
        const DPAD_UP = 0x08;
        const START = 0x10;
        const Z_BUTTON = 0x20;
        const B_BUTTON = 0x40;
        const A_BUTTON = 0x80;
    }
}


This is where things get interesting. One of the more interesting side effects of getting inputs over chat for a game like Mario 64 is that you need to hold buttons or even the analog stick in order to do things like jumping into paintings or on ledges. When you get inputs over chat, you only have them for one frame. Therefore you need some kind of analog input (or an emulation of that) that decays over time. One approach you can use for this is linear interpolation (or lerp).

I implemented support for both button and analog stick lerping using a struct I call a Lerper (the file it lives in is named au.rs because .au. is the Lojban emotion-particle for "to desire"; the name was inspired by it seeming to fake what the desired inputs were).

At its core, a Lerper stores a few basic things:

  • the current scalar of where the analog input is resting
  • the frame number when the analog input was set to the max (or above)
  • the maximum number of frames that the lerp should run for
  • the goal (or where the end of the linear interpolation is, for most cases in this codebase the goal is 0, or neutral)
  • the maximum possible output to return on apply()
  • the minimum possible output to return on apply()

Every frame, the lerpers for every single input to the game will get applied down closer to zero. Mario 64 uses two signed bytes to represent the controller input. The maximum/minimum clamps make sure that the lerped result stays in that range.
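
The decay itself is just the classic lerp formula, a + (b - a) * t, clamped to the stick's range. Here is a sketch of the idea in Go (the real Lerper is Rust; the field names here are illustrative):

package main

import "fmt"

type lerper struct {
	value     float64 // scalar the input was last slammed to
	setFrame  int     // frame when that happened
	maxFrames int     // how many frames the decay should take
	goal      float64 // where we decay to (0 = neutral)
	max, min  float64 // output clamps; Mario 64 sticks are signed bytes
}

// apply returns the input value for a frame, linearly interpolated
// between the peak and the goal, clamped to the legal stick range.
func (l *lerper) apply(frame int) float64 {
	t := float64(frame-l.setFrame) / float64(l.maxFrames)
	if t > 1 {
		t = 1
	}
	v := l.value + (l.goal-l.value)*t // a + (b - a) * t
	if v > l.max {
		v = l.max
	}
	if v < l.min {
		v = l.min
	}
	return v
}

func main() {
	stick := lerper{value: 127, maxFrames: 270, max: 127, min: -128}
	for _, f := range []int{0, 135, 270} {
		fmt.Printf("frame %d: %.1f\n", f, stick.apply(f))
	}
}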

Twitch Integration

This is one of the first times I have ever used asynchronous Rust in conjunction with synchronous rust. I was shocked at how easy it was to just spin up another thread and have that thread take care of the Tokio runtime, leaving the main thread to focus on input. This is the block of code that handles running the asynchronous twitch bot in parallel to the main thread:

pub(crate) fn run(st: MTState) {
    use tokio::runtime::Runtime;

    let mut rt = Runtime::new().expect("Failed to create Tokio runtime");
    // park this thread on the async Twitch bot, leaving the main
    // thread free to feed controller inputs to the game
    rt.block_on(async move {
        // ... Twitch bot setup and chat loop (using `st`) elided ...
    });
}

Then the rest of the Twitch integration is boilerplate until we get to the command parser. At its core, it just splits each chat line up into words and looks for keywords:

let chatline = msg.data.to_string();
let chatline = chatline.to_ascii_lowercase();
let mut data = st.write().unwrap();
const BUTTON_ADD_AMT: i64 = 64;

for cmd in chatline.to_string().split(" ").collect::<Vec<&str>>().iter() {
    match *cmd {
        "a" => data.a_button.add(BUTTON_ADD_AMT),
        "b" => data.b_button.add(BUTTON_ADD_AMT),
        "z" => data.z_button.add(BUTTON_ADD_AMT),
        "r" => data.r_button.add(BUTTON_ADD_AMT),
        "cup" => data.c_up.add(BUTTON_ADD_AMT),
        "cdown" => data.c_down.add(BUTTON_ADD_AMT),
        "cleft" => data.c_left.add(BUTTON_ADD_AMT),
        "cright" => data.c_right.add(BUTTON_ADD_AMT),
        "start" => data.start.add(BUTTON_ADD_AMT),
        "up" => data.sticky.add(127),
        "down" => data.sticky.add(-128),
        "left" => data.stickx.add(-128),
        "right" => data.stickx.add(127),
        "stop" => {data.stickx.update(0); data.sticky.update(0);},
        _ => {},
    }
}

This implements the following commands:

Command Meaning
a Press the A button
b Press the B button
z Press the Z button
r Press the R button
cup Press the C-up button
cdown Press the C-down button
cleft Press the C-left button
cright Press the C-right button
start Press the start button
up Press up on the analog stick
down Press down on the analog stick
left Press left on the analog stick
right Press right on the analog stick
stop Reset the analog stick to center

Currently analog stick inputs will stick for about 270 frames and button inputs will stick for about 20 frames before drifting back to neutral. The start button is special, inputs to the start button will stick for 5 frames at most.


Debugging

Debugging two programs running together is surprisingly hard. I had to resort to the tried-and-true method of using gdb for the main game code and excessive amounts of printf debugging in Rust. The pretty_env_logger crate (which internally uses the env_logger crate, and its environment variable configures pretty_env_logger) helped a lot. One of the biggest problems I encountered in developing it was fixed by this patch, which I will paste inline:

diff --git a/gamebridge/src/main.rs b/gamebridge/src/main.rs
index 426cd3e..6bc3f59 100644
@@ -93,7 +93,7 @@ fn main() -> Result<()> {
-                sticky = match stickx {
+                sticky = match sticky {
                     0 => sticky,
                     127 => {
                         ymax_frame = data.frame;

Somehow I had been trying to adjust the y axis position of the stick by comparing the x axis position of the stick. Finding and fixing this bug is what made me write the Lerper type.

Altogether, this has been a very fun project. I've learned a lot about 3d game design, historical source code analysis and inter-process communication. I also learned a lot about asynchronous Rust and how it can work together with synchronous Rust. I also got to make a fairly surreal demo for Twitch. I hope this can be useful to others, even if it just serves as an example of how to integrate things into strange other things from unixy first principles.

You can find out slightly more about gamebridge on its GitHub page. Its repo also includes patches for the Mario 64 PC port source code, including one that disables Mario's ability to lose lives. This could prove useful for Twitch Plays attempts; the default cap of 5 lives became rather limiting in testing.

Be well.

ReConLangMo 2: Phonology & Writing

Permalink - Posted on 2020-05-08 00:00

ReConLangMo 2: Phonology & Writing

Continuing from the last post, one of the next steps in this process is to outline the phonology and basic phonotactics of L'ewa. A language's phonology is the set of sounds that are allowed to be in words. The phonotactics of a language help people understand where the boundaries between syllables are. I will then describe my plans for the L'ewa orthography and how L'ewa is romanized. This is a response to the prompt made here.


I am taking inspiration from Lojban, Esperanto, Mandarin Chinese and English to design the phonology of L'ewa. All of the phonology will be defined using the International Phonetic Alphabet. If you want to figure out how to pronounce these sounds, a lazy trick is to google them. Wikipedia will have a perfectly good example to use as a reference. There are two kinds of sounds in L'ewa, consonants and vowels.


Consonant inventory: /d f g h j k l m n p q s t w ʃ ʒ ʔ ʙ̥/

Manner/Place          Bilabial  Alveolar  Palato-alveolar  Palatal  Velar  Labio-velar  Uvular  Glottal
Nasal                 m         n
Stop                  p         t d                                 k g                 q       ʔ
Fricative             f         s         ʃ ʒ                                                   h
Approximant                                                j               w
Trill                 ʙ̥         r
Lateral approximant             l

The weirdest consonant is /ʙ̥/, which is a voiceless bilabial trill: blowing air through your lips without your vocal cords vibrating. This is intended to imitate a noise an orca would make.


Vowel inventory: /a ɛ i o u/

Diphthongs: au, oi, ua, uɛ, uo, ai, ɛi

          Front  Back
High      i      u
High-mid         o
Low-mid   ɛ
Low       a


I plan to have two main kinds of words in L'ewa: content words and particle words. The content words will refer to things, properties, or actions (such as tool, red, run) and the particle words will change how the grammar of a sentence works (such as the or prepositions).

The main kind of content word is a root word, and they will be in the following forms:

  • CVCCV (/ʒa.sko/)
  • CCVCV (/l.ʔɛ.wa/)

Particles will mostly fall into the following forms:

  • V (/a/)
  • VV (/ai/)
  • CV (/ba/)
  • CVV (/bai/)
  • CV'V (/ba.ʔi/)

Proper names should end with consonants, but there is no hard requirement.

L'ewa is a stressed language, with stress on the second-to-last (penultimate) syllable. For example, the word "zasko" would be pronounced "ZAsko".

Syllables end on stop consonants if one is present in a consonant cluster. Two stop consonants cannot follow each other in a row.
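
Since these word shapes are regular, they are easy to machine-check. Here is a hypothetical sketch (using the regex crate and the romanized letters from the table below) of validating root-word candidates; none of this is existing L'ewa tooling:

use regex::Regex;

/// Check whether a romanized candidate fits the two root-word shapes.
fn is_root_word(word: &str) -> bool {
    let c = "[bdfghklmnpqrstwxyz']"; // romanized consonants
    let v = "[aeiou]"; // romanized vowels
    let cvccv = Regex::new(&format!("^{c}{v}{c}{c}{v}$", c = c, v = v)).unwrap();
    let ccvcv = Regex::new(&format!("^{c}{c}{v}{c}{v}$", c = c, v = v)).unwrap();
    cvccv.is_match(word) || ccvcv.is_match(word)
}

fn main() {
    assert!(is_root_word("zasko")); // CVCCV
    assert!(is_root_word("l'ewa")); // CCVCV
    println!("both candidate roots match");
}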


I haven't completely fleshed this part out yet, but I want the writing system of L'ewa to be an abugida. This is a kind of written script that has the consonants make the larger shapes but the vowels are small diacritics over the consonants. If the word creation process is done right, you can actually omit the vowels entirely if they are not relevant.

I plan to have this script be written by hand with pencils/pen and typed into computers, just like English. This script will also be a left-to-right script like English.


L'ewa's romanization is intentionally simple. Most of the IPA symbols map directly to the matching Latin letters; the ones that do not are listed below:

Pronunciation Spelling
/j/ y
/ɛ/ e
/ʃ/ x
/ʒ/ z
/ʔ/ '
/ʙ̥/ b

This is designed to make every letter typeable on a standard US keyboard, as well as mapping as many letters as possible on the home row of a QWERTY keyboard.

I am still working on the tooling for word creation and the like. I plan to use the Swadesh lists (this site is having certificate issues at the time of writing this post) to help guide the creation of a base vocabulary. I will go into more detail in the future.

Super Bootable 64

Permalink - Posted on 2020-05-06 00:00

Super Bootable 64

Super Mario 64 was the launch title of the Nintendo 64 in 1996. This game revolutionized an entire generation and everything following it by delivering fast, smooth and fun 3d platforming gameplay to gamers all over the world. This game is still played today by speedrunners, who do everything from beating it while collecting every star, the minimum number of stars normally required, or 0 stars, to beating it without pressing the A (jump) button.

Because it was a launch title, the SDK used to develop it was pre-release and had an optimization bug that forced the game to ship without optimizations due to random crashes (watch the linked video for more information than I can summarize here). Remember that the Nintendo 64 shipped games on write-once ROM cartridges, so any bug that could cause the game to crash randomly was fatal.

When compiling something without optimizations, the output binary is effectively a 1:1 translation of the input source code. This means that exceptionally clever people could theoretically decompile your binary and recreate source code that compiles to a byte-for-byte identical copy of your program. But surely nobody would do that, that would be crazy, wouldn't it?

Noooo! You can't just port a Nintendo 64 game to LibGL! They're completely different hardware! It wouldn't respect the wishes of the creators! Hahaha porting machine go brrrrrrrr

Someone did. The fruits of this effort are available here. This was mostly a proof of concept and is a masterpiece in its own right. However, because it was decompiled, this means that the engine itself could theoretically be ported to run on any other platform such as Windows, Linux, the Nintendo Switch or even a browser.

Someone did this and ended up posting it on 4chan. Thanks to a friend, I got my hands on the Linux-compatible source code of this port and made an archive of it on my git server. My fork of it has only minimal changes needed for it to build in NixOS.

nixos-generators is a tool that lets you create custom NixOS system definitions based on a NixOS module as input. So, let's create a bootable ISO of Super Mario 64 running on Linux!


You will need an amd64 Linux system. NixOS is preferable, but any Linux system should theoretically work. You will also need the following things:

  • baserom.us.z64 (the release rom of Super Mario 64, US version 1.0) with a sha1 sum of 9bef1128717f958171a4afac3ed78ee2bb4e86ce
  • nixos-generators installed (nix-env -f https://github.com/nix-community/nixos-generators/archive/master.tar.gz -i)

So, let's begin by creating a folder named boot2sm64:

$ mkdir ~/code/boot2sm64

Then let's create a file called configuration.nix and put some standard boilerplate into it:

# configuration.nix

{ pkgs, lib, ... }:

{
  networking.hostName = "its-a-me";
}

And then let's add dwm as the window manager. This setup will be a little bit more complicated because we are going to need to add a custom configuration as well as a patch to the source code for auto-starting Super Mario 64. Create a folder called dwm and run the following commands in it to download the config we need and the autostart patch:

$ mkdir dwm
$ cd dwm
$ wget -O autostart.patch https://dwm.suckless.org/patches/autostart/dwm-autostart-20161205-bb3bd6f.diff
$ wget -O config.h https://gist.githubusercontent.com/Xe/f5fae8b7a0d996610707189d2133041f/raw/7043ca2ab5f8cf9d986aaa79c5c505841945766c/dwm_config.h

And then add the following before the opening curly brace:

{ pkgs, lib, ... }:

let
  dwm = with pkgs;
    let name = "dwm-6.2";
    in stdenv.mkDerivation {
      inherit name;

      src = fetchurl {
        url = "https://dl.suckless.org/dwm/${name}.tar.gz";
        sha256 = "03hirnj8saxnsfqiszwl2ds7p0avg20izv9vdqyambks00p2x44p";
      };

      buildInputs = with pkgs; [ xorg.libX11 xorg.libXinerama xorg.libXft ];

      prePatch = ''sed -i "s@/usr/local@$out@" config.mk'';
      postPatch = ''
        cp ${./dwm/config.h} ./config.h
      '';

      patches = [ ./dwm/autostart.patch ];

      buildPhase = " make ";

      meta = {
        homepage = "https://suckless.org/";
        description = "Dynamic window manager for X";
        license = stdenv.lib.licenses.mit;
        maintainers = with stdenv.lib.maintainers; [ viric ];
        platforms = with stdenv.lib.platforms; all;
      };
    };
in {
  environment.systemPackages = with pkgs; [ hack-font st dwm ];

  networking.hostName = "its-a-me";
}

Now let's create the mario user:

  # ...
  users.users.mario = { isNormalUser = true; };

  system.activationScripts = {
    base-dirs = {
      text = ''
        mkdir -p /nix/var/nix/profiles/per-user/mario
      '';
      deps = [ ];
    };
  };

  services.xserver.windowManager.session = lib.singleton {
    name = "dwm";
    start = ''
      ${dwm}/bin/dwm &
      waitPID=$!
    '';
  };

  services.xserver.enable = true;
  services.xserver.displayManager.defaultSession = "none+dwm";
  services.xserver.displayManager.lightdm.enable = true;
  services.xserver.displayManager.lightdm.autoLogin.enable = true;
  services.xserver.displayManager.lightdm.autoLogin.user = "mario";

The autostart file is going to be located at /home/mario/.dwm/autostart.sh. We could try to place it manually on the filesystem with a NixOS module, or we could use home-manager to do it for us. Let's do the latter. First, install home-manager:

$ nix-channel --add https://github.com/rycee/home-manager/archive/release-20.03.tar.gz home-manager
$ nix-channel --update

Then let's add home-manager to this config:

  # ...

  imports = [ <home-manager/nixos> ];

  home-manager.users.mario = { config, pkgs, ... }: {
    home.file = {
      ".dwm/autostart.sh" = {
        executable = true;
        text = ''
          export LIBGL_ALWAYS_SOFTWARE=1 # will be relevant later
        '';
      };
    };
  };

Now, for the crème de la crème of this project, let's build Super Mario 64. You will need to get the base rom into your system's Nix store somehow. A half-decent way to do this is with quickserv:

$ nix-env -if https://tulpa.dev/Xe/quickserv/archive/master.tar.gz
$ cd /path/to/folder/with/baserom.us.z64
$ quickserv -dir . -port 9001 &
$ nix-prefetch-url http://127.0.0.1:9001/baserom.us.z64

This will pre-populate your Nix store with the rom and should return the following hash:

148xna5lq2s93zm0mi2pmb98qb5n9ad6sv9dky63y4y68drhgkhp

If this hash is wrong, then you need to find the correct rom. I cannot help you with this.

Now, let's create a simple derivation for the Super Mario 64 PC port. I have a tweaked version that is optimized for NixOS, which we will use for this. Add the following between the dwm package define and the in statement:

# ...
  sm64pc = with pkgs;
    let
      baserom = fetchurl {
        url = "http://127.0.0.1:9001/baserom.us.z64";
        sha256 = "148xna5lq2s93zm0mi2pmb98qb5n9ad6sv9dky63y4y68drhgkhp";
      };
    in stdenv.mkDerivation rec {
      pname = "sm64pc";
      version = "latest";

      buildInputs = [
        # ...
      ];

      src = fetchgit {
        url = "https://tulpa.dev/saved/sm64pc";
        rev = "c69c75bf9beed9c7f7c8e9612e5e351855065120";
        sha256 = "148pk9iqpcgzwnxlcciqz0ngy6vsvxiv5lp17qg0bs7ph8ly3k4l";
      };

      buildPhase = ''
        chmod +x ./extract_assets.py
        cp ${baserom} ./baserom.us.z64
        # ...
      '';

      installPhase = ''
        mkdir -p $out/bin
        cp ./build/us_pc/sm64.us.f3dex2e $out/bin/sm64pc
      '';

      meta = with stdenv.lib; {
        description = "Super Mario 64 PC port, requires rom :)";
      };
    };
# ...

And then add sm64pc to the system packages:

  # ...
  environment.systemPackages = with pkgs; [ st hack-font dwm sm64pc ];
  # ...

As well as to the autostart script from before:

  # ...
  home-manager.users.mario = { config, pkgs, ... }: {
    home.file = {
      ".dwm/autostart.sh" = {
        executable = true;
        text = ''
          export LIBGL_ALWAYS_SOFTWARE=1
          sm64pc
        '';
      };
    };
  };


Finally, let's enable some hardware support so it's easier to play this bootable game:

  # ...
  hardware.pulseaudio.enable = true;
  virtualisation.virtualbox.guest.enable = true;
  virtualisation.vmware.guest.enable = true;

Altogether you should have a configuration.nix that looks like this.

So let's build the ISO!

$ nixos-generate -f iso -c configuration.nix

Much output later, you will end up with a path that will look something like this:

/nix/store/fzk3psrd3m6x437m6xh9pc7bnv2v44ax-nixos.iso

This is your bootable image of Super Mario 64. Copy it to a good temporary folder (like your downloads folder):

$ cp /nix/store/fzk3psrd3m6x437m6xh9pc7bnv2v44ax-nixos.iso/iso/nixos.iso ~/Downloads/mario64.iso

Now you are free to do whatever you want with this, including booting it in a virtual machine.

This is why I use NixOS. It enables me to do absolutely crazy things like creating a bootable ISO of Super Mario 64 without having to understand how to create ISO files by hand or how Linux bootloaders work in ISO files.

It Just Works.

ReConLangMo 1: Name, Context, History

Permalink - Posted on 2020-05-05 00:00

ReConLangMo 1: Name, Context, History

I've been curious about how language works for a very long time. This curiosity has led me down many fascinating rabbit holes, but for a long time I have either been cribbing off of other people's work or studying natural languages that don't have a cohesive plan or core to them. Constructed languages (or conlangs, as I will probably be calling them from here on out) are a simpler model of this. You might be familiar with Klingon from the Star Trek series, the various forms of Elvish as described by J. R. R. Tolkien or Dothraki from Game of Thrones. This series will show an example of how one of those kinds of languages is created.

Recently a challenge came up on /r/conlangs called ReConLangMo and I've decided to take a stab at this and flesh this out into a personal language.

This post will be the first in a series (with articles to be listed below) and is following the prompt made here.

L'ewa Overview

The language I am going to create will be called L'ewa (/l.ʔɛ.wa/, also romanized lewa for filesystems). This word is identical in English and in L'ewa. It means "is a language". The name came to me in the shower a while ago and I'm not entirely sure where it came from.

This language is being designed as a personal language to help me keep a diary (more on that later) and to act as a testbed for writing a computational knowledge engine, much like IBM's Watson. I do not expect anyone else to use this language. I may pull this language into fiction (if that ever gets off the ground) or into other projects as it makes sense.

Some of the high level things I want to try in this language are ways to make me think differently. I'm following the weak form of the Sapir-Whorf hypothesis by this logic. I want to see what would happen if I give myself a tool that I can use to help myself think in different ways. Other features I plan to include are:

  • A seximal (base six) number system
  • A predicate-argument system similar to Lojban
  • Nounlessness (only having verbs for content words) like Salishan languages
  • An a-priori (or made up) vocabulary
  • Grammatical markers for the identity of the thinker of a sentence/phrase/word
  • Make each grammatical feature and word logical, or working in one way only
  • Typeable with standard QWERTY en-US keyboards
  • A decorative script that I'll turn into a font

L'ewa as a Diary Language

When I was younger, I used to keep a diary/journal file on my computers off and on. I was detailed about what I was feeling and what I was considering and going through. This all ended abruptly after my parents were snooping through my computer in middle school and discovered that I was questioning fundamental aspects of myself like my gender. I have never really felt comfortable keeping a diary file since then. I have made a few attempts at this (including by using a dedicated diary machine, air-gapped TempleOS machines and the like), but they all feel too vulnerable and open for anyone to read them.

This is my logic for using a language that I create for myself. If people really want to go through and take the time to learn the ins and outs of a tool I created for myself to archive my personal thoughts, they probably deserve to be able to read them. Otherwise, this would allow me to write my diary from pretty much anywhere, even in plain sight out in public. People can't shoulder-surf and read what they literally cannot understand.

I plan to continue going through this series as the prompts come out and will put my responses on my blog along with explanations, analysis and sample code (where relevant). I will probably also reformat these posts (and relevant dictionary files) to an eBook and later into a reference grammar book.

Like I said though, this project is for myself. I do not expect this language to change the world for anyone but me. Let's see where this rabbit hole goes.

My NixOS Desktop Flow

Permalink - Posted on 2020-04-25 00:00

My NixOS Desktop Flow

Before I built my current desktop, I had been using a 2013 Mac Pro for at least 7 years. This machine has seen me through living in a few cities (Bellevue, Mountain View and Montreal), but it was starting to show its age. Its 12 core Xeon is really no slouch (scoring about 5 minutes in my "compile the linux kernel" test), but with Intel security patches it was starting to get slower and slower as time went on.

So in March (just before the situation started) I ordered the parts for my new tower and built my current desktop machine. From the start, I wanted it to run Linux and have 64 GB of RAM, mostly so I could write and test programs without having to worry about RAM exhaustion.

When the parts were almost in, I had decided to really start digging into NixOS. Friends on IRC and Discord had been trying to get me to use it for years, and I was really impressed with a simple setup that I had in a virtual machine. So I decided to jump head-first down that rabbit hole, and I'm honestly really glad I did.

NixOS is built on a more functional approach to package management called Nix. Parts of the configuration can be easily broken off into modules that can be reused across machines in a deployment. If Ansible or other tools like it let you customize an existing Linux distribution to meet your needs, NixOS allows you to craft your own Linux distribution around your needs.

Unfortunately, the Nix and NixOS documentation is a bit more dense than most other Linux programs/distributions are, and it's a bit easy to get lost in it. I'm going to attempt to explain a lot of the guiding principles behind Nix and NixOS and how they fit into how I use NixOS on my desktop.

What is a Package?

Earlier, I mentioned that Nix is a functional package manager. This means that Nix views packages as a combination of inputs to get an output:

A nix package is the metadata, the source code, the build instructions and some patches as input to a derivation to create a package

This is how most package managers work (even things like Windows installer files), but Nix goes a step further by disallowing package builds to access the internet. This allows Nix packages to be a lot more reproducible; meaning if you have the same inputs (source code, build script and patches) you should always get the same output byte-for-byte every time you build the same package at the same version.

A Simple Package

Let's consider a simple example, the default.nix file of my gruvbox-inspired CSS:

{ pkgs ? import <nixpkgs> { } }:

pkgs.stdenv.mkDerivation {
  pname = "gruvbox-css";
  version = "latest";
  src = ./.;
  phases = "installPhase";
  installPhase = ''
    mkdir -p $out
    cp -rf $src/gruvbox.css $out/gruvbox.css
  '';
}

This creates a package named gruvbox-css with the version latest. Let's break this default.nix down line by line:

{ pkgs ? import <nixpkgs> { } }:

This creates a function that either takes in the pkgs object or tells Nix to import the standard package library nixpkgs as pkgs. nixpkgs includes a lot of utilities like a standard packaging environment, special builders for things like snaps and Docker images as well as one of the largest package sets out there.

pkgs.stdenv.mkDerivation {
  # ...

This runs the stdenv.mkDerivation function with some arguments in an object. The "standard environment" comes with tools like GCC, bash, coreutils, find, sed, grep, awk, tar, make, patch and all of the major compression tools. This means that our package builds can build C/C++ programs, copy files to the output, and extract downloaded source files by default. You can add other inputs to this environment if you need to, but for now it works as-is.

Let's specify the name and version of this package:

pname = "gruvbox-css";
version = "latest";

pname stands for "package name". It is combined with the version to create the resulting package name. In this case it would be gruvbox-css-latest.

Let's tell Nix how to build this package:

src = ./.;
phases = "installPhase";
installPhase = ''
  mkdir -p $out
  cp -rf $src/gruvbox.css $out/gruvbox.css
'';

The src attribute tells Nix where the source code of the package is stored. Sometimes this can be a URL to a compressed archive on the internet, sometimes it can be a git repo, but for now it's ./., the current working directory.

This is a CSS file, it doesn't make sense to have to build these, so we skip the build phase and tell Nix to directly install the package to its output folder:

mkdir -p $out
cp -rf $src/gruvbox.css $out/gruvbox.css

This two-liner shell script creates the output directory (usually exposed as $out) and then copies gruvbox.css into it. When we run this through Nix with nix-build, we get output that looks something like this:

$ nix-build ./default.nix
these derivations will be built:
building '/nix/store/c99n4ixraigf4jb0jfjxbkzicd79scpj-gruvbox-css.drv'...

And /nix/store/ng5qnhwyrk9zaidjv00arhx787r0412s-gruvbox-css is the output package. Looking at its contents with ls, we see this:

$ ls /nix/store/ng5qnhwyrk9zaidjv00arhx787r0412s-gruvbox-css
gruvbox.css

A More Complicated Package

For a more complicated package, let's look at the build directions of the website you are reading right now:

{ pkgs ? import (import ./nix/sources.nix).nixpkgs }:
with pkgs;

assert lib.versionAtLeast go.version "1.13";

buildGoPackage rec {
  pname = "christinewebsite";
  version = "latest";
  goPackagePath = "christine.website";
  src = ./.;
  goDeps = ./nix/deps.nix;
  allowGoReference = false;

  preBuild = ''
    export CGO_ENABLED=0
    buildFlagsArray+=(-pkgdir "$TMPDIR")
  '';

  postInstall = ''
    cp -rf $src/blog $bin/blog
    cp -rf $src/css $bin/css
    cp -rf $src/gallery $bin/gallery
    cp -rf $src/signalboost.dhall $bin/signalboost.dhall
    cp -rf $src/static $bin/static
    cp -rf $src/talks $bin/talks
    cp -rf $src/templates $bin/templates
  '';
}

Breaking it down, we see some similarities to the gruvbox-css package from above, but there's a few more interesting lines I want to point out:

{ pkgs ? import (import ./nix/sources.nix).nixpkgs }:

My website uses a pinned or fixed version of nixpkgs. This allows my website's deployment to be stable even if nixpkgs changes something that could cause it to break.

with pkgs;

With expressions are one of the more interesting parts of Nix. Essentially, they let you say "everything in this object should be put into scope". So if you have an expression that does this:

let
  foo = {
    ponies = "awesome";
  };
in with foo; "ponies are ${ponies}!"

You get the result "ponies are awesome!". I use with pkgs here to use things directly from nixpkgs without having to say pkgs. in front of a lot of things.

assert lib.versionAtLeast go.version "1.13";

This line will make the build fail if Nix is using any Go version less than 1.13. I'm pretty sure my website's code could function on older versions of Go, but the runtime improvements are important to it, so let's fail loudly just in case.

buildGoPackage {
  # ...

buildGoPackage builds a Go package into a Nix package. It takes in the Go package path, the list of dependencies and whether the resulting package is allowed to depend on the Go compiler.

It will then compile the Go program (and all of its dependencies) into a binary and put that in the resulting package. This website is more than just the source code; it also has assets like CSS files and the image earlier in the post. Those files are copied in the postInstall phase:

postInstall = ''
  cp -rf $src/blog $bin/blog
  cp -rf $src/css $bin/css
  cp -rf $src/gallery $bin/gallery
  cp -rf $src/signalboost.dhall $bin/signalboost.dhall
  cp -rf $src/static $bin/static
  cp -rf $src/talks $bin/talks
  cp -rf $src/templates $bin/templates
'';

This results in all of the files that my website needs to run existing in the right places.

Other Packages

For more kinds of packages that you can build, see the Languages and Frameworks chapter of the nixpkgs documentation.

If your favorite language isn't shown there, you can make your own build script and do it more manually. See here for more information on how to do that.

nix-env And Friends

Building your own packages is nice and all, but what about using packages defined in nixpkgs? Nix includes a few tools that help you find, install, upgrade and remove packages as well as nix-build to build new ones.

nix search

When looking for a package to install, use $ nix search name to see if it's already packaged. For example, let's look for graphviz, a popular diagramming software:

$ nix search graphviz

* nixos.graphviz (graphviz)
  Graph visualization tools

* nixos.graphviz-nox (graphviz)
  Graph visualization tools

* nixos.graphviz_2_32 (graphviz)
  Graph visualization tools

There are several results here! These are different because sometimes you may want some features of graphviz, but not all of them. For example, a server installation of graphviz wouldn't need X windows support.

The first line of the output is the attribute. This is the attribute that the package is imported to inside nixpkgs. This allows multiple packages in different contexts to exist in nixpkgs at the same time, for example with python 2 and python 3 versions of a library.

The second line is a description of the package from its metadata section.

The nix tool allows you to do a lot more than just this, but for now this is the most important thing.

nix-env -i

nix-env is a rather big tool that does a lot of things (similar to pacman in Arch Linux), so I'm going to break things down into separate sections.

Let's pick the nixos.graphviz attribute from before and install it using nix-env:

$ nix-env -iA nixos.graphviz
installing 'graphviz-2.42.2'
these paths will be fetched (5.00 MiB download, 13.74 MiB unpacked):
copying path '/nix/store/980jk7qbcfrlnx8jsmdx92q96wsai8mx-gts-0.7.6' from 'https://cache.nixos.org'...
copying path '/nix/store/s895dnwlprwpfp75pzq70qzfdn8mwfzc-lcms-1.19' from 'https://cache.nixos.org'...
copying path '/nix/store/jy35xihlnb3az0vdksyg9rd2f38q2c01-libdevil-1.7.8' from 'https://cache.nixos.org'...
copying path '/nix/store/fij1p8f0yjpv35n342ii9pwfahj8rlbb-graphviz-2.42.2' from 'https://cache.nixos.org'...
building '/nix/store/r4fqdwpicqjpa97biis1jlxzb4ywi92b-user-environment.drv'...
created 664 symlinks in user environment

And now let's see where the dot tool from graphviz is installed to:

$ which dot
/home/cadey/.nix-profile/bin/dot

$ readlink /home/cadey/.nix-profile/bin/dot

This lets you install tools into the system-level Nix store without affecting other users' environments, even if they depend on a different version of graphviz.

nix-env -e

nix-env -e lets you uninstall packages installed with nix-env -i. Let's uninstall graphviz:

$ nix-env -e graphviz

Now the dot tool will be gone from your shell:

$ which dot
which: no dot in (/run/wrappers/bin:/home/cadey/.nix-profile/bin:/etc/profiles/per-user/cadey/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin)

And it's like graphviz was never installed.

Notice that these package management commands are done at the user level because they are only affecting the currently logged-in user. This allows users to install their own editors or other tools without having to get admins involved.

Adding up to NixOS

NixOS builds on top of Nix and its command line tools to make an entire Linux distribution that can be perfectly crafted to your needs. NixOS machines are configured using a configuration.nix file that contains the following kinds of settings:

  • packages installed to the system
  • user accounts on the system
  • allowed SSH public keys for users on the system
  • services activated on the system
  • configuration for services on the system
  • magic unix flags like the number of allowed file descriptors per process
  • what drives to mount where
  • network configuration
  • ACME certificates

and so much more

At a high level, machines are configured by setting options like this:

# basic-lxc-image.nix
{ config, pkgs, ... }:

{
  networking.hostName = "example-for-blog";
  environment.systemPackages = with pkgs; [ wget vim ];
}

This would specify a simple NixOS machine with the hostname example-for-blog and with wget and vim installed. This is nowhere near enough to boot an entire system, but is good enough for describing the base layout of a basic LXC image.

For a more complete example of NixOS configurations, see here or repositories on this handy NixOS wiki page.

The main configuration.nix file (usually at /etc/nixos/configuration.nix) can also import other NixOS modules using the imports attribute:

# better-vm.nix
{ config, pkgs, ... }:

{
  imports = [ ./basic-lxc-image.nix ];

  networking.hostName = "better-vm";
  services.nginx.enable = true;
}

And the better-vm.nix file would describe a machine with the hostname better-vm that has wget and vim installed, but is also running nginx with its default configuration.

Internally, every one of these options will be fed into auto-generated Nix packages that will describe the system configuration bit by bit.


nixos-rebuild

One of the handy features about Nix is that every package exists in its own part of the Nix store. This allows you to leave the older versions of a package lying around so you can roll back to them if you need to. nixos-rebuild is the tool that helps you commit configuration changes to the system as well as roll them back.

If you want to upgrade your entire system:

$ sudo nixos-rebuild switch --upgrade

This tells nixos-rebuild to upgrade the package channels, use those to create a new base system description, switch the running system to it and start/restart/stop any services that were added/upgraded/removed during the upgrade. Every time you rebuild the configuration, you create a new "generation" of configuration that you can roll back to just as easily:

$ sudo nixos-rebuild switch --rollback

Garbage Collection

As upgrades happen and old generations pile up, this may end up taking up a lot of unwanted disk (and boot menu) space. To free up this space, you can use nix-collect-garbage:

$ sudo nix-collect-garbage
< cleans up packages not referenced by anything >

$ sudo nix-collect-garbage -d
< deletes old generations and then cleans up packages not referenced by anything >

The latter is a fairly powerful command and can wipe out older system states. Only run this if you are sure you don't want to go back to an older setup.

How I Use It

Each of these things builds on top of each other to make the base platform that I built my desktop environment on. I have the configuration for my shell, emacs, my window manager and just about every program I use on a regular basis defined in their own NixOS modules, so I can pick and choose things for new machines.

When I want to change part of my config, I edit the files responsible for that part of the config and then rebuild the system to test it. If things work properly, I commit those changes and then continue using the system like normal.

This is a little bit more work in the short term, but as a result I get a setup that is easier to recreate on more machines in the future. It took me a half hour or so to get the configuration for zathura right, but now I have a zathura module that lets me get exactly the setup I want every time.


Nix and NixOS ruined me. It's hard to go back.

Chicken Stir Fry

Permalink - Posted on 2020-04-13 00:00

Chicken Stir Fry

This recipe was made up by me and my fiancé. We just sorta winged it every time we made it until we found something that was easy to cook and tasty. We make this every week or so.



  • Pack of 4 chicken breasts
  • A fair amount of Montreal seasoning (garlic, onion, salt, oregano)
  • 3 cups basmati rice
  • 3.75 cups water
  • 1/4th bag of frozen stir fry vegetables
  • Avocado/coconut oil
  • Standard frying pan
  • Standard chef's knife
  • Standard 11x14 cutting board
  • Two metal bowls
  • Instant Pot
  • Spatula


Put the seasoning in one of the bowls and unwrap the plastic around the chicken breasts. Take each chicken breast out of the package (you may need to cut them free of each other; use a sharp knife for that) and rub all sides of it around in the seasoning.

Put these into the other metal bowl and when you've done all four, cover with plastic wrap and refrigerate for about 5-6 hours.

Doing this helps the chicken soak up the flavor of the seasoning so it tastes better when you cook it.


Slice two chicken breasts up kinda like this and then transfer to the heated pan with oil in it. Cook those and flip them every few minutes until everything is cooked all the way through (randomly sampling by cutting a bit of chicken in half with the spatula and seeing if it's overly juicy is a good way to tell, or use a food thermometer: 165 degrees fahrenheit or 75 degrees celsius). Put this chicken into a plastic container for use in other meals (it goes really well on sandwiches).

Then repeat the slicing and cooking for the last two chicken breasts. However, this time put half of the chicken into the plastic container you used before (about one chicken breast worth in total, it doesn't have to be exact). At the same time as the second round of chicken is cooking, put about 3 cups of rice and 3.75 cups of water into the instant pot; then seal it and set it to manual for 4 minutes.

Dump frozen vegetables on top of the remainder of the chicken and stir until the vegetables are warm.


Serve the stir fry hot on a bed of rice.

image of the food

pa'i Benchmarks

Permalink - Posted on 2020-03-26 00:00

pa'i Benchmarks

In my last post I mentioned that pa'i was faster than Olin's cwa binary written in Go without giving any benchmarks. I've been working on new ways to gather and visualize these benchmarks, and here they are.

Benchmarking WebAssembly implementations is slightly hard. A lot of existing benchmark tools simply do not run in WebAssembly as is, not to mention inside the Olin ABI. However, I have created a few tasks that I feel represent common tasks that pa'i (and later wasmcloud) will run:

  • compressing data with Snappy
  • parsing JSON
  • parsing yaml
  • recursive Fibonacci number calculation
  • blake-2 hashing

As always, if you don't trust my numbers, you don't have to. Commands will be given to run these benchmarks on your own hardware. This may not be the most scientifically accurate benchmarks possible, but it should help to give a reasonable idea of the speed gains from using Rust instead of Go.

You can run these benchmarks in the docker image xena/pahi. You may need to replace ./result/ with / for running this inside Docker.

$ docker run --rm -it xena/pahi bash -l

Compressing Data with Snappy

This is implemented as cpustrain.wasm. Here is the source code used in the benchmark:


extern crate olin;

use olin::{entrypoint, Resource};
use std::io::Write;

entrypoint!();

fn main() -> Result<(), std::io::Error> {
    let fout = Resource::open("null://").expect("opening /dev/null");
    let data = include_bytes!("/proc/cpuinfo");

    let mut writer = snap::write::FrameEncoder::new(fout);

    for _ in 0..256 {
        // write another copy of the data through the snappy compressor
        writer.write_all(data)?;
    }

    Ok(())
}


This compresses my machine's copy of /proc/cpuinfo 256 times. This number was chosen arbitrarily.

Here are the results I got from the following command:

$ hyperfine --warmup 3 --prepare './result/bin/pahi result/wasm/cpustrain.wasm' \
        './result/bin/cwa result/wasm/cpustrain.wasm' \
        './result/bin/pahi --no-cache result/wasm/cpustrain.wasm' \
        './result/bin/pahi result/wasm/cpustrain.wasm'
CPU cwa pahi --no-cache pahi multiplier
Ryzen 5 3600 2.392 seconds 38.6 milliseconds 17.7 milliseconds pahi is 135 times faster than cwa
Intel Xeon E5-1650 7.652 seconds 99.3 milliseconds 53.7 milliseconds pahi is 142 times faster than cwa

Parsing JSON

This is implemented as bigjson.wasm. Here is the source code of the benchmark:


extern crate olin;

use olin::entrypoint;
use serde_json::{from_slice, to_string, Value};

entrypoint!();

fn main() -> Result<(), std::io::Error> {
    let input = include_bytes!("./bigjson.json");

    if let Ok(val) = from_slice(input) {
        let v: Value = val;
        if let Err(_why) = to_string(&v) {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other, // assumed ErrorKind
                "oh no json encoding failed!",
            ));
        }
    } else {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other, // assumed ErrorKind
            "oh no json parsing failed!",
        ));
    }

    Ok(())
}


This decodes and encodes this rather large JSON file. This is a very large file (over 64k of JSON) and should represent many times the average JSON payload size.

Here are the results I got from the following command:

$ hyperfine --warmup 3 --prepare './result/bin/pahi result/wasm/bigjson.wasm' \
        './result/bin/cwa result/wasm/bigjson.wasm' \
        './result/bin/pahi --no-cache result/wasm/bigjson.wasm' \
        './result/bin/pahi result/wasm/bigjson.wasm'
CPU cwa pahi --no-cache pahi multiplier
Ryzen 5 3600 257 milliseconds 49.4 milliseconds 20.4 milliseconds pahi is 12.62 times faster than cwa
Intel Xeon E5-1650 935.5 milliseconds 135.4 milliseconds 101.4 milliseconds pahi is 9.22 times faster than cwa

Parsing yaml

This is implemented as k8sparse.wasm. Here is the source code of the benchmark:


extern crate olin;

use olin::entrypoint;
use serde_yaml::{from_slice, to_string, Value};

entrypoint!();

fn main() -> Result<(), std::io::Error> {
    let input = include_bytes!("./k8sparse.yaml");

    if let Ok(val) = from_slice(input) {
        let v: Value = val;
        if let Err(_why) = to_string(&v) {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other, // assumed ErrorKind
                "oh no yaml encoding failed!",
            ));
        }
    } else {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other, // assumed ErrorKind
            "oh no yaml parsing failed!",
        ));
    }

    Ok(())
}


This decodes and encodes this Kubernetes manifest set from my cluster. This is a set of a few normal Kubernetes deployments and isn't as much of a worst-case scenario as it could be with the other tests.

Here are the results I got from running the following command:

$ hyperfine --warmup 3 --prepare './result/bin/pahi result/wasm/k8sparse.wasm' \
        './result/bin/cwa result/wasm/k8sparse.wasm' \
        './result/bin/pahi --no-cache result/wasm/k8sparse.wasm' \
        './result/bin/pahi result/wasm/k8sparse.wasm'
CPU cwa pahi --no-cache pahi multiplier
Ryzen 5 3600 211.7 milliseconds 125.3 milliseconds 8.5 milliseconds pahi is 25.04 times faster than cwa
Intel Xeon E5-1650 674.1 milliseconds 342.7 milliseconds 30.8 milliseconds pahi is 21.85 times faster than cwa

Recursive Fibonacci Number Calculation

This is implemented as fibber.wasm. Here is the source code used in the benchmark:


extern crate olin;

use olin::{entrypoint, log};

entrypoint!();

fn fib(n: u64) -> u64 {
    if n <= 1 {
        return 1;
    }
    fib(n - 1) + fib(n - 2)
}

fn main() -> Result<(), std::io::Error> {
    // ...
    Ok(())
}

Fibonacci number calculation done recursively is an incredibly expensive ordeal; the runtime grows exponentially with the input. This is the worst possible case for this kind of calculation, as it doesn't cache results from the fib function.
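
For contrast, here is a sketch of an iterative version that sidesteps the blowup. It is not part of the benchmark suite; it just shows why the recursive form above is so expensive:

/// Iterative Fibonacci with the same indexing as the fib above:
/// each value is computed exactly once, so this runs in O(n)
/// instead of the exponential time of the naive recursion.
fn fib_iter(n: u64) -> u64 {
    let (mut a, mut b) = (1u64, 1u64);
    for _ in 0..n {
        let next = a + b; // overflows u64 somewhere past n = 90
        a = b;
        b = next;
    }
    a
}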

Here are the results I got from running the following command:

$ hyperfine --warmup 3 --prepare './result/bin/pahi result/wasm/fibber.wasm' \
        './result/bin/cwa result/wasm/fibber.wasm' \
        './result/bin/pahi --no-cache result/wasm/fibber.wasm' \
        './result/bin/pahi result/wasm/fibber.wasm'
CPU cwa pahi --no-cache pahi multiplier
Ryzen 5 3600 13.6 milliseconds 13.7 milliseconds 2.7 milliseconds pahi is 5.13 times faster than cwa
Intel Xeon E5-1650 41.0 milliseconds 27.3 milliseconds 7.2 milliseconds pahi is 5.70 times faster than cwa

Blake-2 Hashing

This is implemented as blake2stress.wasm. Here's the source code for this benchmark:


extern crate olin;

use blake2::{Blake2b, Digest};
use olin::{entrypoint, log};

entrypoint!();

fn main() -> Result<(), std::io::Error> {
    let json: &'static [u8] = include_bytes!("./bigjson.json");
    let yaml: &'static [u8] = include_bytes!("./k8sparse.yaml");
    for _ in 0..8 {
        let mut hasher = Blake2b::new();
        // feed both documents through the hasher; the update/finalize
        // calls are an assumption about the exact Digest API in use
        hasher.update(json);
        hasher.update(yaml);
        let _digest = hasher.finalize();
    }

    Ok(())
}


This runs the blake2b hashing algorithm on the JSON and yaml files used earlier eight times. This is supposed to represent a few hundred thousand invocations of production code.

Here are the results I got from running the following command:

$ hyperfine --warmup 3 --prepare './result/bin/pahi result/wasm/blake2stress.wasm' \
        './result/bin/cwa result/wasm/blake2stress.wasm' \
        './result/bin/pahi --no-cache result/wasm/blake2stress.wasm' \
        './result/bin/pahi result/wasm/blake2stress.wasm'
CPU cwa pahi --no-cache pahi multiplier
Ryzen 5 3600 358.7 milliseconds 17.4 milliseconds 5.0 milliseconds pahi is 71.76 times faster than cwa
Intel Xeon E5-1650 1.351 seconds 35.5 milliseconds 11.7 milliseconds pahi is 115.04 times faster than cwa


From these tests, we can roughly conclude that pa'i is about 54 times faster than Olin's cwa tool. A lot of this speed gain is arguably the result of pa'i using an ahead-of-time compiler (namely cranelift, as wrapped by wasmer). Compilation time also became a notable factor when comparing performance, but the compilation cost only has to be paid once.

Another conclusion I've made is very unsurprising. My old 2013 mac pro with an Intel Xeon E5-1650 is significantly slower in real-world computing tasks than the new Ryzen 5 3600. Both of these machines were using the same nix closure for running the binaries and they are running NixOS 20.03.

As always, if you have any feedback for what other kinds of benchmarks to run and how these benchmarks were collected, I welcome it. Please comment wherever this article is posted or contact me.

Here are the /proc/cpuinfo files for each machine being tested:

If you run these benchmarks on your own hardware and get different data, please let me know and I will be more than happy to add your results to these tables. I will need the CPU model name and the output of hyperfine for each of the above commands.

New Site Feature: Signal Boosting

Permalink - Posted on 2020-03-20 00:00

New Site Feature: Signal Boosting

In light of the COVID-19 pandemic, people have been losing their jobs. In normal times, this would be less of an issue, but in the middle of the pandemic, HR departments have been reluctant to hire people as entire companies suddenly switch to remote work. I feel utterly powerless during this outbreak. I can only imagine what people who have lost their jobs feel.

I've decided to do what I can to help. I have created a page on my website to signal boost people who are looking for work. You can find it at /signalboost. If you want to be added to it, please open a GitHub issue, contact me, or open a pull request to signalboost.dhall in the root.

The schema of this is simple:

Person::{ Name = "Nicole Brennan"
        , Tags = [ "python", "go", "rust", "technical-writing" ]
        , GitLink = "https://github.com/Twi"
        , Twitter = "https://twitter.com/TwitterAccountNameHere"
        }

This will create a grid entry on the site that looks like this:

Nicole Brennan

python go rust technical-writing

GitHub - Twitter

I've also changed my footer to point to this page for the foreseeable future instead of linking to my Patreon. Thank you for reading this and please take a look at the people on /signalboost.

Be well.

How I Start: Rust

Permalink - Posted on 2020-03-15 00:00

How I Start: Rust

Rust is an exciting new programming language that makes it easy to make understandable and reliable software. It is made by Mozilla and is used by Amazon, Google, Microsoft and many other large companies.

Rust has a reputation of being difficult because it makes no effort to hide what is going on. I'd like to show you how I start with Rust projects. Let's make a small HTTP service using Rocket.

  • Setting up your environment
  • A new project
  • Testing
  • Adding functionality
  • OpenAPI specifications
  • Error responses
  • Shipping it in a docker image

Setting up your environment

The first step is to install the Rust compiler. You can use any method you like, but since we are requiring the nightly version of Rust for this project, I suggest using rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly

If you are using NixOS or another Linux distribution with Nix installed, see this post for some information on how to set up the Rust compiler.

A new project

Rocket is a popular web framework for Rust programs. Let's use that to create a small "hello, world" server. We will need to do the following:

  • Create the new Rust project
  • Add Rocket as a dependency
  • Write the hello world route
  • Test a build of the service with cargo build
  • Run it and see what happens

Create the new Rust project

Create the new Rust project with cargo init:

$ cargo init --vcs git .
     Created binary (application) package

This will create the directory src and a file named Cargo.toml. Rust code goes in src and the Cargo.toml file configures dependencies. Adding the --vcs git flag also has cargo create a gitignore file so that the target folder isn't tracked by git.

Add Rocket as a dependency

Open Cargo.toml and add the following to it:

rocket = "0.4.4"

Then download/build Rocket with cargo build:

$ cargo build

This will download all of the dependencies you need and precompile Rocket, and it will help speed up later builds.

Write our "hello world" route

Now put the following in src/main.rs:

#![feature(proc_macro_hygiene, decl_macro)] // Nightly-only language features needed by Rocket

// Import the rocket macros
#[macro_use]
extern crate rocket;

/// Create route / that returns "Hello, world!"
#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

fn main() {
    rocket::ignite().mount("/", routes![index]).launch();
}

Test a build

Rerun cargo build:

$ cargo build

This will create the binary at target/debug/helloworld. Let's run it locally and see if it works:

$ ./target/debug/helloworld

And in another terminal window:

$ curl http://127.0.0.1:8000
Hello, world!
$ fg
<press control-c>

The HTTP service works. We have a binary that is created with the Rust compiler. This binary will be available at ./target/debug/helloworld. However, it could use some tests.


Testing

Rocket has support for unit testing built in. Let's create a tests module and verify this route in testing.

Create a tests module

Rust allows you to nest modules within files using the mod keyword. Create a tests module that will only build when testing is requested:

#[cfg(test)] // Only compile this when unit testing is requested
mod tests {
  use super::*; // Modules are their own scope, so you
                // need to explicitly use the stuff in
                // the parent module.
  use rocket::http::Status;
  use rocket::local::*;

  #[test]
  fn test_index() {
    // create the rocket instance to test
    let rkt = rocket::ignite().mount("/", routes![index]);
    // create a HTTP client bound to this rocket instance
    let client = Client::new(rkt).expect("valid rocket");
    // get a HTTP response
    let mut response = client.get("/").dispatch();
    // Ensure it returns HTTP 200
    assert_eq!(response.status(), Status::Ok);
    // Ensure the body is what we expect it to be
    assert_eq!(response.body_string(), Some("Hello, world!".into()));
  }
}

Run tests

cargo test is used to run tests in Rust. Let's run it:

$ cargo test
   Compiling helloworld v0.1.0 (/home/cadey/code/helloworld)
    Finished test [unoptimized + debuginfo] target(s) in 1.80s
     Running target/debug/deps/helloworld-49d1bd4d4f816617

running 1 test
test tests::test_index ... ok

Adding functionality

Most HTTP services return JSON, or JavaScript Object Notation, as a way to pass objects between computer programs. Let's use Rocket's JSON support to add a /hostinfo route to this app that returns some simple information:

  • the hostname of the computer serving the response
  • the process ID of the HTTP service
  • the uptime of the system in seconds

Encoding things to JSON

For encoding things to JSON, we will be using serde. We will need to add serde as a dependency. Open Cargo.toml and put the following lines in it:

serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }

This lets us use #[derive(Serialize, Deserialize)] on our Rust structs, which will allow us to automate away the JSON generation code at compile time. For more information about derivation in Rust, see here.
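
As a quick illustration of what that derive buys you (a hypothetical Demo type, just to show the mechanics; it is not part of this service):

use serde::Serialize;

#[derive(Serialize)]
struct Demo {
    n: u32,
}

fn main() {
    // serde generated the serialization code for Demo at compile time
    let json = serde_json::to_string(&Demo { n: 42 }).unwrap();
    assert_eq!(json, r#"{"n":42}"#);
}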

Let's define the data we will send back to the client using a struct.

use serde::*;

/// Host information structure returned at /hostinfo
#[derive(Serialize, Debug)]
struct HostInfo {
  hostname: String,
  pid: u32,
  uptime: u64,
}

To implement this call, we will need another few dependencies in the Cargo.toml file. We will use gethostname to get the hostname of the machine and psutil to get the uptime of the machine. Put the following below the serde dependency line:

gethostname = "0.2.1"
psutil = "3.0.1"

Finally, we will need to enable Rocket's JSON support. Put the following at the end of your Cargo.toml file:

[dependencies.rocket_contrib]
version = "0.4.4"
default-features = false
features = ["json"]

Now we can implement the /hostinfo route:

use rocket_contrib::json::Json;

/// Create route /hostinfo that returns information about the host serving this
/// page.
#[get("/hostinfo")]
fn hostinfo() -> Json<HostInfo> {
  // gets the current machine hostname or "unknown" if the hostname doesn't
  // parse into UTF-8 (very unlikely)
  let hostname = gethostname::gethostname()
    .into_string()
    .unwrap_or_else(|_| "unknown".to_string());

  Json(HostInfo {
    hostname: hostname,
    pid: std::process::id(),
    uptime: psutil::host::uptime()
      .unwrap() // normally this is a bad idea, but this code is
                // very unlikely to fail.
      .as_secs(),
  })
}

And then register it in the main function:

fn main() {
    rocket::ignite()
        .mount("/", routes![index, hostinfo])
        .launch();
}

Now rebuild the project and run the server:

$ cargo build
$ ./target/debug/helloworld

And in another terminal test it with curl:

$ curl http://127.0.0.1:8000/hostinfo

You can use a similar process for any kind of other route.

OpenAPI specifications

OpenAPI is a common specification format for describing API routes. This allows users of the API to automatically generate valid clients for them. Writing these by hand can be tedious, so let's pass that work off to the compiler using okapi.

Add the following lines to your Cargo.toml file in the [dependencies] block:

rocket_okapi = "0.3.6"
schemars = "0.6"
okapi = { version = "0.3", features = ["derive_json_schema"] }

This will allow us to generate OpenAPI specifications from Rocket routes and the types in them. Let's import the rocket_okapi macros and use them:

// Import OpenAPI macros
#[macro_use]
extern crate rocket_okapi;

use rocket_okapi::JsonSchema;

We need to add JSON schema generation abilities to HostInfo. Change:

#[derive(Serialize, Debug)]


to

#[derive(Serialize, JsonSchema, Debug)]

to generate the OpenAPI code for our type.

Next we can add the /hostinfo route to the OpenAPI schema:

/// Create route /hostinfo that returns information about the host serving this
/// page.
#[openapi]
#[get("/hostinfo")]
fn hostinfo() -> Json<HostInfo> {
  // ...
}

Also add the index route to the OpenAPI schema:

/// Create route / that returns "Hello, world!"
#[openapi]
#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

And finally update the main function to use openapi:

fn main() {
    rocket::ignite()
        .mount("/", routes_with_openapi![index, hostinfo])
        .launch();
}

Then rebuild it and run the server:

$ cargo build
$ ./target/debug/helloworld

And then in another terminal:

$ curl http://127.0.0.1:8000/openapi.json

This should return a large JSON object that describes all of the HTTP routes and the data they return. To see this visually, change main to this:

use rocket_okapi::swagger_ui::{make_swagger_ui, SwaggerUIConfig};

fn main() {
    rocket::ignite()
        .mount("/", routes_with_openapi![index, hostinfo])
        .mount(
            "/swagger-ui", // assumed mount point for the UI
            make_swagger_ui(&SwaggerUIConfig {
                url: Some("../openapi.json".to_owned()),
                urls: None,
            }),
        )
        .launch();
}

Then rebuild and run the service:

$ cargo build
$ ./target/debug/helloworld

And open the swagger UI in your favorite browser. This will show you a graphical display of all of the routes and the data types in your service. For an example, see here.

Error responses

Earlier in the /hostinfo route we glossed over error handling. Let's correct this using the okapi error type, OpenApiError, in the hostinfo function:

/// Create route /hostinfo that returns information about the host serving
/// this page.
#[openapi]
#[get("/hostinfo")]
fn hostinfo() -> Result<Json<HostInfo>> {
    match gethostname::gethostname().into_string() {
        Ok(hostname) => Ok(Json(HostInfo {
            hostname: hostname,
            pid: std::process::id(),
            uptime: psutil::host::uptime().unwrap().as_secs(),
        })),
        Err(_) => Err(OpenApiError::new(format!(
            "hostname does not parse as UTF-8"
        ))),
    }
}
When the into_string operation fails (because the hostname is somehow invalid UTF-8), this will result in a non-200 response with the "hostname does not parse as UTF-8" message.

Shipping it in a docker image

Many deployment systems use Docker to describe a program's environment and dependencies. Create a Dockerfile with the following contents:

# Use the minimal image
FROM rustlang/rust:nightly-slim AS build

# Where we will build the program
WORKDIR /src/helloworld

# Copy source code into the container
COPY . .

# Build the program in release mode
RUN cargo build --release

# Create the runtime image
FROM ubuntu:18.04

# Copy the compiled service binary
COPY --from=build /src/helloworld/target/release/helloworld /usr/local/bin/helloworld

# Start the helloworld service on container boot
CMD ["/usr/local/bin/helloworld"]

And then build it:

$ docker build -t xena/helloworld .

And then run it:

$ docker run --rm -itp 8000:8000 xena/helloworld

And in another terminal:

$ curl http://127.0.0.1:8000
Hello, world!

From here you can do whatever you want with this service. You can deploy it to Kubernetes with a manifest that would look something like this.

This is how I start a new Rust project. I put all of the code described in this post in this GitHub repo in case it helps. Have fun and be well.

For some "extra credit" tasks, try and see if you can do the following:

Many thanks to Coleman McFarland for proofreading this post.

How I Start: Nix

Permalink - Posted on 2020-03-08 00:00

How I Start: Nix

Nix is a tool that helps people create reproducible builds. This means that given a known input, you can get the same output on other machines. Let's build and deploy a small Rust service with Nix. This will not require the Rust compiler to be installed with rustup or similar.

  • Setting up your environment
  • A new project
  • Setting up the Rust compiler
  • Serving HTTP
  • A simple package build
  • Shipping it in a docker image

Setting up your environment

The first step is to install Nix. If you are using a Linux machine, run this script:

$ curl https://nixos.org/nix/install | sh

This will prompt you for more information as it goes on, so be sure to follow the instructions carefully. Once it is done, close and re-open your shell. After you have done this, nix-env should exist in your shell. Try to run it:

$ nix-env
error: no operation specified
Try 'nix-env --help' for more information.

Let's install a few other tools to help us with development. First, let's install lorri to help us manage our development shell:

$ nix-env --install --file https://github.com/target/lorri/archive/master.tar.gz

This will automatically download and build lorri for your system based on the latest possible version. Once that is done, open another shell window (the lorri docs include ways to do this more persistently, but this will work for now) and run:

$ lorri daemon

Now go back to your main shell window and install direnv:

$ nix-env --install direnv

Next, follow the shell setup needed for your shell. I personally use fish with oh my fish, so I would run this:

$ omf install direnv

Finally, let's install niv to help us handle dependencies for the project. This will allow us to make sure that our builds pin everything to a specific set of versions, including operating system packages.

$ nix-env --install niv

Now that we have all of the tools we will need installed, let's create the project.

A new project

Go to your favorite place to put code and make a new folder. I personally prefer ~/code, so I will be using that here:

$ cd ~/code
$ mkdir helloworld
$ cd helloworld

Let's set up the basic skeleton of the project. First, initialize niv:

$ niv init

This will add the latest versions of niv itself and the packages used for the system to nix/sources.json. This will allow us to pin exact versions so the environment is as predictable as possible. Sometimes the versions of software in the pinned nixpkgs are too old. If this happens, you can update to the "unstable" branch of nixpkgs with this command:

$ niv update nixpkgs -b nixpkgs-unstable

Next, set up lorri using lorri init:

$ lorri init

This will create shell.nix and .envrc. shell.nix will be where we define the development environment for this service. .envrc is used to tell direnv what it needs to do. Let's try and activate the .envrc:

$ cd .
direnv: error /home/cadey/code/helloworld/.envrc is blocked. Run `direnv allow`
to approve its content

Let's review its content:

$ cat .envrc
eval "$(lorri direnv)"

This seems reasonable, so approve it with direnv allow like the error message suggests:

$ direnv allow

Now let's customize the shell.nix file to use our pinned version of nixpkgs. Currently, it looks something like this:

# shell.nix
let
  pkgs = import <nixpkgs> {};
in
pkgs.mkShell {
  buildInputs = [
    pkgs.hello
  ];
}

This currently imports nixpkgs from the system-level version of it. This means that different systems could have different versions of nixpkgs on it, and that could make the shell.nix file hard to reproduce between machines. Let's import the pinned version of nixpkgs that niv created:

# shell.nix
let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs {};
in
pkgs.mkShell {
  buildInputs = [
    pkgs.hello
  ];
}

And then let's test it with lorri shell:

$ lorri shell
lorri: building environment........ done
(lorri) $

And let's see if hello is available inside the shell:

(lorri) $ hello
Hello, world!

You can set environment variables inside the shell.nix file. Do so like this:

# shell.nix
let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs {};
in
pkgs.mkShell {
  buildInputs = [
    pkgs.hello
  ];

  # Environment variables
  HELLO = "world";
}

Wait a moment for lorri to finish rebuilding the development environment and then let's see if the environment variable shows up:

$ cd .
direnv: loading ~/code/helloworld/.envrc
<output snipped>
$ echo $HELLO
world

Now that we have the basics of the environment set up, let's install the Rust compiler.

Setting up the Rust compiler

First, add nixpkgs-mozilla to niv:

$ niv add mozilla/nixpkgs-mozilla

Then create nix/rust.nix in your repo:

# nix/rust.nix
{ sources ? import ./sources.nix }:

let
  pkgs =
    import sources.nixpkgs { overlays = [ (import sources.nixpkgs-mozilla) ]; };
  channel = "nightly";
  date = "2020-03-08";
  targets = [ ];
  chan = pkgs.rustChannelOfTargets channel date targets;
in chan

This creates a nix function that takes in the pre-imported list of sources, creates a copy of nixpkgs with Rust at the nightly version 2020-03-08 overlaid into it, and exposes the rust package out of it. Let's add this to shell.nix:

# shell.nix
let
  sources = import ./nix/sources.nix;
  rust = import ./nix/rust.nix { inherit sources; };
  pkgs = import sources.nixpkgs { };
in
pkgs.mkShell {
  buildInputs = [
    rust
  ];
}

Then ask lorri to recreate the development environment. This may take a bit to run because it's setting up everything the Rust compiler requires to run.

$ lorri shell
(lorri) $

Let's see what version of Rust is installed:

(lorri) $ rustc --version
rustc 1.43.0-nightly (823ff8cf1 2020-03-07)

This is exactly what we expect. Rust nightly versions get released with the date of the previous day in them. To be extra sure, let's see what the shell thinks rustc resolves to:

(lorri) $ which rustc

And now exit that shell and reload direnv:

(lorri) $ exit
$ cd .
direnv: loading ~/code/helloworld/.envrc
$ which rustc

And now we have Rust installed at an arbitrary nightly version for that project only. This will work on other machines too. Now that we have our development environment set up, let's serve HTTP.

Serving HTTP

Rocket is a popular web framework for Rust programs. Let's use that to create a small "hello, world" server. We will need to do the following:

  • Create the new Rust project
  • Add Rocket as a dependency
  • Write our "hello world" route
  • Test a build of the service with cargo build

Create the new Rust project

Create the new Rust project with cargo init:

$ cargo init --vcs git .
     Created binary (application) package

This will create the directory src and a file named Cargo.toml. Rust code goes in src and the Cargo.toml file configures dependencies. Adding the --vcs git flag also has cargo create a gitignore file so that the target folder isn't tracked by git.

Add Rocket as a dependency

Open Cargo.toml and add the following to it:

[dependencies]
rocket = "0.4.3"

Then download/build Rocket with cargo build:

$ cargo build

This will download all of the dependencies you need and precompile Rocket, and it will help speed up later builds.

Write our "hello world" route

Now put the following in src/main.rs:

#![feature(proc_macro_hygiene, decl_macro)] // language features needed by Rocket

// Import the rocket macros
#[macro_use]
extern crate rocket;

// Create route / that returns "Hello, world!"
#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

fn main() {
    rocket::ignite().mount("/", routes![index]).launch();
}

Test a build

Rerun cargo build:

$ cargo build

This will create the binary at target/debug/helloworld. Let's run it locally and see if it works:

$ ./target/debug/helloworld &
$ curl http://127.0.0.1:8000
Hello, world!
$ fg
<press control-c>

The HTTP service works. We have a binary built with the Rust compiler that Nix installed.
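
If you would rather exercise the route without curl, Rocket 0.4 also ships an in-process test client in rocket::local. Here is a hedged sketch of what such a test could look like, assuming it lives at the bottom of src/main.rs below the code above:

// A sketch of an in-process test using Rocket 0.4's local client.
#[cfg(test)]
mod tests {
    use rocket::local::Client;

    #[test]
    fn index_says_hello() {
        // Build the same Rocket instance that main() launches, without launching it.
        let rocket = rocket::ignite().mount("/", routes![super::index]);
        let client = Client::new(rocket).expect("valid rocket instance");
        let mut response = client.get("/").dispatch();
        assert_eq!(response.body_string(), Some("Hello, world!".into()));
    }
}

cargo test will run this without binding a real port.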

A simple package build

Now that we have the HTTP service working, let's put it inside a nix package. We will need to use naersk to do this. Add naersk to your project with niv:

$ niv add nmattia/naersk

Now let's create helloworld.nix:

# import niv sources and the pinned nixpkgs
{ sources ? import ./nix/sources.nix, pkgs ? import sources.nixpkgs { } }:
let
  # import rust compiler
  rust = import ./nix/rust.nix { inherit sources; };

  # configure naersk to use our pinned rust compiler
  naersk = pkgs.callPackage sources.naersk {
    rustc = rust;
    cargo = rust;
  };

  # tell nix-build to ignore the `target` directory
  src = builtins.filterSource
    (path: type: type != "directory" || builtins.baseNameOf path != "target")
    ./.;
in naersk.buildPackage {
  inherit src;
  remapPathPrefix =
    true; # remove nix store references for a smaller output package
}

And then build it with nix-build:

$ nix-build helloworld.nix

This can take a bit to run, but it will do the following things:

  • Download naersk
  • Download every Rust crate your HTTP service depends on into the Nix store
  • Run your program's tests
  • Build your dependencies into a Nix package
  • Build your program with those dependencies
  • Place a link to the result at ./result

Once it is done, let's take a look at the result:

$ du -hs ./result/bin/helloworld
2.1M    ./result/bin/helloworld

$ ldd ./result/bin/helloworld
        linux-vdso.so.1 (0x00007fffae080000)
        libdl.so.2 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libdl.so.2
        librt.so.1 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/librt.so.1
        libpthread.so.0 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libpthread.so.0 (0x00007f3a0163b000)
        libgcc_s.so.1 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libgcc_s.so.1 (0x00007f3a013f5000)
        libc.so.6 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libc.so.6
        /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f3a0160b000)
        libm.so.6 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libm.so.6

This means that the Nix build created a 2.1 megabyte binary that only depends on glibc, the implementation of the C language standard library that Nix prefers.

For repo cleanliness, add the result link to the gitignore:

$ echo 'result*' >> .gitignore

Shipping it in a Docker image

Now that we have a package built, let's ship it in a docker image. nixpkgs provides dockerTools which helps us create docker images out of Nix packages. Let's create default.nix with the following contents:

{ system ? builtins.currentSystem }:

let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs { };
  helloworld = import ./helloworld.nix { inherit sources pkgs; };

  name = "xena/helloworld";
  tag = "latest";

in pkgs.dockerTools.buildLayeredImage {
  inherit name tag;
  contents = [ helloworld ];

  config = {
    Cmd = [ "/bin/helloworld" ];
    Env = [ "ROCKET_PORT=5000" ];
    WorkingDir = "/";
  };
}

And then build it with nix-build:

$ nix-build default.nix

This will create a tarball containing the docker image information as the result of the Nix build. Load it into docker using docker load:

$ docker load -i result

And then run it using docker run:

$ docker run --rm -itp 52340:5000 xena/helloworld

Now test it using curl:

$ curl http://127.0.0.1:52340
Hello, world!

And now you have a docker image you can run wherever you want. The buildLayeredImage function used in default.nix also makes Nix put each dependency of the package into its own docker layer. This makes new versions of your program very efficient to upgrade on your clusters: realistically, it reduces the data needed for a new version down to what actually changed. If nothing but some resources in their own package changed, only those packages get downloaded.

This is how I start a new project with Nix. I put all of the code described in this post in this GitHub repo in case it helps. Have fun and be well.

For some "extra credit" tasks, try and see if you can do the following:

  • Use the version of niv that niv pinned
  • Customize the environment of the container by following the Rocket configuration documentation
  • Add some more routes to the program
  • Read the Nix documentation and learn more about writing Nix expressions
  • Configure your editor/IDE to use the direnv path

New Site Feature: Patron Thanks Page

Permalink - Posted on 2020-02-29 00:00

New Site Feature: Patron Thanks Page

I've added a patron thanks page to my site. I've been getting a significant amount of money per month from my patrons and I feel this is a good way to acknowledge them and thank them for their patronage. I wanted to have it be as simple as possible, so I made it fetch a list of dollar amounts.

Here are some things I learned while writing this:

  • If you are going to interact with the patreon API in go, use github.com/mxpv/patreon-go, not gopkg.in/mxpv/patreon-go.v1 or gopkg.in/mxpv/patreon-go.v2. The packages on gopkg.in are NOT compatible with Go modules in very bizarre ways.
  • When using refresh tokens in OAuth2, do not set the expiry date to be negative like the patreon-go examples show. This will brick your token and make you have to reprovision it.
  • Patreon clients can either be for API version 1 or API version 2. There is no way to have a Patreon token that works for both API versions.
  • The patreon-go package only supports API version 1 and doesn't document this anywhere.
  • Patreon's error messages are vague and not helpful when trying to figure out that you broke your token with a negative expiry date.
  • I may need to set the Patreon information every month for the rest of the time I maintain this site code. This could get odd. I made a guide for myself in the docs folder of the site repo.
  • The Patreon API doesn't let you submit new posts. I wanted to add Patreon to my syndication server, but apparently that's impossible. My RSS feed, Atom feed and JSON feed should let you keep up to date in the meantime.

Let me know how you like this. I went back and forth on displaying monetary amounts on that page, but ultimately decided not to show them there for confidentiality reasons. If this is a bad idea, please let me know and I can put the money amounts back.

I'm working on a more detailed post about pa'i that includes benchmarks for some artificial and realistic workloads. I'm also working on integrating it into the wasmcloud prototype, but it's fairly slow going at the moment.

Be well.

pa'i: hello world!

Permalink - Posted on 2020-02-22 00:00

pa'i: hello world!

It's been a while since I gave an update on the Olin ecosystem (which now exists, apparently). Not much has really gone on with it for the last few months. However, recently I've decided to tackle one of the core problems of Olin's implementation in Go: execution speed.

Originally I was going to try and handle this with "hyperjit", but support for linking C++ programs into Go is always questionable at best. All of the WebAssembly compiling and running tooling has been written in Rust, and as far as I know I was the only holdout still using Go. This left me kinda stranded and on my own, seeing as the libraries that I was using were starting to die.

I have been following the wasmer project for a while and thanks to their recent custom ABI sample, I was able to start re-implementing the Olin API in it. Wasmer uses a JIT for handling WebAssembly, so I'm able to completely destroy the original Go implementation in terms of performance. I call this newer, faster runtime pa'i (/pa.hi/, paw-hee), which is a Lojban rafsi for the word prami which means love.

pa'i is written in Rust. It is built with Nix. It requires a nightly version of Rust because the WebAssembly code it compiles requires it. However, because it is built with Nix, this quickly becomes a non-issue. You can build pa'i by doing the following:

$ git clone git@github.com:Xe/pahi
$ cd pahi
$ nix-build

and then nix-build will take care of:

  • downloading the pinned nightly version of the rust compiler
  • building the reference Olin interpreter
  • building the pa'i runtime
  • building a small suite of sample programs
  • building the documentation from dhall files
  • building a small test runner

If you want to try this out in a more predictable environment, you can also nix-build docker.nix. This will create a Docker image as the result of the Nix build. This docker image includes the pa'i composite package, bash, coreutils and dhall-to-json (which is required by the test runner).

I'm actually really proud of how the documentation generation works. The cwa-spec folder in Olin was done very ad-hoc and was only consistent because there was a template. This time functions, types, errors, namespaces and the underlying WebAssembly types they boil down to are all implemented as Dhall records. For example, here's the definition of a namespace in Dhall:

let func = ./func.dhall

in  { Type = { name : Text, desc : Text, funcs : List func.Type }
    , default =
        { name = "unknown"
        , desc = "please fill in the desc field"
        , funcs = [] : List func.Type
        }
    }

which gets rendered to Markdown using renderNSToMD.dhall:

let ns = ./ns.dhall

let func = ./func.dhall

let type = ./type.dhall

let showFunc = ./renderFuncToMD.dhall

let Prelude = ../Prelude.dhall

let toList = Prelude.Text.concatMapSep "\n" func.Type showFunc

let show
    : ns.Type → Text
    =   λ(namespace : ns.Type)
      → ''
        # ${namespace.name}

        ${namespace.desc}

        ${toList namespace.funcs}
        ''

in  show

This would render the logging namespace as this markdown.

It seems like overkill to document things like this (and at some level it is), but I plan to take advantage of this later when I need to do things like generate C/Rust/Go/TinyGo bindings for the entire specification at once. I also have always wanted to document something so precisely like this, and now I get the chance.

pa'i is just over a week old at this point, and as such it is NOT feature-complete with the reference Olin interpreter. I'm working on it though. I'm kinda burnt out from work, and even though working on this project helps me relax (don't ask me how, I don't understand either) I have limits and will take this slowly and carefully to ensure that it stays compatible with all of the code I have already written in Olin's repo. Thanks to go-flag, I might actually be able to get it mostly flag-compatible. We'll see though.

I have also designed a placeholder logo for pa'i. Here it is:

the logo for pa'i

It might be changed in the future, but this is what I am going with for now. The circuit traces all spell out messages of love (inspired by the Senzar runes of the WingMakers). The text on top of the microprocessor reads pa'i in zbalermorna, a constructed writing script for Lojban. The text on the side probably needs to be revised, but it says something along the lines of "a future after programs".

pa'i is chugging along. When I have closed the compatibility todo list for all of the Olin API calls, I'll write more. For now, pa'i is a very complicated tool that lets you print "Hello, world" in new and exciting ways (this will change once I get resource calls into it), but it's getting there.

I hope this was interesting. Be well.

Why Rust

Permalink - Posted on 2020-02-15 00:00

Why Rust

Or: A Trip Report from my Satori with Rust and Functional Programming

Software is a very odd field to work in. It is simultaneously an abstract and physical one. You build systems that can deal with an unfathomable amount of input and output at the same time. As a job, I peer into the madness of an unthinking automaton and give order to the inherent chaos. I then emit incantations to describe what this unthinking automaton should do in my stead. I cannot possibly track the relations between a hundred thousand transactions going on in real time, much less file them appropriately so they can be summoned back should the need arise.

However, this incantation (by necessity) is an unthinkably precise and fickle beast. It's almost as if you are training a four-year-old to go to the store, but doing it by having them read a grocery list. This grocery list has to be precise enough that the four-year-old ends up getting what you want and not a cart full of frosted flakes and candy bars. But, at the same time, the four-year-old needs to understand it. Thus, the precision.

There are many schools of thought around ways to write the grocery list. Some follow a radically simple approach, relying on the toddler to figure things out at the store. Sometimes this simpler approach doesn't work out in more obscure scenarios, like when they are out of red grapes but do have green grapes, but it tends to work out enough. Proponents of these list-making tools will also advocate for doing full tests of the grocery list before they send the toddler off to the store. This means setting up a fake grocery store with funny money, a fake card, plastic food, the whole nine yards. This can get expensive and can become a logistical issue (where are you going to store all that plastic fruit in a way that lets you set up and tear down the mock grocery store quickly?).

Another school of thought is that the process of writing the grocery list should be done in a way that prevents ambiguity at the grocery store. This kind of flow uses some more advanced concepts like the ability to describe something by its attributes. For example, this could specify the difference between fruit and vegetables, and only allow fruit to be put in one place of the cart and only allow vegetables to be placed in the other. And if the writer of the list tries to violate this, the list gets rejected and isn't used at all.

There is yet another school of thought that decides that the exact spatial position of the toddler relative to everything else should be thought of in advance, along with a process to make sure that nothing is done in an improper way. This means writing the list can be a lot harder at first, but it's much less likely to result in the toddler coming back with a weird state. Consider what happens if two items show up at the same time and the toddler tries to grab both of them at the same time due to the instructions in the list! They only have one arm to grab things with, so it just doesn't work. Proponents of the more strict methods have reference cells and other mechanisms to ensure that the toddler can only ever grab one thing at a time.

If we were to match these three ludicrous examples to programming languages, the first would be Lua, the second would be Go and the third would be something like Haskell or Rust. Software development is a complicated process because the problems involved with directing that unthinking automaton to do what you want are hard. There is a lot going on, much in the same way there is a lot going on when you send a toddler to do your grocery shopping for you.

A good way to look at the tradeoffs involved is to see things as a balance between two forces: pragmatism and correctness. Languages that are more pragmatic are easier to develop in, but are statistically more likely to run into problems at runtime. Languages that are more correct take more investment to write up front, but over time that correctness means there are fewer failed assumptions about what is going on. The compiler stops you from doing things that don't make sense to it. This means that it's difficult, if not literally impossible, to create a bad state at runtime.

Tools like Lua and Go can be (and have been) used to develop stable and viable software. itch.io is written in Lua running on top of nginx, and it handles financial transactions well enough that it's turned into the creator's full-time job. Google uses Go everywhere in their stack, and it's been used to create powerful tools like Kubernetes, Caddy, and Docker. These tools are trusted implicitly by a generation of developers, even though the language itself has its flaws. If you are reading this blog in Firefox, statistically there is Rust involved in the rendering and viewing of this post. Rust is built for ensuring that code is as correct as possible, even if it means eating into development time to ensure that.

In Rust, you don't have to memorize rules about how and when it is safe to update data in structures, because the compiler ensures you cannot mess it up by rejecting the code if you could be messing it up. You don't have to run your tests with a race detector or figure out how to expose that in production to trace down that obscure double-write to a non-threadsafe hashmap, because in Rust there is no such thing as a non-threadsafe hashmap. There is only a safe hashmap and only can ever be a safe hashmap.
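
For a concrete sense of what "only a safe hashmap" means, here is a minimal sketch: the compiler only lets you share a HashMap across threads once you opt into synchronization, here with Arc<Mutex<_>>. Handing two threads a bare &mut HashMap is rejected outright.

// A minimal sketch: a HashMap shared across threads must be wrapped in
// synchronization (Arc<Mutex<_>> here) before this compiles at all.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counts = Arc::new(Mutex::new(HashMap::new()));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let counts = Arc::clone(&counts);
            thread::spawn(move || {
                // Each thread has to take the lock before touching the map.
                counts.lock().unwrap().insert(i, i * 2);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    println!("{:?}", counts.lock().unwrap());
}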

As an absurd example, consider the following two snippets of code, one in Go and one in Rust, both of them will put integers into a standard library list and then print them all out:

// Go
l := list.New()           // () -> *list.List
for i := 0; i < 5; i++ {
  l.PushBack(i)           // interface{} -> ()
}

for e := l.Front(); e != nil; e = e.Next() {
  log.Printf("%T: %v", e.Value, e.Value)
}

// Rust
let mut vec = Vec::<i64>::new(); // () -> Vec<i64>

for i in 0..5 {
  vec.push(i as i64);            // (mut Vec<i64>, i64) -> ()
}

for i in vec.iter() {
  println!("{}", i);
}

The Go version uses interface{} as the data element because Go literally cannot describe types as parameters to functions. The Rust version took me a bit longer to write, but there is no ambiguity as to what the vector holds. The Go version can also hold multiple types of data in the same list, a-la:

l := list.New()
l.PushBack(42)        // an int
l.PushBack("hello")   // a string
l.PushBack(3.14)      // a float64

All of which is valid because in Go, an interface{} matches every kind of value possible. An integer is an interface{}. A floating-point number is an interface{}. A string is an interface{}. A bool is an interface{}. Any custom type you create is an interface{}. Normally, this would be very restrictive and make it difficult to do things like JSON parsing. However the Go runtime lets you hack around this with reflection.
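
For contrast, here is a minimal sketch of the same idea on the Rust side: the element type is part of the vector's type, so the equivalent heterogeneous push never survives compilation.

// A minimal sketch: pushing a second type into a Vec<i64> is a compile error.
fn main() {
    let mut v: Vec<i64> = Vec::new();
    v.push(42);
    // v.push("hello"); // error[E0308]: mismatched types, caught at compile time
    println!("{:?}", v);
}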

This allows the standard library to handle things like JSON parsing with functions that look like this:

func Unmarshal(data []byte, v interface{}) error

There's even a set of complicated rules you need to memorize about how to trick the JSON parser into massaging your data into place. This lets you do things like this:

type Rilkef struct {
  Foo        string `json:"foo"`
  CallToArms string `json:"call_to_arms"`
}

This allows the programmer a lot of flexibility while developing and compiling the code. It's very easy for the compiler to say "oh, hey, that could be anything, and you gave it some kind of anything, sounds legit to me", but then the job of ensuring the sanity of the inputs is shunted to runtime rather than stopped before the code gets deployed. This means you need to test the code in order to see how it behaves, making sure that the standard library is doing its job correctly. This kind of stuff does not happen in Rust.

The Rust version of this JSON example uses the serde and serde_json libraries:

use serde::*;

#[derive(Serialize, Deserialize)]
pub struct Rilkef {
  pub foo: String,
  pub call_to_arms: String,
}

And the logic for handling the correct rules for serialization and deserialization is handled at compile time by the compiler itself. Serde also allows you to support more than just JSON, so this same type can be reused for Dhall, YAML or whatever you could imagine.
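
As a usage sketch building on the Rilkef type above (and assuming serde and serde_json are listed in Cargo.toml), round-tripping a value through JSON is two function calls, with the field mapping checked at compile time:

// A hedged sketch of using the derived impls with serde_json.
fn main() -> Result<(), serde_json::Error> {
    let r = Rilkef {
        foo: "bar".to_string(),
        call_to_arms: "write more Rust".to_string(),
    };

    // Encode; prints {"foo":"bar","call_to_arms":"write more Rust"}
    let encoded = serde_json::to_string(&r)?;
    println!("{}", encoded);

    // Decode it back; a shape mismatch would be an Err, not a runtime surprise.
    let decoded: Rilkef = serde_json::from_str(&encoded)?;
    println!("{}", decoded.call_to_arms);
    Ok(())
}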


Rust allows for more correctness at the cost of developer efficiency. This is a tradeoff, but I think it may actually be worth it. Code that is more correct is more robust and less prone to failure than code that is less correct. This leads to software that is less likely to crash at 3 am and wake you up due to a preventable developer error.

After working in Go for more than half a decade, I'm starting to think that it is probably a better idea to impact developer velocity and force developers to write software that is more correct. Go works if you are careful about how you handle it. It, however, amounts to a giant list of rules that you just have to know (like maps not being threadsafe), and a lot of those rules come from battle rather than from the development process.

This came out as more of a rant than I had thought it would, but overall I hope my point isn't lost.

Things You Might Complain About

Yes, I know slices exist in Go. I wanted to prove a point about how the overuse of interface{} in some relatively core things (like generic lists) can cause headaches in terms of correctness. Go will reject an attempt to append a string to an integer slice, but you cannot create your own type that functions identically to an integer slice.

Go does have a race detector that will point out a lot of sins in concurrent programs, but that is again at runtime, not at compile time.

Many thanks to Tene, Sr. Oracle, A. Wilfox, Byte-slice, SiIvagunner and anyone who watched the stream where I wrote this blogpost. If I got things wrong in this, please reach out to me to let me know what I messed up. This is a composite of a few twitter threads and a conversation I had on IRC.

Thanks for reading, be well.

I was Wrong about Nix

Permalink - Posted on 2020-02-10 00:00

I was Wrong about Nix

From time to time, I am outright wrong on my blog. This is one of those times. In my last post about Nix, I didn't see the light yet. I think I do now, and I'm going to attempt to clarify below.

Let's talk about a simpler scenario: writing a service in Go. This service will depend on at least the following:

  • A Go compiler to build the code into a binary
  • An appropriate runtime to ensure the code will run successfully
  • Any data files needed at runtime

A popular way to model this is with a Dockerfile. Here's the Dockerfile I use for my website (the one you are reading right now):

FROM xena/go:1.13.6 AS build
ENV GOPROXY https://cache.greedo.xeserv.us
COPY . /site
RUN CGO_ENABLED=0 go test -v ./...
RUN CGO_ENABLED=0 GOBIN=/root go install -v ./cmd/site

FROM xena/alpine
COPY --from=build /root/site .
COPY ./static /site/static
COPY ./templates /site/templates
COPY ./blog /site/blog
COPY ./talks /site/talks
COPY ./gallery /site/gallery
COPY ./css /site/css
HEALTHCHECK CMD wget --spider || exit 1
CMD ./site

This fetches the Go compiler from an image I made, copies the source code to the image, builds it (in a way that makes the resulting binary a static executable), and creates the runtime environment for it.

Let's let it build and see how big the result is:

$ docker build -t xena/christinewebsite:example1 .
<output omitted>
$ docker images | grep xena
xena/christinewebsite  example1  4b8ee64969e8  24 seconds ago  111MB

Investigating this image with dive, we see the following:

  • The package manager is included in the image
  • The package manager's database is included in the image
  • An entire copy of the C library is included in the image (even though the binary was statically linked to specifically avoid this)
  • Most of the files in the docker image are unrelated to my website's functionality and are involved with the normal functioning of Linux systems

Granted, Alpine Linux does a good job at keeping this chaff to a minimum, but it is still there, still needs to be updated (causing all of my docker images to be rebuilt and applications to be redeployed) and still takes up space in transfer quotas and on the disk.

Let's compare this to the same build process but done with Nix. My Nix setup is done in a few phases. First I use niv to manage some dependencies a-la git submodules that don't hate you:

$ nix-shell -p niv
[nix-shell]$ niv init
<writes nix/*>

Now I add the vgo2nix tool with niv:

[nix-shell]$ niv add adisbladis/vgo2nix

And I can use it in my shell.nix:

let
  pkgs = import <nixpkgs> { };
  sources = import ./nix/sources.nix;
  vgo2nix = (import sources.vgo2nix { });
in pkgs.mkShell { buildInputs = [ pkgs.go pkgs.niv vgo2nix ]; }

And then relaunch nix-shell with vgo2nix installed and convert my go modules dependencies to a Nix expression:

$ nix-shell
<some work is done to compile things, etc>
[nix-shell]$ vgo2nix
<writes deps.nix>

Now that I have this, I can follow the buildGoPackage instructions from the upstream nixpkgs documentation and create site.nix:

{ pkgs ? import <nixpkgs> {} }:
with pkgs;

assert lib.versionAtLeast go.version "1.13";

buildGoPackage rec {
  name = "christinewebsite-HEAD";
  version = "latest";
  goPackagePath = "christine.website";
  src = ./.;

  goDeps = ./deps.nix;
  allowGoReference = false;
  preBuild = ''
    export CGO_ENABLED=0
    buildFlagsArray+=(-pkgdir "$TMPDIR")
  '';

  postInstall = ''
    cp -rf $src/blog $bin/blog
    cp -rf $src/css $bin/css
    cp -rf $src/gallery $bin/gallery
    cp -rf $src/static $bin/static
    cp -rf $src/talks $bin/talks
    cp -rf $src/templates $bin/templates
  '';
}

And this will do the following:

  • Download all of the needed dependencies and place them in the system-level Nix store so that they are not downloaded again
  • Set the CGO_ENABLED environment variable to 0 so the Go compiler emits a static binary
  • Copy all of the needed files to the right places so that the blog, gallery and talks features can load all of their data
  • Depend on nothing other than a working system at runtime

This Nix build manifest doesn't just work on Linux. It works on my mac too. The dockerfile approach works great for Linux boxes, but (unlike what the me of a decade ago would have hoped) the whole world just doesn't run Linux on their desktops. The real world has multiple OSes and Nix allows me to compensate.

So, now that we have a working cross-platform build, let's see how big it comes out as:

$ readlink ./result-bin
$ du -hs result-bin/
89M     ./result-bin/
$ du -hs result-bin/*
11M     ./result-bin/bin
888K    ./result-bin/blog
40K     ./result-bin/css
44K     ./result-bin/gallery
77M     ./result-bin/static
28K     ./result-bin/talks
64K     ./result-bin/templates

As expected, most of the build results are static assets. I have a lot of larger static assets including an entire copy of TempleOS, so this isn't too surprising. Let's compare this to on the mac:

$ du -hs result-bin/
 91M	result-bin/
$ du -hs result-bin/*
 14M	result-bin/bin
872K	result-bin/blog
 36K	result-bin/css
 40K	result-bin/gallery
 77M	result-bin/static
 24K	result-bin/talks
 60K	result-bin/templates

Which is damn-near identical save some macOS specific crud that Go has to deal with.

I mentioned this is used for Docker builds, so let's make docker.nix:

{ system ? builtins.currentSystem }:

let
  pkgs = import <nixpkgs> { inherit system; };

  callPackage = pkgs.lib.callPackageWith pkgs;

  site = callPackage ./site.nix { };

  dockerImage = pkg:
    pkgs.dockerTools.buildImage {
      name = "xena/christinewebsite";
      tag = pkg.version;

      contents = [ pkg ];

      config = {
        Cmd = [ "/bin/site" ];
        WorkingDir = "/";
      };
    };

in dockerImage site

And then build it:

$ nix-build docker.nix
<output omitted>
$ docker load -i result
c6b1d6ce7549: Loading layer [==================================================>]  95.81MB/95.81MB
$ docker images | grep xena
xena/christinewebsite  latest  0d1ccd676af8  50 years ago  94.6MB

And the output is 16 megabytes smaller.

The image age might look weird at first, but it's part of the reproducibility Nix offers. The date an image was built is something that can change with time and is actually a part of the resulting file. This means that an image built one second after another has a different cryptographic hash. It helpfully pins all images to Unix timestamp 0, which just happens to be about 50 years ago.

Looking into the image with dive, the only packages installed into this image are:

  • The website and all of its static content goodness
  • IANA portmaps that Go depends on as part of the net package
  • The standard list of MIME types that the net/http package needs
  • Time zone data that the time package needs

And that's it. This is fantastic. Nearly all of the disk usage has been eliminated. If someone manages to trick my website into executing code, that attacker cannot do anything but run more copies of my website (that will immediately fail and die because the port is already allocated).

This strategy pans out to more complicated projects too. Consider a case where a frontend and backend need to be built and deployed as a unit. Let's create a new setup using niv:

$ niv init

Since we are using Elm for this complicated project, let's add the elm2nix tool so that our Elm dependencies have repeatable builds, and gruvbox-css for some nice simple CSS:

$ niv add cachix/elm2nix
$ niv add Xe/gruvbox-css

And then add it to our shell.nix:

let
  pkgs = import <nixpkgs> {};
  sources = import ./nix/sources.nix;
  elm2nix = (import sources.elm2nix { });
in
pkgs.mkShell {
  buildInputs = [
    elm2nix
  ];
}

And then enter nix-shell to create the Elm boilerplate:

$ nix-shell
[nix-shell]$ cd frontend
[nix-shell:frontend]$ elm2nix init > default.nix
[nix-shell:frontend]$ elm2nix convert > elm-srcs.nix
[nix-shell:frontend]$ elm2nix snapshot

And then we can edit the generated Nix expression:

  sources = import ./nix/sources.nix;
  gcss = (import sources.gruvbox-css { });
# ...
      buildInputs = [ elmPackages.elm gcss ]
        ++ lib.optional outputJavaScript nodePackages_10_x.uglify-js;
# ...
        cp -rf ${gcss}/gruvbox.css $out/public
        cp -rf $src/public/* $out/public/
# ...
  outputJavaScript = true;

And then test it with nix-build:

$ nix-build
<output omitted>

And now create a backend.nix for your Go service like I did above. The real magic comes from the docker.nix file:

{ system ? builtins.currentSystem }:

let
  pkgs = import <nixpkgs> { inherit system; };
  sources = import ./nix/sources.nix;
  backend = import ./backend.nix { };
  frontend = import ./frontend/default.nix { };

in pkgs.dockerTools.buildImage {
  name = "xena/complicatedservice";
  tag = "latest";

  contents = [ backend frontend ];

  config = {
    Cmd = [ "/bin/backend" ];
    WorkingDir = "/public";
  };
}

Now both your backend and frontend services are built with the dependencies in the Nix store and shipped as a repeatable Docker image.

Sometimes it might be useful to ship the dependencies to a service like Cachix to help speed up builds.

You can install the cachix tool like this:

$ nix-env -iA cachix -f https://cachix.org/api/v1/install

And then follow the steps at cachix.org to create a new binary cache. Let's assume you made a cache named teddybear. When you've created a new cache, logged in with an API token and created a signing key, you can pipe nix-build to the Cachix client like so:

$ nix-build | cachix push teddybear

And other people using that cache will benefit from your premade dependency and binary downloads.

To use the cache somewhere, install the Cachix client and then run the following:

$ cachix use teddybear

I've been able to use my Go, Elm, Rust and Haskell dependencies on other machines using this. It's saved so much extra download time.


I was wrong about Nix. It's actually quite good once you get past the documentation being baroque and hard to read as a beginner. I'm going to try and do what I can to get the documentation improved.

As far as getting started with Nix, I suggest following these posts:

Also, I really suggest trying stuff as a vehicle to understand how things work. I got really far by experimenting with getting this Discord bot I am writing in Rust working in Nix and have been very pleased with how it's turned out. I don't need to use rustup anymore to manage my Rust compiler or the language server. With a combination of direnv and lorri, I can avoid needing to set up language servers or the like at all. I can define them as part of the project environment and then trust the tools I build on top of to take care of that for me.

Give Nix a try. It's worth at least that much in my opinion.

Instant Pot Spaghetti

Permalink - Posted on 2020-02-03 00:00

Instant Pot Spaghetti

This is based on this recipe, but made only with things you can find in Costco. My fiancé and I have made this at least weekly for the last 8 months and we love how it turns out.

Ingredients

  • 1/2 kg ground beef (pre-cooked, or see section on browning it)
  • 3 1/4 cups water
  • 2 teaspoons salt
  • a small amount of pepper
  • 4 heaping teaspoons of garlic
  • 1/2 cup butter
  • 1/4 kg spaghetti noodles
  • 1 jar of pasta sauce (about 870ml)

If you want it to be more spicy, add more pepper. Too much can make it hard to eat. Only experiment with the pepper amount after you've made this and decided there's not enough pepper.

Instructions

Put the ground beef in the instant pot. Put the water in the instant pot. Put the salt in the instant pot. Put the pepper in the instant pot. Put the garlic in the instant pot. Put the butter in the instant pot.

Stir for about 30 seconds, or until the garlic looks like it's distributed about evenly in the pot.

Take the spaghetti noodles and break them in half. Place about a third of one half one direction, the second third another, and the last yet another. Repeat this for the other half of the pasta. This helps to not clump it together when it's cooking.

Look at the package of spaghetti noodles. It should say something like "Ready in X minutes" with a large number. Take that number and subtract two from it. If you have a pasta that says it's cooked for 7 minutes, you will cook it for 5 minutes. If you have a pasta that says it's cooked for 9 minutes, you will cook it for 7 minutes.

Put the lid on the instant pot, seal it and ensure the pressure release valve is set to "sealing". Hit the "manual" button and select the number you figured out above.

Leave the instant pot alone for 10 minutes after it is done. This lets the pressure release naturally.

Use your serving utensil to open the pressure release valve. Stir and wait 3-5 minutes to serve. This makes 5 servings, but could be extended to more if you carefully ration it.

Serve hot with salt or parmesan cheese.

Browning Ground Beef

Browning ground beef is the act of cooking it all the way through so it is safe to eat. It's called "browning" it because the ground beef will turn a grayish brown when it is fully cooked.

Ingredients

  • Olive oil
  • 1 teaspoon salt
  • The ground beef you want to brown

Instructions

Take the lid off of the instant pot. Cover the bottom of the pan in olive oil. Sprinkle the salt over the olive oil. Place the ground beef in the instant pot on top of the olive oil and salt.

Press the "sauté" button on your instant pot and use a spatula to break the ground beef into smaller chunks while it warms up. Mix the ground beef while it cooks. The goal is to make sure that all of the red parts turn grayish brown.

This will take anywhere from 5-10 minutes.

If you are using this ground beef for the above spaghetti recipe, you don't need to remove it from the instant pot. You can store extra ground beef in the fridge for use later.

Thoughts on Nix

Permalink - Posted on 2020-01-28 00:00

Thoughts on Nix

EDIT(M02 20 2020): I've written a bit of a rebuttal to my own post here. I am keeping this post up for posterity.

I don't really know how I feel about Nix. It's a functional package manager that's designed to help with dependency hell. It also lets you define packages using Nix, which is an identically named yet separate thing. Nix has untyped expressions that help you build packages like this:

{ stdenv, fetchurl, perl }:

stdenv.mkDerivation {
  name = "hello-2.1.1";
  builder = ./builder.sh;
  src = fetchurl {
    url = ftp://ftp.nluug.nl/pub/gnu/hello/hello-2.1.1.tar.gz;
    sha256 = "1md7jsfd8pa45z73bz1kszpp01yw6x5ljkjk2hx7wl800any6465";
  };
  inherit perl;
}

In theory, this is great. It's obvious what needs to be done to the system in order for the "hello, world" package, and what it depends on (in this case only the standard environment, because there are no additional dependencies specified), to be installed, to the point that this approach lets you avoid all major forms of DLL hell, while at the same time creating its own form of hell: nixpkgs, or the main package source of Nix.

Now, you may ask, how do you get that hash? Try and build the package with an obviously false hash and use the correct one from the output of the build command! That seems safe!

Let's say you have a modern app that has dependencies with npm, Go and Elm. Let's focus on the Go side for now. How would we do that when using Go modules?

{ pkgs ? import <nixpkgs> { } }:
with pkgs;
let
  x = buildGoModule rec {
    name = "Xe-x-${version}";
    version = "1.2.3";

    src = fetchFromGitHub {
      owner = "Xe";
      repo = "x";
      rev = "v${version}";
      sha256 = "0m2fzpqxk7hrbxsgqplkg7h2p7gv6s1miymv3gvw0cz039skag0s";
    };

    modSha256 = "1879j77k96684wi554rkjxydrj8g3hpp0kvxz03sd8dmwr3lh83j";

    subPackages = [ "." ];
  };
in {
  x = x;
}

And this will fetch and build the entirety of my x repo into a single massive package that includes everything. Let's say I want to break it up into multiple packages so that I can install only one or two parts of it, such as my license command:

Let's make a function called gomod.nix that includes everything to build the go modules:

# gomod.nix
pkgs: repo: modSha256: attrs:
  with pkgs;
  let
    defaultAttrs = {
      src = repo;
      modSha256 = modSha256;
    };

  in buildGoModule (defaultAttrs // attrs)

And then let's invoke this with a few of the commands in there:

{ pkgs ? import <nixpkgs> { } }:
let
  stdenv = pkgs.stdenv;
  version = "1.2.3";
  repo = pkgs.fetchFromGitHub {
    owner = "Xe";
    repo = "x";
    rev = "v${version}";
    sha256 = "0m2fzpqxk7hrbxsgqplkg7h2p7gv6s1miymv3gvw0cz039skag0s";
  };

  modSha256 = "1879j77k96684wi554rkjxydrj8g3hpp0kvxz03sd8dmwr3lh83j";
  mk = import ./gomod.nix pkgs repo modSha256;

  appsluggr = mk {
    name = "appsluggr";
    version = version;
    subPackages = [ "cmd/appsluggr" ];
  };

  johaus = mk {
    name = "johaus";
    version = version;
    subPackages = [ "cmd/johaus" ];
  };

  license = mk {
    name = "license";
    version = version;
    subPackages = [ "cmd/license" ];
  };

  prefix = mk {
    name = "prefix";
    version = version;
    subPackages = [ "cmd/prefix" ];
  };

in {
  appsluggr = appsluggr;
  johaus = johaus;
  license = license;
  prefix = prefix;
}

And when we build this, we notice that ALL of the dependencies for my x repo (at least a hundred because it's got a lot of stuff in there) are downloaded FOUR TIMES, even though they don't change between them. I could avoid this by making each dependency its own Nix package, but that's not a productive use of my time.

Add on having to do this for the Node dependencies, and the Elm dependencies and this is at least 200 if not more packages needed for my relatively simple CRUD app that has creative choices in technology.

Oh, even better, the build directory isn't writable. So when your third-tier dependency has a generation step that assumes the build directory is writable, you suddenly need to become an expert in how that tool works so you can shunt it writing its files to another place. And then you need to make sure those files don't end up places they shouldn't be, lest you fill your disk with unneeded duplicate node_modules folders that really shouldn't be there in the first place (but are there because you gave up).

Then you need to make sure that works on another machine, because even though Nix itself is "functionally pure" (save the heat generated by the CPU executing your cloud-native, multitenant parallel adding service) this is a PACKAGE MANAGER. You know, the things that handle STATE, like FILES on the DISK. That's STATE. GLOBALLY MUTABLE STATE.

One of the main advantages of this approach is that the library dependencies of every project are easy to reproduce on other machines. Consider the ldd(1) (which shows the dynamic libraries associated with a program) output of ls on my Ubuntu system vs a package I installed from Nix:

$ ldd $(which ls)
        linux-vdso.so.1 (0x00007ffd2a79f000)
        libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f00f0e16000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f00f0a25000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f00f07b3000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f00f05af000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f00f1260000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f00f0390000)

All of these dependencies are managed by apt(8) and are supposedly reproducible on other Ubuntu systems. Compare this to the ldd(1) output of a Nix program:

$ ldd $(which dhall)
        linux-vdso.so.1 (0x00007fff0516a000)
        libm.so.6 => /nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27/lib/libm.so.6 (0x00007fc20ed8d000)
        libz.so.1 => /nix/store/a3q9zl42d0hmgwmgzwkxi5qd88055fh8-zlib-1.2.11/lib/libz.so.1 (0x00007fc20ed6e000)
        libncursesw.so.6 => /nix/store/24xdpjcg2bkn2virdabnpncx6f98kgfw-ncurses-6.1-20190112/lib/libncursesw.so.6 (0x00007fc20ec8c000)
        libpthread.so.0 => /nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27/lib/libpthread.so.0 (0x00007fc20ed4d000)
        librt.so.1 => /nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27/lib/librt.so.1 (0x00007fc20ed43000)
        libutil.so.1 => /nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27/lib/libutil.so.1 (0x00007fc20ed3c000)
        libdl.so.2 => /nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27/lib/libdl.so.2 (0x00007fc20ed37000)
        libgmp.so.10 => /nix/store/4gmyxj5blhfbn6c7y3agxczrmsm2bhzv-gmp-6.1.2/lib/libgmp.so.10 (0x00007fc20ebf7000)
        libffi.so.7 => /nix/store/qa8wyi9pckq1d3853sgmcc61gs53g0d3-libffi-3.3/lib/libffi.so.7 (0x00007fc20ed2a000)
        libc.so.6 => /nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27/lib/libc.so.6 (0x00007fc20ea41000)
        /nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fc20ecfe000)

Each dynamic library dependency has its package hash in the folder path. This also means that the hash of its parent packages are present in there, which root all the way back to where/when its ultimate parent package was built. This makes Nix packages a kind of blockchain.
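
To illustrate why embedding a parent's hash in each path pins the whole ancestry, here is a toy sketch (plain Rust, using DefaultHasher as a stand-in for Nix's real hashing; the names and store_id helper are invented for this illustration): each package's identifier mixes in the identifiers of its dependencies, so a change anywhere upstream changes everything downstream.

// A toy stand-in for a Nix store path: the identifier hashes the name
// plus the identifiers of every dependency, so it is transitively pinned.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn store_id(name: &str, deps: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    for d in deps {
        d.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let glibc = store_id("glibc-2.27", &[]);
    let zlib = store_id("zlib-1.2.11", &[glibc]);
    let dhall = store_id("dhall", &[glibc, zlib]);
    // Bump glibc and both downstream identifiers change too.
    println!("/nix/store/{:016x}-dhall", dhall);
}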

Nix also allows users to install their own packages into the global nix store at /nix. No, you can't change this, but you can symlink it to another place if you (like me) have a partition setup with / having less disk space than /home. You also need to set a special environment variable so Nix shuts up about you doing this. This is really fun on macOS Catalina where the root filesystem is read only. There is a workaround (that I had to trawl into the depths of Google page cache to get, because of course I did), but the Nix team themselves seem unaware of it.

So, to recap: Nix is an attempt at a radically different approach to package management. It assumes too much about the state of everything and puts odd demands on people as a result. Language-specific package managers can and will fight Nix unless they are explicitly designed to handle Nix's weirdness. As a side effect of making its package management system usable by normal users, it exposes the package manager database to corruption by any user mistake, curl2bash or malicious program on the system. All that functional purity uwu and statelessness can vanish into a puff of logic without warning.

But everything's immutable so that means it's okay right?

Based on this twitter thread but a LOT less sarcastic.

Dhall for Kubernetes

Permalink - Posted on 2020-01-25 00:00

Dhall for Kubernetes

Kubernetes is a surprisingly complicated software package. Arguably, it has to be that complicated as a result of the problems it solves being complicated; but managing yaml configuration files for Kubernetes is a complicated task. YAML doesn't have support for variables or type metadata. This means that the validity (or sensibility) of a given Kubernetes configuration file (or files) isn't easy to figure out without using a Kubernetes server.

In my last post about Kubernetes, I mentioned I had developed a tool named dyson in order to help me manage Terraform as well as create Kubernetes manifests from a template. This works for the majority of my apps, but it is difficult to extend at this point for a few reasons:

  • It assumes that everything passed to it are already valid yaml terms
  • It doesn't assert the type of any values passed to it
  • It is difficult to add another container to a given deployment
  • Environment variables implicitly depend on the presence of a private git repo
  • It depends on the template being correct more than the output being correct

So, this won't scale. People in the community have created other solutions for this like Helm, but a lot of them have some of the same basic problems. Helm also assumes that your template is correct. Kustomize does help with a lot of the type-safe variable replacements, but it doesn't have the ability to ensure your manifest is valid.

I looked around for alternate solutions for a while and eventually found Dhall thanks to a friend. Dhall is a statically typed configuration language. This means that you can ensure that inputs are always the correct type or the configuration file won't load. There's also a built-in dhall-to-yaml tool that can be used with the Kubernetes package in order to declare Kubernetes manifests in a type-safe way.

Here's a small example of Dhall and the yaml it generates:

-- Mastodon usernames
[ { name = "Cadey", mastodon = "@cadey@mst3k.interlinked.me" }
, { name = "Nicole", mastodon = "@sharkgirl@mst3k.interlinked.me" }
]

Which produces:

- mastodon: "@cadey@mst3k.interlinked.me"
  name: Cadey
- mastodon: "@sharkgirl@mst3k.interlinked.me"
  name: Nicole

Which is fine, but we still have the type-safety problem that you would have in normal yaml. Dhall lets us define record types for this data like this:

let User =
      { Type = { name : Text, mastodon : Optional Text }
      , default = { name = "", mastodon = None Text }
      }

let users =
      [ User::{ name = "Cadey", mastodon = Some "@cadey@mst3k.interlinked.me" }
      , User::{
        , name = "Nicole"
        , mastodon = Some "@sharkgirl@mst3k.interlinked.me"
        }
      ]

in  users

Which produces:

- mastodon: "@cadey@mst3k.interlinked.me"
  name: Cadey
- mastodon: "@sharkgirl@mst3k.interlinked.me"
  name: Nicole

This is type-safe because you cannot add arbitrary fields to User instances without the compiler rejecting it. Let's add an invalid "preferred_language" field to Cadey's instance:

-- ...
let users =
      [ User::{
        , name = "Cadey"
        , mastodon = Some "@cadey@mst3k.interlinked.me"
        , preferred_language = "en-US"
        }
      -- ...

Which gives us:

$ dhall-to-yaml --file example.dhall
Error: Expression doesn't match annotation

{ + preferred_language : …
, …
}

4│         User::{ name = "Cadey", mastodon = Some "@cadey@mst3k.interlinked.me",
5│       preferred_language = "en-US" }


Or this more detailed explanation if you add the --explain flag to the dhall-to-yaml call.

We tried to do something that violated the contract that the type specified. This means that it's an invalid configuration and is therefore rejected and no yaml file is created.

The Dhall Kubernetes package specifies record types for every object available by default in Kubernetes. This does mean that the package is incredibly large, but it also makes sure that everything you could possibly want to do in Kubernetes matches what it expects. In the package documentation, they give an example where a Deployment is created.

-- examples/deploymentSimple.dhall

-- Importing other files is done by specifying the HTTPS URL/disk location of
-- the file. Attaching a sha256 hash (obtained with `dhall freeze`) allows
-- the Dhall compiler to cache these files and speed up configuration loads
-- drastically.
let kubernetes =
      https://raw.githubusercontent.com/dhall-lang/dhall-kubernetes/master/package.dhall

let deployment =
      kubernetes.Deployment::{
      , metadata = kubernetes.ObjectMeta::{ name = "nginx" }
      , spec =
          Some kubernetes.DeploymentSpec::{
            , replicas = Some 2
            , template =
                kubernetes.PodTemplateSpec::{
                , metadata = kubernetes.ObjectMeta::{ name = "nginx" }
                , spec =
                    Some kubernetes.PodSpec::{
                      , containers =
                          [ kubernetes.Container::{
                            , name = "nginx"
                            , image = Some "nginx:1.15.3"
                            , ports =
                                [ kubernetes.ContainerPort::{
                                  , containerPort = 80
                                  }
                                ]
                            }
                          ]
                      }
                }
            }
      }

in  deployment

Which creates the following yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      name: nginx
    spec:
      containers:
        - image: nginx:1.15.3
          name: nginx
          ports:
            - containerPort: 80

Dhall's lambda functions can help you break this into manageable chunks. For example, here's a Dhall function that helps create a docker image reference:

let formatImage
    : Text -> Text -> Text
    = \(repository : Text) -> \(tag : Text) -> "${repository}:${tag}"

in formatImage "xena/christinewebsite" "latest"

Which outputs xena/christinewebsite:latest when passed to dhall text.

All of this adds up into a powerful toolset that lets you express Kubernetes configuration in a way that does what you want without as many headaches.

Most of my apps on Kubernetes need only a few generic bits of configuration:

  • Their name
  • What port should be exposed
  • The domain that this service should be exposed on
  • How many replicas of the service are needed
  • Which Let's Encrypt Issuer to use (currently only "prod" or "staging")
  • The configuration variables of the service
  • Any other containers that may be needed for the service

From here, I defined all of the bits and pieces for the Kubernetes manifests that Dyson produces and then created a Config type that helps to template them out. Here's my Config type definition:

let kubernetes = ../kubernetes.dhall

in  { Type =
        { name : Text
        , appPort : Natural
        , image : Text
        , domain : Text
        , replicas : Natural
        , leIssuer : Text
        , envVars : List kubernetes.EnvVar.Type
        , otherContainers : List kubernetes.Container.Type
        }
    , default =
        { name = ""
        , appPort = 5000
        , image = ""
        , domain = ""
        , replicas = 1
        , leIssuer = "staging"
        , envVars = [] : List kubernetes.EnvVar.Type
        , otherContainers = [] : List kubernetes.Container.Type
        }
    }

Then I defined a makeApp function that creates everything I need to deploy my stuff on Kubernetes:

let Prelude = ../Prelude.dhall

let kubernetes = ../kubernetes.dhall

let typesUnion = ../typesUnion.dhall

let deployment = ../http/deployment.dhall

let ingress = ../http/ingress.dhall

let service = ../http/service.dhall

let Config = ../app/config.dhall

let K8sList = ../app/list.dhall

let buildService =
        \(config : Config.Type)
      -> let myService = service config

         let myDeployment = deployment config

         let myIngress = ingress config

         in  K8sList::{
             , items =
               [ typesUnion.Service myService
               , typesUnion.Deployment myDeployment
               , typesUnion.Ingress myIngress
               ]
             }

in  buildService

And used it to deploy the h language website:

let makeApp = ../app/make.dhall

let Config = ../app/config.dhall

let cfg =
      Config::{
      , name = "hlang"
      , appPort = 5000
      , image = "xena/hlang:latest"
      , domain = "h.christine.website"
      , leIssuer = "prod"
      }

in  makeApp cfg

Which produces the following Kubernetes config:

apiVersion: v1
items:
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
        external-dns.alpha.kubernetes.io/hostname: h.christine.website
        external-dns.alpha.kubernetes.io/ttl: "120"
      labels:
        app: hlang
      name: hlang
      namespace: apps
    spec:
      ports:
        - port: 5000
          targetPort: 5000
      selector:
        app: hlang
      type: ClusterIP
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hlang
      namespace: apps
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hlang
      template:
        metadata:
          labels:
            app: hlang
          name: hlang
        spec:
          containers:
            - image: xena/hlang:latest
              imagePullPolicy: Always
              name: web
              ports:
                - containerPort: 5000
          imagePullSecrets:
            - name: regcred
  - apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      annotations:
        certmanager.k8s.io/cluster-issuer: letsencrypt-prod
        kubernetes.io/ingress.class: nginx
      labels:
        app: hlang
      name: hlang
      namespace: apps
    spec:
      rules:
        - host: h.christine.website
          http:
            paths:
              - backend:
                  serviceName: hlang
                  servicePort: 5000
      tls:
        - hosts:
            - h.christine.website
          secretName: prod-certs-hlang
kind: List

And when I applied it on my Kubernetes cluster, it worked the first time and had absolutely no effect on the existing configuration.

In the future, I hope to expand this to allow for multiple deployments (i.e. a chatbot running in a separate deployment than a web API the chatbot depends on, or non-web projects in general) as well as supporting multiple Kubernetes namespaces.

Dhall is probably the most viable replacement for Helm or other Kubernetes templating tools that I have found in recent memory. I hope more people will use it to help with configuration management, but I can understand that that may not happen. At least it works for me.

If you want to learn more about Dhall, I suggest checking out the documentation on the Dhall website at https://dhall-lang.org.

I hope this was helpful and interesting. Be well.

Live Streaming Server Setup

Permalink - Posted on 2020-01-11 00:00

Live Streaming Server Setup

I have set up my own RTMP server that allows me to live stream to my own infrastructure. This lets me own my own setup and not need to rely on other services such as Twitch or YouTube. As a side effect, people who watch through my streaming server can use picture-in-picture mode on iPadOS without having to hack the streaming app, among other things.

This is part of my 2020 goal to reduce my dependencies on corporate social platforms as much as possible.

My setup is built out of a few key parts, each described below.

RTMP Server

I chose to use docker-nginx-rtmp as a pre-packaged solution for my RTMP server. This means I could set it up to ingest via my WireGuard VPN with very little work. Here is the docker command I run on my VPN host:

$ docker run \
  --restart always \
  -dit \
  -p <vpn ip>:1935:1935 \
  -p 8080:8080 \
  --name rtmp-server \
  <docker-nginx-rtmp image>

This starts my RTMP server in a container named rtmp-server and automatically restarts it when it goes down. The IP address in the first -p flag is the VPN IP address of my main VPN server. This means I have to be connected to my VPN in order to stream to my server, which matters given the total lack of authentication in RTMP.
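
Pushing a stream at it then works with any RTMP client. For example, with ffmpeg (the IP and the application path here are placeholders; use your VPN IP and whatever path your nginx-rtmp config expects):

$ ffmpeg -re -i demo.mp4 -c copy -f flv rtmp://<vpn ip>:1935/live/demo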

Stream Page

I have a custom stream page set up on my server that has a friendly little wrapper to the video player. Here is the source code for it. It's very short and easy to follow. I have these files at /srv/http/home.cetacean.club on my VPN server.

This wraps hls.js so that users on every browser I care to support can watch the stream as it happens.

Reverse Proxy

In order to expose the stream data to the world, I use Caddy as a reverse proxy. Here is the configuration that I use for Caddy:

home.cetacean.club {
  # Set up automagic Let's Encrypt
  tls me@christine.website

  # Proxy the playlist, stream data
  # and statistics to the rtmp server
  proxy /hls <rtmp server>:8080
  proxy /live <rtmp server>:8080
  proxy /stat <rtmp server>:8080

  # make /stream.html show up as /stream
  ext .html
  # serve data out of /srv/http/home.cetacean.club
  # you can put your HTTP document root
  # anywhere you want, but I like it being
  # here.
  root /srv/http/home.cetacean.club

For more information on the Caddy configuration directives used here, see the Caddy documentation.


Live streaming like this uses ABSURD amounts of bandwidth. Do not set this up on a server that has limited bandwidth. If you need a server that has unlimited bandwidth, check out SoYouStart. It's what I use.

There isn't a good story for recording or announcing streams to this server automatically. I don't consider this a problem, as links can always be sent out manually on social media platforms.

I hope this little overview of my setup was informative. I'll be streaming there very irregularly, mostly as time permits/the spirit moves me. I plan to stream art, gaming and code.

Thanks for reading, have a good day.

Cadey Alicia Ratio

Permalink - Posted on 2020-01-09 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

Time-lapse video - PSD on my Patreon (https://www.patreon.com/posts/i-arted-my-33016810)

Based on a base by DeviantArtist Kawaii-princess-paws.

V is for Vvork in Progress

Permalink - Posted on 2020-01-03 00:00

V is for Vvork in Progress

So, December has come and passed. I'm excited to see V 1.0 get released as a stable production-ready release so I can write production applications in it!

NOTE: I was asked to write this post after version 1.0 was released in December.

Looking at the description of their github repo over time, let's see how things changed:

Date from Archive.org | Stable Release Date
----------------------|----------------------
April 24, 2019        | Not mentioned
June 22, 2019         | Implied June 22, 2019
June 23, 2019         | Not mentioned
July 21, 2019         | 1.0 December 2019
September 8, 2019     | 1.0 December 2019
October 26, 2019      | 1.0 December 2019
November 19, 2019     | 0.2 November 2019
December 4, 2019      | 0.2 December 2019

As of the time of writing this post, it is January third, 2020 and the roadmap is apparently to release V 0.2 this month.

Let's see what's been fixed since my last article.

Compile Speed

I have gotten feedback that the metric I used for testing the compile speed claims was an unfair benchmark. Apparently it's not reasonable to put 1.2 million printfs in the same function. I'm going to fix this by making the test a bit more representative of real world code.

#!/usr/bin/env moon
-- this is Moonscript code: https://moonscript.org

with io.popen "mkdir hellomodule"
  print \read "*a"

for i=1, 1000
  with io.open "hellomodule/file_#{i}.v", "w"
    \write "module hellomodule\n\n"
    for j=1, 1200
      \write "pub fn print_#{i}_#{j}() { println('hello, #{i} #{j}!') }\n\n"

This creates 1000 files with 1200 functions in them. These numbers were derived from the greatest factor pairs of 1.2 million. If V lives up to its claims that it can build 1.2 million lines of code in a second, this should only take one second to run:

$ moon gen.moon
$ time ~/code/v/v build module $(pwd)/hellomodule/
Building module "hellomodule" (dir="/home/cadey/tmp/vmeme/moon/hellomodule")...
Generating a V header file for module `/home/cadey/tmp/vmeme/moon/hellomodule`
Building /home/cadey/.vmodules//home/cadey/tmp/vmeme/moon/hellomodule.o...
599.37user 13.35system 10:16.92elapsed 99%CPU (0avgtext+0avgdata 17059740maxresident)k
0inputs+2357808outputs (0major+7971041minor)pagefaults 0swaps

It took over 10 minutes to compile 1.2 million lines of code. Some interesting statistics about this run:

  • GCC's OOM score from the kernel's out-of-memory killer topped out at over 496
  • GCC used over 16 GB of ram
  • The V compiler used over 3 GB of ram
  • This is an average of 2000 lines of code per second!

As of the time of writing this article, the main V website mentions that the compiler should handle 100,000 lines of code per second, or about 50 times as fast as it compiled here.

This does not seem to be the case. It would be nice if the V author could clarify how he got his benchmarks and make his process public. Here's the /proc/cpuinfo of the machine I ran this test on:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz
stepping        : 9
microcode       : 0x20
cpu MHz         : 1596.375
cache size      : 8192 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
                  cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
                  pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts 
                  rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni
                  pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 
                  xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
                  aes xsave avx f16c rdrand lahf_lm cpuid_fault pti ssbd ibrs 
                  ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase 
                  smep erms xsaveopt dtherm ida arat pln pts flush_l1d
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips        : 6784.45
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

The resulting object file is 280 MB (surprising given the output of the generator script was only 67 MB).

$ cd ~/.vmodules/home/cadey/tmp/vmeme/moon/

$ ls

$ du -hs hellomodule.o
280M    hellomodule.o

Let's see how big the resulting binary is for calling one of these functions:

// main.v
import mymodule

fn main() {
    mymodule.print_1_1()
}

$ ~/code/v/v build main.v
main.v:1:14: cannot import module "mymodule" (not found)
    1| import mymodule
    3| fn main() {

...oh dear. Can someone file this as an issue for me? I was following the directions here and I wasn't able to get things working. I can't open issues myself because I've been banned from the V issue tracker, or I would have already.

Can we recover this with gcc? Let's get the symbol name with nm(1):

$ nm hellomodule.o  | grep print_1_1'$'
0000000000000000 T hellomodule__print_1_1

So the first print function is exported as hellomodule__print_1_1, and it was declared as:

pub fn print_1_1() { println('hello, 1 1!') }

This means we should be able to declare/use it like we would a normal C function that returns void and without arguments:

// main.c

void hellomodule__print_1_1();

void main__main() {
  hellomodule__print_1_1();
}

I copied hellomodule.o to the current working directory to test this. I also used the C output of the hello world program below and replaced the main__main function with a forward declaration. I called this hello.c. This is a very horrible no good hack but it worked enough to pass the linker's muster. Not doing this caused this shower of linker errors.

$ gcc -o main.o -c main.c
$ gcc -o hello.o -c hello.c
$ gcc -o main hellomodule.o main.o hello.o
$ ./main
hello, 1 1!

$ du -hs main
179M    main

Yikes. Let's see if we can reduce the binary size at all. strip(1) usually helps with this:

$ strip main
$ du -hs main
121M    main

Well that's a good chunk of it shaved off at least. It looks like there's no dead code elimination at play. This probably explains why the binary is so big.

$ strings main | grep hello | wc -l

Yep! It has all the strings. That's gonna be big no matter what you do. Maybe there could be some clever snipping of things, but it's reasonable for that to not happen by default.

Hello World Leak

One of the things I noted in my last post was that the Hello world program leaked memory. Let's see if this still happens:

// hello.v
fn main() {
        println('Hello, world!')
}

$ ~/code/v/v build hello.v
$ valgrind ./hello
==31465== Memcheck, a memory error detector
==31465== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==31465== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==31465== Command: ./hello
Hello, world!
==31465== HEAP SUMMARY:
==31465==     in use at exit: 0 bytes in 0 blocks
==31465==   total heap usage: 2 allocs, 2 frees, 2,024 bytes allocated
==31465== All heap blocks were freed -- no leaks are possible
==31465== For counts of detected and suppressed errors, rerun with: -v
==31465== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Nice! Let's see if the compiler leaks while building it:

$ valgrind ~/code/v/v build hello.v
==32295== Memcheck, a memory error detector
==32295== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==32295== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==32295== Command: /home/cadey/code/v/v build hello.v
==32295== HEAP SUMMARY:
==32295==     in use at exit: 4,600,383 bytes in 74,522 blocks
==32295==   total heap usage: 76,590 allocs, 2,068 frees, 6,452,537 bytes allocated
==32295== LEAK SUMMARY:
==32295==    definitely lost: 2,372,511 bytes in 56,223 blocks
==32295==    indirectly lost: 2,210,724 bytes in 18,077 blocks
==32295==      possibly lost: 0 bytes in 0 blocks
==32295==    still reachable: 17,148 bytes in 222 blocks
==32295==         suppressed: 0 bytes in 0 blocks
==32295== Rerun with --leak-check=full to see details of leaked memory
==32295== For counts of detected and suppressed errors, rerun with: -v
==32295== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

For comparison, this compile leaked 3,861,785 bytes of ram last time. This means that the compiler has overall gained roughly 0.7 megabytes of leaked ram in the last 6 months. This is worrying, given that V claims to not have a garbage collector. I can only wonder how much ram was leaked when building that giant module.

If your V program compiles, it's guaranteed that it's going to be leak free.

Quoted from here.

For giggles, let's see if V in module mode leaks ram somehow:

$ valgrind ./main
==15483== Memcheck, a memory error detector
==15483== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==15483== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==15483== Command: ./main
hello, 1 1!
==15483== HEAP SUMMARY:
==15483==     in use at exit: 0 bytes in 0 blocks
==15483==   total heap usage: 2 allocs, 2 frees, 2,024 bytes allocated
==15483== All heap blocks were freed -- no leaks are possible
==15483== For counts of detected and suppressed errors, rerun with: -v
==15483== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Nope! The hello world memory leak was actually fixed!

Other Claims

  • Vweb was shipped
  • Hot code reloading was shipped
  • Code translation is still vaporware
  • The compiler generates direct machine code

Code Translation

I've been really looking forward to this to see how 1:1 it can make the output. Let's see if you can use it.

$ ~/code/v/v help | grep translate
  translate         Translates C to V. [wip, will be available in V 0.3]

$ ~/code/v/v translate
Translating C to V will be available in V 0.3 (January)

This is confusing to me given that 0.2 is also supposed to come out in January, but whatever, I can let this slide.

The doom example is still only one file that doesn't even compile anymore.

I really do like how it handles extern functions though, you just declare them without bodies like C. Then it just figures things out for you. I wonder if this works with syscall functions too.

The Compiler Generates Direct Machine Code

In my testing I was unable to figure out how to get the V compiler to generate direct machine code. Until an example of this is released, I am quite skeptical of this claim.

Overall, V is a work in progress. It has made a lot of progress since the last time I talked about it, but the 1.0 release promise has been shattered. If I was going to suggest anything to the V author, don't give release dates or timetables. This kind of thing needs to be ready when it's ready and no sooner.

Also if you are writing a compiler and posting benchmarks, please make my life easier when trying to verify them. Put the entire repo you're using for the benchmarks somewhere. Include the exact commands you used to collect those benchmarks. Make it obvious how they were collected, what hardware they were run on, etc. This stuff really helps a lot when trying to verify them. Otherwise I have to guess, and I might get it wrong. I don't know if my benchmark is an entirely fair one, but given the lack of information on how to replicate it it's probably going to have to do.

Don’t ever, ever try to lie to the Internet, because they will catch you. They will deconstruct your spin. They will remember everything you ever say for eternity.

- Gabe Newell

How I set up an IRC daemon on Kubernetes

Permalink - Posted on 2019-12-21 00:00

How I set up an IRC daemon on Kubernetes

IRC. It's one of the last bastions of the old internet, and still an actively developed and researched protocol. Historically, IRC daemons have been notoriously annoying to set up and maintain. I have created an IRC daemon running on top of Kubernetes, which will hopefully help remove a lot of the pain points for my personal usage. Here's how I did it.

IRC is a simple protocol and only has a few major moving parts. IRC is made up of networks of servers that federate together as one logical unit. IRC is scalable from networks spanning one server to hundreds (though realistically you're not likely to find more than about 10 servers in a network).

At their core, IRC daemons implement a pub-sub protocol with a distributed state layer on top of it. TCP connections can represent either individual users or server trunking. Each user has their own state (nickname, ident and "real name"). Users can join channels, which can have their own state (modes, topic, timestamp and ban lists). Some servers limit the number of channels you can join.

So, with this in mind, let's start with a simple IRC daemon in a docker container. I chose ngircd for this because it's packaged in Alpine Linux. Let's create the configuration file ngircd.conf:

[Global]
Name = seaworld.yolo-swag.com
AdminInfo1 = ShadowNET Main Server
AdminInfo2 = New York, New York, USA
AdminInfo3 = Cadey Ratio <me@christine.website>
Info = Hosted on Kubernetes!
Listen =
MotdFile = /shadownet/motd
Network = ShadowNET
Ports = 6667
ServerGID = 65534
ServerUID = 65534

[Limits]
MaxJoins = 50
MaxNickLength = 31
MaxListSize = 100
PingTimeout = 120
PongTimeout = 20

[Options]
AllowedChannelTypes = #&+
AllowRemoteOper = yes
CloakUserToNick = yes
DNS = no
Ident = no
IncludeDir = /shadownet/secret
MorePrivacy = yes
NoticeBeforeRegistration = yes
OperCanUseMode = yes
OperChanPAutoOp = yes
PAM = no
PAMIsOptional = yes
RequireAuthPing = yes
# WebircPassword is set in secrets

[Channel]
Name = #lobby
Topic = Welcome to the new ShadowNET!
Modes = tn

[Channel]
Name = #help
Topic = Get help with ShadowNET | Ping an oper for help
Modes = tn

[Channel]
Name = #opers
Topic = Oper hideout
Modes = tnO

This is mostly based on the default settings in the example configuration file with a few glaring exceptions:

  • The server name is seaworld.yolo-swag.com, which will show up when users are connecting
  • My information is filled out for the admin information (which is shown when a user does /ADMIN in their client)
  • It has a lot of privacy-enhancing features set up
  • It disables the need to authenticate with PAM before being allowed to connect to the IRC server
  • Some default channel names are reserved

So, let's create a dockerfile for this:

FROM xena/alpine
COPY motd /shadownet/motd
COPY ngircd.conf /shadownet/ngircd.conf
RUN apk --no-cache add ngircd
COPY run.sh /
CMD ["/run.sh"]

motd is a plain text file that is used as the "message of the day" when users connect. Servers usually list their rules here. My motd has some ascii art and has this extra info:

The *new* irc.yolo-swag.com!

Connect on irc.within.website port 6667 or r4qrvdln2nvqyfbq.onion:6667

- Don't do things that make me have to write more rules here
- This rule makes you breathe manually

Now you can build and push this image to the docker hub.

You may have noticed earlier that a comment in the config file mentioned webirc. This is important for us because IRC servers normally assume that the remote host information in socket calls is accurate. My Kubernetes setup has at least one level of TCP proxying at work, so this assumption cannot hold. Webirc offers an authenticated mechanism that lets a proxy server lie about user IP addresses. My nginx-ingress setup uses the haproxy PROXY protocol to let underlying services know client IP addresses. So what we need is an adaptor from the haproxy PROXY protocol to webirc. I hacked one up:

package main

import (
	"crypto/md5"
	"flag"
	"fmt"
	"io"
	"log"
	"net"
	"strings"

	_ "github.com/joho/godotenv/autoload"
	proxyproto "github.com/pires/go-proxyproto" // a common PROXY protocol library; the original import was elided
	irc "gopkg.in/irc.v3"
)

var (
	webircPassword = flag.String("webirc-password", "", "the password for WEBIRC")
	webircIdent    = flag.String("webirc-ident", "snet", "the ident for WEBIRC")
	webircHost     = flag.String("webirc-host", "", "the host to connect to for WEBIRC")
	port           = flag.String("port", "5667", "port to listen on for PROXY traffic")
)

func main() {
	flag.Parse()

	list, err := net.Listen("tcp", ":"+*port)
	if err != nil {
		log.Fatal(err)
	}

	log.Printf("now listening on port %s, forwarding traffic to %s", *port, *webircHost)

	// The proxyproto listener unwraps the haproxy PROXY protocol header so
	// that conn.RemoteAddr() reports the real client address.
	proxyList := &proxyproto.Listener{Listener: list}

	for {
		conn, err := proxyList.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go dataTo(conn)
	}
}

func dataTo(conn net.Conn) {
	defer conn.Close()

	ip, _, err := net.SplitHostPort(conn.RemoteAddr().String())
	if err != nil {
		log.Printf("what, can't split remote address: %v", err)
		ev := irc.Message{
			Command: "QUIT",
			Params:  []string{"can't parse your remote address"},
		}

		fmt.Fprintln(conn, ev.String())
		return
	}

	peer, err := net.Dial("tcp", *webircHost)
	if err != nil {
		log.Println(*webircHost, err)
		return
	}
	defer peer.Close()

	spip := strings.Split(ip, ".")

	hostname := strings.Join([]string{
		Hash("snet", spip[0])[:8],
		Hash("snet", spip[0]+spip[1])[:8],
		Hash("snet", spip[0]+spip[1]+spip[2]+spip[3])[:8],
	}, ".")

	// WEBIRC <password> <gateway> <hostname> <ip>
	ev := irc.Message{
		Command: "WEBIRC",
		Params:  []string{*webircPassword, *webircIdent, hostname, ip},
	}
	fmt.Fprintf(peer, "%s\r\n", ev.String())

	go io.Copy(conn, peer)
	io.Copy(peer, conn)
}

// Hash is a simple wrapper around the MD5 algorithm implementation in the
// Go standard library. It takes in data and a salt and returns the hashed
// representation.
func Hash(data string, salt string) string {
	output := md5.Sum([]byte(data + salt))
	return fmt.Sprintf("%x", output)
}

This proxies connections from incoming TCP sockets to the IRC server. It also creates a fancy hostname for ngircd to use when people do a /whois on users. ngircd does have its own cloaking mechanism (which I am not using here), but I figure doing the splitting on IP address classes will make it easier to reliably ban users from channels.
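
To make the cloaking concrete, here's a tiny self-contained program (my own illustration, not part of the proxy) that computes the cloaked hostname for the documentation IP 203.0.113.42 using the same Hash helper:

package main

import (
	"crypto/md5"
	"fmt"
	"strings"
)

// Hash mirrors the helper from the proxy above.
func Hash(data string, salt string) string {
	output := md5.Sum([]byte(data + salt))
	return fmt.Sprintf("%x", output)
}

func main() {
	// 203.0.113.42 is a documentation address, used purely as an example.
	spip := strings.Split("203.0.113.42", ".")

	hostname := strings.Join([]string{
		Hash("snet", spip[0])[:8],
		Hash("snet", spip[0]+spip[1])[:8],
		Hash("snet", spip[0]+spip[1]+spip[2]+spip[3])[:8],
	}, ".")

	// Prints three dot-separated hash chunks. Clients in the same /8 or /16
	// share a prefix, so channel bans can match on that prefix.
	fmt.Println(hostname)
}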

Now, let's build this as a docker image and push it to the docker hub:

FROM xena/go:1.13.5 AS build
WORKDIR /shadownet
COPY go.mod .
COPY go.sum .
ENV GOPROXY https://cache.greedo.xeserv.us
RUN go mod download
COPY cmd ./cmd
RUN GOBIN=/shadownet/bin go install ./cmd/proxy2webirc

FROM xena/alpine
COPY --from=build /shadownet/bin/proxy2webirc /usr/local/bin/proxy2webirc
CMD ["/usr/local/bin/proxy2webirc"]

And now we get to wire this all up in a kubernetes manifest. Let's create a namespace:

# 00_namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: ircd

And now we need to create the secrets that the IRC daemon will use when operating. We need the webirc password and a few operator blocks. Let's make a script to create operator blocks:

# scripts/makeoper.sh

echo "[Operator]
Name = $1
Password = $(uuidgen)"

Then let's use it to create a few operator configs:

$ scripts/makeoper.sh Cadey >> opers.conf
$ scripts/makeoper.sh h >> opers.conf

And then create the webirc password:

$ echo "[Options]
WebircPassword = $(uuidgen)" >> webirc.conf

And then let's load these into a yaml file:

# 01_secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: config
  namespace: ircd
type: Opaque
stringData:
  opers.conf: |
    <contents of opers.conf>
  webirc.conf: |
    <contents of webirc.conf>

Now all we need is the irc daemon deployment itself that ties this all together:

# 02_ircd.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ircd
  namespace: ircd
  labels:
    app: ircd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ircd
  template:
    metadata:
      name: ircd
      labels:
        app: ircd
    spec:
      containers:
        - name: proxystrip
          image: shadownet/proxy2webirc:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5667
              name: proxiedirc
              protocol: TCP
          env:
            - name: WEBIRC_HOST
              value: <address of the ircd container>
            - name: WEBIRC_PASSWORD
              value: <password from webirc.conf>
        - name: ircd
          image: shadownet/ircd:latest
          imagePullPolicy: Always
          volumeMounts:
            - name: secretconfig
              mountPath: "/shadownet/secret"
      restartPolicy: Always
      volumes:
        - name: secretconfig
          secret:
            secretName: config
---
apiVersion: v1
kind: Service
metadata:
  name: ircd
  namespace: ircd
  labels:
    app: ircd
spec:
  ports:
    - port: 6667
      targetPort: 5667
      protocol: TCP
  selector:
    app: ircd
  type: NodePort

This will set up our IRC daemon to read the secrets from the filesystem at /shadownet/secret, which was configured as the IncludeDir in the ngircd config above.

At this point, your IRC daemon is ready to go and can be applied to your cluster whenever you want. However, it may also be interesting to set up a tor onion address for the IRC server. Using the tor operator, we can create a private key locally, load it as a kubernetes secret and then activate the tor hidden service:

$ openssl genrsa -out private_key 1024
$ kubectl create secret -n ircd generic ircd-tor-key --from-file=private_key

Now apply this manifest:

# 03_onion.yml
apiVersion: tor.k8s.io/v1alpha1
kind: OnionService
metadata:
  name: ircd
spec:
  version: 2
  selector:
    app: ircd
  ports:
    - targetPort: 6667
      publicPort: 6667
  privateKeySecret:
    name: ircd-tor-key
    key: private_key

Now, you should be able to let users connect to your IRC server to their heart's content. If you want to join the IRC server I've set up, point your IRC client at irc.within.website. I'll be in #lobby.
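
If you would rather connect from code than from a client, here is a minimal sketch in Go using the same gopkg.in/irc.v3 library the proxy above uses (the nickname is a placeholder; pick your own):

package main

import (
	"log"
	"net"

	irc "gopkg.in/irc.v3"
)

func main() {
	conn, err := net.Dial("tcp", "irc.within.website:6667")
	if err != nil {
		log.Fatal(err)
	}

	config := irc.ClientConfig{
		Nick: "demo-nick", // placeholder, pick your own
		User: "demo",
		Name: "demo user",
		Handler: irc.HandlerFunc(func(c *irc.Client, m *irc.Message) {
			// 001 is RPL_WELCOME, sent once registration finishes.
			if m.Command == "001" {
				c.Write("JOIN #lobby")
			}
		}),
	}

	client := irc.NewClient(conn, config)
	if err := client.Run(); err != nil {
		log.Fatal(err)
	}
}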

Olin Improvements

Permalink - Posted on 2019-12-14 00:00

Olin Improvements

Over the last week or so I've been making a lot of improvements to Olin in order to make it ready to be the kernel for the minimum viable product of wasmcloud. Here's an overview of the big things that have happened from version 0.1.1 to version 0.4.0.

What is Olin?

Olin is a userspace kernel designed for multi-tenant secure computing. It provides isolation via WebAssembly to limit the attack scope of malicious user input, resource accounting via its runtime statistics, and a familiar Unix-like API. It is the core that you can build a functions as a service platform on top of. For an example of Olin in action, please click here.

As Olin is just a kernel, it needs some work in order to really shine as a true child of the cloud. That work is coming over the next weeks and months.


Here is what has been done since the last Olin post:

  • An official, automated build of the example Olin components has been published to the Docker Hub
  • The Go ABI has been deprecated for the moment
  • The entrypoint of Olin programs has changed to _start
  • The beginning of support in the Zig standard library
  • Official binfmt_misc rigging has been created for experimentation

Official Docker Hub Build

The Docker Hub repo xena/olin is now automatically built off of the latest master release of Olin.

To use this image, run the following commands:

$ docker pull xena/olin:latest
$ docker run --rm -it xena/olin:latest sh

Then you can use the cwa tool to run programs in /wasm. See cwa -help for more information.

Deprecation of Go Support

For the moment, I am deprecating support for Go via GOOS=js GOARCH=wasm. The ABI for the Go compiler in this mode is too unstable for me right now. If other people want to fix abi/wasmgo to support Go 1.13 and newer, I would very much welcome the patches.

The Entrypoint is Now _start()

Early on in the experiments that make up Olin, I made a mistake in my fundamental understanding of how operating systems run programs. I thought that the main function would return the exit code of the program. This is not the case. There is a small shim that wraps the main function of your language and passes its result to exit(). Olin now copies this behavior. In order to return a value to the Olin runtime, you can either call runtime_exit() or return from the _start() function to exit with 0. Many thanks to Andrew Kelley for helping me realize this error.

This behavior is copied in the Olin rust package, which now has a fancy macro to automate the creation of the _start() function.

Zig Standard Library Support

I am waiting on Zig to release a new nightly version in order to enable it, but the bring-your-own-OS package support in Zig means that the Zig standard library is starting to be exposed into Olin programs. Here's an example based on the example program:

pub const os = @import("./olin/olin.zig");
const std = @import("std");

pub fn main() anyerror!void {
    std.debug.warn("All your base are belong to us.\n", .{});
}

binfmt_misc Rigging

For a while I've had a binfmt_misc configuration floating around in the Olin repo. Here's how to use it:

First, install Olin's cmd/cwa to /usr/local/bin:

$ cd cmd/cwa
$ go build
$ sudo mv cwa /usr/local/bin

Then activate the binfmt_misc configuration:

$ cd ../../run/binfmt_misc
$ cat cwa.cfg | sudo tee /proc/sys/fs/binfmt_misc/register

Then you can run Olin programs without calling cwa:

$ ./olinfetch.wasm


Policy Support

Olin now has a declarative policy engine for accessing external resources. This is inspired by OpenBSD's pledge() and macOS sandboxing. These policies allow setting the following attributes:

  • Resources an Olin program can access, matched by regular expressions
  • Resources an Olin program CANNOT access, matched by regular expressions
  • The maximum amount of memory an Olin program can use
  • The maximum number of WebAssembly instructions a WebAssembly program can execute

Here's an example policy file intended to help with relaying webhooks:

## This is an example policy, the ## signifies this line is a comment.

## These are the URL patterns that this handler can open:
allow (
  https://tulpa.dev/.*
  https://discordapp.com/api/webhooks/.*
)

## These are the URL patterns that this handler cannot open:
disallow (
  https://tulpa.dev/api/v1/admin/.*
)

## This is the ram limit in pages (64k each):
ram-page-limit 128

## This is the gas limit in instructions:
gas-limit 1048576

This would allow a WebAssembly program to open a HTTP socket to https://tulpa.dev (my git server) and Discord, but disallows any administrative API calls to my git server. It also allows the Olin program to use up to 128 pages of memory (about 8MB, which goes surprisingly far) and 1,048,576 (2^20) instructions. If the handler tries to open any resource that is not explicitly allowed, it is killed. If the handler tries to open a resource that is explicitly forbidden, it is killed. If the handler uses too much ram or too many instructions, it is killed.

This allows handlers to safely process user controlled input and even use that as part of the call to the open function.
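
For a feel of the semantics, here's a sketch of that allow/disallow logic in plain Go. This is my own illustration of the rules described above, not Olin's actual policy engine:

package main

import (
	"fmt"
	"regexp"
)

// checkURL applies the policy semantics described above: explicit
// disallow patterns win, then the URL must match an allow pattern,
// and anything unmatched is rejected.
func checkURL(url string, allow, disallow []*regexp.Regexp) bool {
	for _, re := range disallow {
		if re.MatchString(url) {
			return false
		}
	}
	for _, re := range allow {
		if re.MatchString(url) {
			return true
		}
	}
	return false
}

func main() {
	allow := []*regexp.Regexp{
		regexp.MustCompile(`^https://tulpa\.dev/`),
	}
	disallow := []*regexp.Regexp{
		regexp.MustCompile(`^https://tulpa\.dev/api/v1/admin/`),
	}

	fmt.Println(checkURL("https://tulpa.dev/Xe/olin", allow, disallow))            // true
	fmt.Println(checkURL("https://tulpa.dev/api/v1/admin/users", allow, disallow)) // false
}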

When policies are violated, the error thrown is a vibe check failure:

$ cwa -policy ../policy/testdata/gitea.policy httptest.wasm
httptest.wasm: 2019/12/13 13:16:15 info: making request 
  to https://xena.greedo.xeserv.us/files/hello_olin.txt
httptest.wasm: 2019/12/13 13:16:15 vibe check failed: 
  forbidden by policy
2019/12/13 13:16:15 httptest.wasm: exit status -1

runtime_exit() System Call

Along with making _start() the entrypoint, there comes a new problem: exiting. I fixed this by adding a runtime_exit() system call in Olin. When you call this function with the status code you want to return, execution of the Olin program instantly ends, uncleanly stopping everything and closing all files the program has open. This is similar to Linux's exit() system call.

It's probably best to save this call for cases where the program really can't/shouldn't continue executing, like for panic handlers.

Generic CGI Support

Previously there was a half-baked idea I called cwagi in Olin's codebase. The idea was to emulate part of how CGI worked in order to let Olin programs handle HTTP easily. I realized this was a mistake, so now it just supports normal CGI, conforming to RFC 3875.
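
To recap what that contract looks like, here's a minimal RFC 3875-style CGI program, written in plain Go for illustration: request metadata comes in via environment variables, the request body on standard in, and the response (headers, blank line, body) goes out on standard out.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Headers first, then a blank line, then the body.
	fmt.Printf("Content-Type: text/plain\n\n")
	fmt.Printf("hello from %s %s\n",
		os.Getenv("REQUEST_METHOD"), // e.g. GET
		os.Getenv("SCRIPT_NAME"))
}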

End of File Error

One of the more common errors in operating systems is the "end of file" error. It is raised when a file has no more data in it. Olin didn't have this, but now it does.
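
The semantics mirror what Go programmers already expect from io.EOF. As a plain-Go illustration of the idiom (not Olin's guest-side API):

package main

import (
	"fmt"
	"io"
	"log"
	"strings"
)

func main() {
	r := strings.NewReader("some data")
	buf := make([]byte, 4)

	for {
		n, err := r.Read(buf)
		if n > 0 {
			fmt.Printf("read %q\n", buf[:n])
		}
		if err == io.EOF {
			break // the file has no more data in it
		}
		if err != nil {
			log.Fatal(err)
		}
	}
}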

Wasmcloud Features

Thanks to these improvements, the following wasmcloud features have been implemented:

  • Updating handlers (wasmcloud update)
  • Deleting handlers (wasmcloud delete)
  • Listing deleted handlers (wasmcloud list -show-deleted)
  • Branding outgoing HTTP requests

As of the writing of this post, wasmcloud is currently 75% through the MVP development cycle. Here are the remaining issues:

  • CGI support for handlers
  • Policy support for handlers
  • Configuration variables for handlers

Overall, this project is fun. Here's to 1.0 happening soon! Be well.

Wasmcloud Progress: Hello, World!

Permalink - Posted on 2019-12-08 00:00

Wasmcloud Progress: Hello, World!

I have been working off and on over the last few years and have finally created the base of a functions as a service backend for WebAssembly code. I'm code-naming this wasmcloud. Wasmcloud is a pre-alpha prototype and is currently very much a work in progress. However, it's far enough along that I would like to explain what I have been doing for the last few years and what it's all built up to.

Here is a high level view of all of the parts that make up wasmcloud and how they correlate:

wasmcloud graphviz dependency map

Land: The Beginning

A little bit after I found WebAssembly I started to play with it. It seemed like it was too good to be true. A completely free and open source VM format that would run on almost any platform? Sounds like the kind of black magick witchcraft you hear about on Star Trek.

However, I kept at it and continued experimenting. I eventually came up with Land. This was a very simple thing and was really used to help me invent Dagger.

Dagger was an attempt at an incredible amount of minimalism. I based it on an extreme interpretation of the Unix philosophy (everything is a file -> everything is a bytestream) combined with some Plan 9 for flavor. It had only 5 system calls:

  • open() - opens a stream by URL, returning a stream descriptor
  • close() - closes a stream descriptor
  • read() - reads from a stream
  • write() - writes to a stream
  • flush() - flushes intermediate data and turns async behavior into synchronous behavior

And yet this was enough to implement a HTTP client.

The core guiding idea was that a cloud-native OS API should expose internet resources as easily as it exposes native resources. It should be as easy to use WebSockets as it is to use normal sockets. Additionally, all of the details should be abstracted away from the WebAssembly module. DNS resolution is not its job. TLS configuration is not its job. Its job is to run your code. Everything else should just be provided by the system.
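
As a toy illustration of that philosophy (and only that; this is not Dagger's actual ABI), here is what an "open a URL, get a bytestream" call could look like in plain Go:

package main

import (
	"fmt"
	"io"
	"log"
	"net"
	"net/url"
	"os"
)

// open hands back a bytestream for a URL, hiding the dialing details
// from the caller the way Dagger hides them from the WebAssembly module.
func open(resource string) (io.ReadWriteCloser, error) {
	u, err := url.Parse(resource)
	if err != nil {
		return nil, err
	}

	switch u.Scheme {
	case "tcp":
		return net.Dial("tcp", u.Host)
	default:
		return nil, fmt.Errorf("unknown scheme %q", u.Scheme)
	}
}

func main() {
	conn, err := open("tcp://example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	fmt.Fprintf(conn, "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
	io.Copy(os.Stdout, conn) // dump the response to standard out
}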

I wrote a blogpost about this work and even did a talk at GoCon Canada about it.

And this worked for several months as I learned WebAssembly and started to experiment with bigger and better things.

Olin: Phase 2

Land taught me a lot. I started to quickly run into the limits of Dagger though. I ended up needing calls like non-cryptographic entropy, environment variables, command-line arguments and getting the current time. After doing some research (and trying/failing to implement my own such API based on newlib) I found a library and specification called CommonWA. This claimed to offer a lot of what I was looking for. Namely URLs as filenames and all of the host interop support I could hope for. I named this platform Olin, or the One Language Intelligent Network.

However the specification was somewhat dead. The author of it had largely moved on to more ferrous pastures and I became one of the few users of it. I ended up forking the specification and implementing my view of what it should be.

I wrote a Rust implementation of the guest -> host API for the WebAssembly side of things. I forked some of the existing Rust code for this and gradually started adding more and more things. The test harness is the biggest wasm program I've written for a while. Seriously, there's a lot going on there. It tests every single function exposed in the CWA spec as well as all of the schemes I had implemented.

Over time I ended up testing Olin in more and more places and on more and more hardware. As a side effect of all of this being pure Go, it was very easy to cross compile for PowerPC, 32 bit arm (including a $9 arm board that lives under my desk) and even other targets that gccgo supports. I even ended up porting part of TempleOS to Olin as a proof of concept, but have more plans in the future for porting other parts of its kernel as a way to help people understand low-level operating system development.

I've even written a few blogposts about Olin:

But, this was great for running stuff interactively and via the command line. It left me wanting more. I wanted to have that mythical functions as a service backend that I've been dreaming of. So, I created wasmcloud.


As an interlude, I also created the h programming language during this time as a satirical parody of V. This ended up helping me test a lot of the core functionality that I had built up with Olin. Here's an example of a program in h:

h
And this compiles to:

 (import "h" "h" (func $h (param i32)))
 (func $h_main
       (local i32 i32 i32)
       (local.set 0 (i32.const 10))
       (local.set 1 (i32.const 104))
       (local.set 2 (i32.const 39))
       (call $h (get_local 1))
       (call $h (get_local 0))
 (export "h" (func $h_main))

This ends up printing:

h

I think this is the smallest (if not one of the smallest) quine generator in the world. I even got this program running on bare metal:


Wasmcloud is the culmination of all of this work. The goal of wasmcloud is to create a functions as a service backend for running people's code in an isolated server-side environment.

Users can use the wasmcloud command line tool to do everything at the moment:

$ wasmcloud
Usage: wasmcloud <flags> <subcommand> <subcommand args>

        commands         list all command names
        flags            describe all known top-level flags
        help             describe subcommands and their syntax

Subcommands for api:
        login            logs into wasmcloud
        whoami           show information about currently logged in user

Subcommands for handlers:
        create           create a new handler
        logs             shows logs for a handler

Subcommands for utils:
        namegen          show information about currently logged in user
        run              run a webassembly file with the same environment as production servers

Top-level flags (use "wasmcloud flags" for a full list):
  -api-server=http://wasmcloud.kahless.cetacean.club:3002: default API server
  -config=/home/cadey/.wasmc.json: default config location

This tool lets you do a few basic things:

  • Authenticate with the wasmcloud server
  • Create handlers from WebAssembly files that meet the CommonWA API as realized by Olin
  • Get logs for individual handler invocations
  • Run WebAssembly modules locally like they would get run on wasmcloud

Nearly all of the complexity is abstracted away from users as much as possible.

Future Steps

In the future I hope to do the following things:

  • Support updating handlers to new versions of the code
  • Support live-streaming of logs
  • Support handler deletion
  • Support bulk queue export
  • Support wasi for easier interoperability
  • Support more resource types such as websockets
  • Investigate porting the wasmcloud executor to Rust
  • Documentation/a book on how to use wasmcloud
  • Create an easier way to create accounts that can make handlers
  • Deploy to production somewhere


Every single one of these people was immeasurably helpful in this research over the years.

And many more I can't remember because it's been so many.

If you want to support my work, please do so via Patreon. It really means a lot to me and helps to keep the dream alive!

Toast Sandwich Recipe

Permalink - Posted on 2019-12-02 00:00

Toast Sandwich Recipe

Toast sandwiches. The concept may seem bizarre but the result is actually quite a delicious traditional meal. My great grandmother (twice removed) made these every day for us whenever we came over to visit. On her deathbed she made us swear that we would spread the joy and craft of toast sandwiches to the world.

Toast sandwiches date back to rural parts of England. A recipe book from 1861 is seen as the authoritative view of this practice. The book is a collection of recipes for various types of sandwiches. The first recipe is for a roast beef sandwich with a white bread and a slice of fresh tomato. This classic book truly stood the test of time and made it possible for future culinary artists to create a desired experience.

A lot of the sandwich recipes are also available in English and French. I've been making these with my grandmother's recipe and I've been making them for our family for years. I'm sure that the recipes are not only delicious but also practical and well-suited to the modern day. We've been making our own bread and using a variety of ingredients and techniques to make the sandwiches.

I also have a recipe for a classic Italian sandwich. It's a very popular Italian sandwich and I've made it many times. It's a great recipe to make and it's a great way to get a taste of Italian culture and food. I'm proud of it all.

Toast is an essential of the modern breakfast menu. It is created using a few fantastically complicated scientific processes yet it's trivial enough that you can buy a machine for $20 that will do it for you. There's even a fully automated toaster that capitalism won't let us have. It's a good thing we have a toaster. It's also a good thing that it's expensive. I'm not going to spend a fortune on a toaster. I'm going to buy a toaster.

But we don't have a toaster in our house. We have a toaster oven. And a toaster oven is a very simple appliance to make. All you need is a toaster and a bit of time. It's very easy to make. It takes about 10 minutes and you can use it to toast your eggs, toast your toast, toast your toast and toast your toast. It's that simple. But it's not that simple. It's not that easy to make. And it's not that easy to do. We have no toaster oven in our house. We have a toaster oven that we buy at the store.

My good friend Nicole loves these sandwiches, making it the thing she asks for time and time again. And she makes them for me. And I love them. And I'm going to show you how to make them for yourself and your friends. You're going to make these sandwiches. You're going to make these sandwiches.

By the way, have you heard about our lord and savior the instant pot? It's a pressure cooker made for the busy person in your life. It's a pressure cooker that you can use to make a batch of sandwiches in less than 30 minutes including the time it takes to cool down.

Thanks for reading my article on toast sandwiches. Hopefully this should help you make them. Don't forget the salt.

Orca Stranding

Permalink - Posted on 2019-11-16 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

Time-lapse video

Idea from this screenshot of Death Stranding.

The Gears and The Gods

Permalink - Posted on 2019-11-14 00:00

The Gears and The Gods

If there are any gods in computing, they are the authors of compilers. The output of compilers is treated as a Heavenly Decree, sometimes used for many sprints or even years after the output has been last emitted.

People trust this output to be Correct. To tell the machine what to do and by its will it be done. The compiler is itself a factory of servitors, each bound by the unholy runes inscribed into it in order to make the endless sequence of lights change colors in the right patterns.

The output of the work of the Gods is stored for later use when their might is needed. The work of the Gods however is a very fickle beast. Their words of power only make the gears turn when they are built with very specific gearing.

This means that people who rely on these sacred runes have to chain themselves to gearing patterns. Each year new ways of tricking the gears to run faster are developed. The ways the gears turn can be learned to be abused however to spill the secrets other gears are crunching on. These gearing patterns haven’t seen any real fundamental design changes in decades, because you never know when the output of the Old Gods is needed.

This means that the gears themselves are the chains that bind people to the past. The gears of computation. The gears made of sand we tricked into thinking with lightning.

But now the gears show their age. The gearing on the side of the gearing on the side of the gearing on the side of the gearing shows its ugly head.

But the Masses never question it. Even though they take hit after hit to performance of the gears.

What there needs to be is some kind of Apocalypse, a revealing of the faults in the gears. Maybe then the Masses will start to question their blind loyalty and chains binding them to the gears. Maybe they would be able to even try other gear patterns.

But this is just fantasy, nobody would WILLINGLY change the gearing patterns.

Would they?

But what about the experience they’ve come to expect from their old gears? Where they could swap out inputs to the gears with ease. Where the Output of the Gods of old still functions.

There needs to be a Better Way to switch gearings. But this kind of solution isn’t conducive to how people use the gears. People use the gears they do because they don’t care. They just want things to work “like they expect it to” and ignore things that don’t feed this addiction.

And THIS is why I’m such a big advocate for WebAssembly on the server. This lets you take the output of the Gods and store it in a way that it can be transparently upgraded to new sets of gearing. So that the future and the past can work in unison instead of being enemies.

Now, all that's left is to build a bridge. A bridge that will help to unite the past, the present and the future into a woven masterpiece of collaborative cocreation. Where the output of the gods is a weaker chain to the gears of old and can easily be adapted to the gears of new. Even the gears that nobody's even dreamed of yet.

Death Stranding Review

Permalink - Posted on 2019-11-11 00:00

Death Stranding Review

NOTE: There's gonna be spoilers here. Do not read if you are not okay with this. For a summary of the article without spoilers, this game is 10 out of 10 game of the year 2019 for me.

I have also been playing through this game on twitch and have streams archived here.

There's a long-standing rule of thumb to tell fiction apart from non-fiction. Fiction needs to make sense to the reader. Non-fiction does not. Death Stranding puts this paradigm on its head. It doesn't make sense out of the gate in the way most AAA games make sense.

In many AAA games it's very clear who the Big Bad is and who the John America is. John America defeats the Big Bad and spreads Freedom to the masses by force. In Death Stranding, you have no such preconceptions going into it. The first few hours are a chaotic mess of exposition without explanation. First there's a storm, then there's monsters, then there's a baby-powered locator, then you need to deliver stuff à la fetch quests, then there's Monster energy drinks, and the main currency of this game is Facebook likes (that mean and do absolutely nothing).

In short, Death Stranding doesn't try to make sense. It leaves questions unanswered. And this is honestly so refreshing in a day and age where entire plot points and the like are spoiled in trailers before the game's release date is even announced. Questions like: what is going on? Why are there monsters? What is the point of the game? Why the hell are there Monster energy drinks in your private room and canteen? Death Stranding answers only some of these over the course of gameplay.

The core of the gameplay loop is delivering cargo from point a to point b across a ruined America after the apocalypse. The main character is an absolute unit of a lad, able to carry over 120 kilograms of cargo on his back. As more and more cargo stacks up you create these comically tall towers of luggage that make balancing very difficult. You can hold on for balance using both of the shoulder buttons. The game maps each shoulder button to an arm of the player character. There's also a stamina system, and while you are gripping the cargo your stamina regenerates much more slowly than if you weren't doing that.

The game makes you deliver almost everything you can think of from medical aid to antimatter bombs. The antimatter bomb deliveries are really tricky because of how delicate they are. If you drop the antimatter bomb, it explodes and you instantly game over. If you hit a rock while carrying an antimatter bomb, it gets damaged. If it gets damaged too many times it explodes and you die. If it gets dropped into water it explodes and you die. And you have to carry the suckers over miles of terrain and even mountains.

This game handles scale very elegantly. The map is huge, even larger than Skyrim or Breath of the Wild. You are the UPS man who delivers packages, apocalypse be damned. This game gives you a lot of quiet downtime, which really lets you soak in the philosophical mindfuck that Kojima cooked up for us all. As you approach major cities, guitar and vocal music comes in and the other sound effects of the game quiet down. It overall creates a very sobering and solemn mood that I just can't get enough of. It seems like it wouldn't fit in a game where you use your own blood to defeat monsters and drink monster energy out of your canteen, but it totally does.

There is some mild product placement. Your canteen is full of Monster energy drink. Yes, that Monster. Making the player defecate shows you an ad for an AMC show. There's also monster energy drinks in your safe room that increase your max stamina for a bit. I'm not entirely sure if the product placement was chosen to be there for artistic reasons (it's surreal as all hell and helps to complement the confusing aspects of the game), but it's very non-intrusive and can be ignored with little risk.

This game also has online components. Every time you build a structure in areas linked to the chiral network, other players can use, interact with and upgrade them so they can do more things. Other players can also give you likes, which again do nothing. Upgrading a zipline makes it able to handle a larger distance or updating a safe house lets you play music when people walk by it. It really helps to build the motif of rebuilding America. There is however room for people to troll others. Here's an example of this. There's a troll ladder to nowhere. There's a lot of those laying around mountains, so be on your guard.

Overall, Death Stranding is a fantastic game. It's hard. It's unforgiving. But the real thing that advances is the skill of the player. You make the deliveries. You go the distance. You do your job as the post-apocalyptic UPS man that America needs.

UPS Simulator 2019

By mmmintdesign source

Score: 10 out of 10
Christine Dodrill's Game of the Year 2019


Permalink - Posted on 2019-11-01 00:00

Created with Affinity Designer on iPadOS using an iPad Pro and an Apple Pencil.

Blog Feature: Art Gallery

Permalink - Posted on 2019-11-01 00:00

Blog Feature: Art Gallery

I have just implemented support for my portfolio site to also function as an art gallery. See all of my posted art here.

I have been trying to get better at art for a while and I feel I'm at the level where I feel comfortable putting it on my portfolio. Let's see how far this rabbit hole goes.

Also this is my 100th post! Yay!

Get Going: Hello, World!

Permalink - Posted on 2019-10-28 00:00

Get Going: Hello, World!

This post is a draft of the first chapter in a book I'm writing to help people learn the Go programming language. It's aimed at people who understand the high level concepts of programming, but haven't had much practical experience with it. This is a sort of spiritual successor to my old Getting Started with Go post from 2015. A lot has changed in the ecosystem since then, as well as my understanding of the language.

Like always, feedback is very welcome. Any feedback I get will be used to help make this book even better.

This article is a bit of an expanded version of what the first chapter will eventually be. I also plan to turn a version of this article into a workshop for my dayjob.

What is Go?

Go is a compiled programming language made by Google. It has a lot of features out of the box, including:

  • A static type system
  • Fast compile times
  • Efficient code generation
  • Parallel programming for free*
  • A strong standard library
  • Cross-compilation with ease (including webassembly)
  • and more!

* You still have to write code that can avoid race conditions, more on those later.

Why Use Go?

Go is a very easy to read and write programming language. Consider this snippet:

func Add(x int, y int) int {
  return x + y
}

This function wraps integer addition. When you call it, it returns the sum of x and y.
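
Here is a complete program that uses it:

package main

import "fmt"

func Add(x int, y int) int {
  return x + y
}

func main() {
  fmt.Println(Add(2, 3)) // prints 5
}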

Installing Go


Linux

Installing Go on Linux systems is a very distribution-specific thing. Please see this tutorial on DigitalOcean for more information.


macOS

  • Go to https://golang.org/dl
  • Download the .pkg file
  • Double-click on it and go through the installer process


Windows

  • Go to https://golang.org/dl
  • Download the .msi file
  • Double-click on it and go through the installer process

Next Steps

These next steps are needed to set up your shell for Go programs.

Pick a directory you want to store Go programs and downloaded source code in. This is called your GOPATH. This is usually the go folder in your home directory. If for some reason you want another folder for this, use that folder instead of $HOME/go below.


This next step is unfortunately shell-specific. To find out what shell you are using, run the following command in your terminal:

$ env | grep SHELL

The name at the end of the path will be the shell you are using.


If you are using bash, add the following lines to your .bashrc (Linux) or .bash_profile (macOS):

export GOPATH=$HOME/go
export PATH="$PATH:$GOPATH/bin"

Then reload the configuration by closing and re-opening your terminal.


If you are using fish, create a file in ~/.config/fish/conf.d/go.fish with the following lines:

set -gx GOPATH $HOME/go
set -gx PATH $PATH "$GOPATH/bin"

If you are using zsh, add the following lines to your .zshrc:

export GOPATH=$HOME/go
export PATH="$PATH:$GOPATH/bin"


If you are using Windows, follow the instructions here.

Installing a Text Editor

For this book, we will be using VS Code. Download and install it from https://code.visualstudio.com. The default settings will let you work with Go code.

Hello, world!

Now that everything is installed, let's test it with the classic "Hello, world!" program. Create a folder in your home folder called Code. Create another folder inside that Code folder called get_going, and create yet another subfolder inside it called hello. Open a file in there with VS Code (Open Folder -> Code -> get_going -> hello) called hello.go and type in the following:

// Command hello is your first Go program.
package main

import "fmt"

func main() {
  fmt.Println("Hello, world!")
}

This program prints "Hello, world!" and then immediately exits. Here's each of the parts in detail:

// Command hello is your first Go program.
package main                   // Every go file must be in a package. 
                               // Package main is used for creating executable files.

import "fmt"                   // Go doesn't implicitly import anything. You need to 
                               // explicitly import "fmt" for printing text to 
                               // standard output.

func main() {                  // func main is the entrypoint of the program, or 
                               // where the computer starts executing your code
  fmt.Println("Hello, world!") // This prints "Hello, world!" followed by a newline
                               // to standard output.
}                              // This ends the main function

Now click over to the terminal at the bottom of the VS Code window and run this program with the following command:

$ go run hello.go
Hello, world!

go run compiles and runs the code for you, without creating a persistent binary file. This is a good way to run programs while you are writing them.

To create a binary, use go build:

$ go build hello.go
$ ./hello
Hello, world!

go build has the compiler create a persistent binary file and put it in the same directory you are running go from. Go will choose the filename of the binary based on the name of the .go file passed to it. These binaries are usually static binaries, or binaries that are safe to distribute to other computers without having to worry about linked libraries.
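
As a hedged aside, this is also where the cross-compilation mentioned earlier comes in: setting the GOOS and GOARCH environment variables before go build tells the compiler to target another operating system and processor. For example, building a Windows binary from Linux or macOS:

$ GOOS=windows GOARCH=amd64 go build hello.go

This produces hello.exe, which runs on Windows with no extra toolchain needed.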


The following is a list of optional exercises that may help you understand more:

  1. Replace the "world" in "Hello, world!" with your name.
  2. Rename hello.go to main.go. Does everything still work?
  3. Read through the documentation of the fmt package.

And that about wraps it up for Lesson 1 in Go. Like I mentioned before, feedback on this helps a lot.

Up next is an overview of data types such as integers, true/false booleans, floating-point numbers and strings.
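
As a small preview (a sketch, not the final chapter text), declaring values of those types looks like this:

package main

import "fmt"

func main() {
  var age int = 34         // an integer
  var happy bool = true    // a true/false boolean
  var pi float64 = 3.14159 // a floating-point number
  var name string = "Mara" // a string
  fmt.Println(age, happy, pi, name)
}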

I plan to post the book source code on my GitHub page once I have more than one chapter drafted.

Thanks and be well.


Within Security Advisory

Permalink - Posted on 2019-10-22 00:00


Within Security Advisory

Multiple vulnerabilities in the mysqljs API and code.

Security Warning Level: yikes/10


There are multiple issues exploitable by local and remote actors in mysqljs. These can cause application data leaks, database leaks, SQL injections, arbitrary code execution, and credential leaks among other things.

Mysqljs is unversioned, so it is difficult, if not impossible, to tell how many users are affected by this and what users can do in order to ensure they are patched against these critical vulnerabilities.


Mysqljs is a library intended to facilitate prototyping web applications and mobile applications using technologies such as PhoneGap or Cordova. These technologies allow developers to create a web application that gets packaged and presented to users as if it was a native application.

This library is intended to help with developers creating persistent storage for these applications.

Issues in Detail

There are at least seven vulnerabilities in this library; each of them is outlined below with a fairly vague level of detail.

mysql.js is NOT versioned

The only version information I was able to find is the following:

  • The Last-Modified date of Friday, March 11 2016
  • The ETag of 80edc3e5a87bd11:0

These header values correlate to a vulnerable version of the mysql.js file.

A copy of this file is embedded below for purposes of explanation:

var MySql = {
    _internalCallback : function() { console.log("Callback not set")},
    Execute: function (Host, Username, Password, Database, Sql, Callback) {
        MySql._internalCallback = Callback;
        // to-do: change localhost: to mysqljs.com
        var strSrc = "http://mysqljs.com/sql.aspx?";
        strSrc += "Host=" + Host;
        strSrc += "&Username=" + Username;
        strSrc += "&Password=" + Password;
        strSrc += "&Database=" + Database;
        strSrc += "&sql=" + Sql;
        strSrc += "&Callback=MySql._internalCallback";
        var sqlScript = document.createElement('script');
        sqlScript.setAttribute('src', strSrc);
        // ...the script element is then added to the page; the remainder of
        // the file is elided here.
    }
};

Fundamental Operation via Cross-Site Scripting

The code operates by creating a <script> element. The Javascript source of this script is dynamically generated by the remote API server. This opens the door for many kinds of Cross-Site Scripting attacks.

Especially because:

Credentials Exposed over Plain HTTP

The script works by creating a <script> element pointed at an HTTP resource in order to facilitate access to the MySQL server. Line 6 shows that the API server in question is being queried over UNENCRYPTED HTTP.

var strSrc = "http://mysqljs.com/sql.aspx?";

Credentials and SQL Queries Are Not URL-Encoded Before Adding Them to a URL

Credentials and SQL queries are not URL-encoded before they are added to the strSrc URL. This means that values may include other HTTP parameters that could be evaluated, causing one of the two following issues:

Potential for SQL Injection from Malformed User Input

It appears this API works by people submitting plain text SQL queries. It is likely difficult to write these plain text queries in a way that avoids SQL injection attacks.

Potential for Arbitrary Code Execution

Combined with the previous issues, a SQL injection that inserts arbitrary Javascript into the result will end up creating an arbitrary code execution bug. This could let an attacker execute custom Javascript code on the page, which may have even more disastrous consequences depending on the usage of this library.

Server-Side Code has Unknown Logging Enabled

This means that user credentials and database results may be logged, stored and leaked by the mysql.js API server without user knowledge. The server that is running the API server may also do additional logging of database credentials and results without user knowledge.

Encourages Bad Practices

Mysql.js works by its API server dialing out an UNENCRYPTED connection to your MySQL server over the internet. This requires exposing your MySQL server to the internet. This means that user credentials are vulnerable to anyone who has packet capture abilities.

Mysql.js also encourages developers to commit database credentials into their application source code. Cursory searching of GitHub has found examples of this. I can only imagine there are countless other potential victims.

Security Suggestions

  • Do not, under any circumstances, allow connections to be made without the use of TLS (HTTPS).
  • Version the library.
  • Offer the source code of the API server to allow users to inspect it and ensure their credentials are not being stored by it.
  • Detail how the IIS server powering this service is configured, proving that it is not keeping unsanitized access logs.
  • Ensure all logging methods sanitize or remove user credentials.
  • URL-encode all values being sent as part of a URL (see the sketch after this list).
  • Do not have your service fundamentally operate as a Cross-Site Scripting attack.
  • Do not, under any circumstances, encourage developers to put database credentials in the source code of front-end web applications.
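
To illustrate the URL-encoding point, here is a minimal sketch in Go (not a patch for mysqljs itself; the endpoint and values are made up) of building the same kind of query string with every value escaped:

package main

import (
  "fmt"
  "net/url"
)

func main() {
  v := url.Values{}
  v.Set("Username", "user")
  v.Set("Password", "hunter2&admin=true") // special characters get escaped
  v.Set("sql", "SELECT * FROM users")
  fmt.Println("https://example.com/sql.aspx?" + v.Encode())
  // prints a query string where no value can smuggle in extra parameters
}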

In summary, we label this a solid yikes/10 in terms of security. It would be advisable for current users of this library to re-evaluate the life decisions that have led them down this path.


Über thanks to jadr2ddude for helping with identifying the unfortunate scope of these massive security issues.

Hyper thanks to J for coming up with a viable GitHub search for potentially affected users.

Outsider Art and Anathema

Permalink - Posted on 2019-10-21 00:00

Outsider Art and Anathema

This was going to be a post about Urbit at first; but in the process of discussing my interest in writing something positive about it, I was warned by a few people that this was a Bad Idea. I was focusing purely on the technical side of it and how closely it implemented a concept called liquid software, but from what people were saying, it seemed like a creation that was spoiled by something outside of it, specifically the creator's political views (of which I had little idea at the time).

As much as I will probably return to the original concept in the future with another post, this feels like something I had to address first.

DISCLAIMER: This post references projects and people that the mainstream considers controversial. This post is not an approval of these people's views. I am focusing purely on the aspect of how this correlates with how art is perceived, recognized and able to be admired. I realize that the people behind the projects I have cited have said things that, if taken seriously at a societal level, could hurt me and people like me. That is not the point of this; I am trying to learn how this art works so I can create my own in the future. If this is uncomfortable for you at any point, please close this browser tab and do something else.


So, what is art?

This is a surprisingly hard question to answer. Most of the time though, I know art when I see it.

Art doesn't have to follow conventional ideas of what most people think "art" is. Art can be just about anything that you can classify as art. As a conventional example, consider something like the Mona Lisa:

The Mona Lisa, the most famous painting in the world

People will accept this as art without much argument. It's a painting, it obviously took a lot of skill and time to create. It is said that Leonardo Da Vinci (the artist of the painting) created it partially as a contribution to the state of the art of oil painting.

So that painting is art, and a lot of people would consider it art; so what would a lot of people not consider art? Here's an example:

Untitled (Perfect Lovers) by Felix Gonzalez-Torres

This is Untitled (Perfect Lovers) by Felix Gonzalez-Torres. If you just take a look at it without context, it's just two battery-operated clocks on a wall. Where is the expertise that goes into this? This is just the result of someone buying two clocks from the store and putting them somewhere, right?

Let's dig into the description of the piece:

Initially set to the same time, these identical battery-powered clocks will eventually fall out of sync, or may stop entirely. Conceived shortly after Gonzalez-Torres’s partner was diagnosed with AIDS, this work uses everyday objects to track and measure the inevitable flow of time. When one of the clocks stops or breaks, they can both be reset, thereby resuming perfect synchrony. In 1991, Gonzalez-Torres reflected, “Time is something that scares me. . . or used to. This piece I made with the two clocks was the scariest thing I have ever done. I wanted to face it. I wanted those two clocks right in front of me, ticking.”

And after reading that description, it's impossible for me to say this image is not art. Even though it's made up of ordinary objects, the art comes out in the way that the clocks' eventual death relates to the eventual death of the author and their partner.

This art may be located on the fringes of what people consider "art". So what else is on the fringes?

Outsider Art

For there to be "fringes" to the art landscape, there must be an "inside" and "outside" to it. In particular, the "outsider" art usually (but not always) contains elements and themes that are outside of the mainstream. Outsiders are therefore more free to explore ideas, concepts and ways of expression that defy cultural, spiritual or other norms. Logically, every major art style you know and love started as outsider art, before it was cool. Memes are also a form of outsider art, though they are gradually being accepted into the mainstream.

It's very easy to find outsider art if you are looking for it: just fish for some on Twitter, 4chan or Reddit; you'll find plenty of artists there who are placed firmly outside of the mainstream art community.

Computer Science

Computer science is a kind of art. It's the art of turning contextual events into effects and state. It's also the art of creating solutions for problems that could never be solved before. It's also the science of how to connect millions of people across common protocols and abstractions that they don't have to understand in order to use.

This is an art that connects millions and has shaped itself into an industry of its own. This art, like the rest of mainstream art, keeps evolving, growing and changing into something new; into a more ultimate and detailed expression of what it can be, as people explore the ways it can be created and presented. This art is also quite special because it's not very limited by physical objects or expressions in material space. It's an art that can evolve and change with the viewer.

But, since this is an art, there's still an inside and an outside. Things on the inside are generally "safe" for people to admire, use and look at. The inside contains things like Linux, Docker, Kubernetes, Intel, C, Go, PHP, Ruby and other well-known and battle-proven tools.

The Outside

The outside, however, is where the real innovation happens. The outside is where people can really take a more critical look at what computing is, does or can be. These views can add up into fundamentally different ways of looking at computer science, much like changing a pair of glasses for another changes how you see the world around you.

As an example, consider TempleOS. It's a work of outsider art by Terry Davis (1969-2018, RIP), but it's also a fully functional operating system. It has a custom-built kernel, compiler, toolchain, userland, debugger, games, and documentation system, each integrated into everything else, in ways that could realistically not be done with how mainstream software is commonly developed.

Urbit is another example of this. It's a fundamentally different way of looking at networked computing. Everything in Urbit is seamlessly interlinked with everything else, to the point that it can be surprising that a file you are working with actually lives on another computer. It makes software updates invisible to the user. It allows for the model of liquid software: updates to a program flow into users' computers without the users having to care about them. Users don't even notice the downtime.

As yet another example, consider Minecraft. As of the writing of this article, it is the video game with the most copies sold in human history. It is an open world block building game where the limits of what you can make are the limits of your imagination. It has been continuously updated, refined and improved from a minimal proof of concept into the game it is today.

The Seam

Consider this quote that comes into play a lot with outsider art:

Genius and insanity are differentiated only by context. One person's genius is another person's insanity.

  • Anonymous

These three projects are developed by people whom the mainstream has cast out. Terry Davis' mental health issues and delusions about hearing the voice of God have tainted TempleOS to be that "weird bible OS" to the point where people completely disregard it. Urbit was partially created by a right-wing reactionary (Curtis Yarvin). He has been so ostracized that he cannot publicly talk about his work to the kind of people that would most directly benefit from learning about it. Curtis isn't even involved with Urbit anymore, and his name is still somehow an irrevocable black mark on the entire thing. Minecraft was initially created by Notch, who recently had intro texts mentioning his name removed from the game after he said questionable things about transgender people.


This "irrevocable" black mark has a name: Anathema. It refers to anything that is shunned by the mainstream. Outsiders that create outsider art may or may not be anathema to their respective mainstreams. This turns the art into a taboo, a curse, a stain. People no longer see an anathema as the art it is, but merely the worthless product of someone that society would probably rather forget if it had the chance.

I don't really know how well this sits with me, personally. Outsiders have unique views of the world that can provide ideas that ultimately strengthen us all. Society's role is to disseminate mainstream improvements to large groups, but real development happens at the personal level.

Does one bad apple really spoil the sociological bunch? Why does this happen? Have the political divides gotten so deeply entrenched into society that people really become beyond reproach? Isn't this a recursive trap? How does someone redeem themselves to no longer be an anathema? Is it possible for people who are anathema to redeem themselves? Why or why not? Is there room for forgiveness, or does the original sin doom the sinner eternally, much like it does in Catholicism?

Are the creations of an anathema outsider artist still art? Are they still an artist even though they become unable to share their art with others?

I don't know. These are hard questions. I don't really have much of a conclusion here. I don't want to seem like I'm trying to prescribe a method of thinking here. I'm just sitting on the side and spouting out ideas to inspire people to think for themselves.

I'm just challenging you, the reader, to really think about what/who is and is not an anathema in your day-to-day life. Identify them. Understand where/who they are. Maybe even apply some compassion and attempt to understand their view and how they got there. I'm not saying to put yourself in danger, but just to be mindful of it.

Be well.

Special thanks to CelestialBoon, Grapz and MoonGoodGryph for proofreading and helping with this post. This would be a very different article without their feedback and input.

Don't Look Into the Light

Permalink - Posted on 2019-10-06 00:00

Don’t Look Into the Light

So at a previous job, we maintained a system. This system powered a significant part of the core of how the product was actually used (as far as usage metrics reported). Over time, we had bolted something onto the side of this product to take actions based on the numbers the product was tracking.

After a few years of cycling through various people, this system was very hard to understand. Data would flow in on one end, go to an aggregation layer, then get sent to storage and another aggregation layer, and then eventually all of the metrics were calculated. This system was fairly expensive to operate, and it was stressing the datastores it relied on beyond what other companies called theoretical limits. Oh, and to make things even more fun: the part that takes actions based on the data was barely keeping up with what it needed to do. It was supposed to run each of the checks once a minute, and it was running all of them in 57 seconds.

During a planning meeting we started to complain about the state of the world and how godawful everything had become. The undocumented (and probably undocumentable) organic nature of the system had gotten out of hand. We thought we could kill two birds with one stone: subsume another product that took action based on data, and create a generic platform to reimplement the older action-taking layer on top of.

The rules were set, the groundwork was laid. We decided:

  • This would be a Big Rewrite based on all of the lessons we had learned from the past operating the behemoth
  • This project would be future-proof
  • This project would have 75% test coverage as reported by CI
  • This project would be built with a microservices architecture

Those of you who have been down this road before probably have massive alarm bells going off in your head. This is one of those things that looks like a good idea on paper, can probably be passed off as a good idea to management and actually implemented; as happened here.

So we set off on our quest to write this software. The repo was created. CI was configured. The scripts were optimized to dump out code coverage as output. We strived to document everything on day 1. We took advantage of the datastore we were using. Everything was looking great.

Then the product team came in and noticed fresh meat. They soon realized that this could be a Big Thing to customers, and they wanted to get in on it as soon as possible. So we suddenly had our deadlines pushed forward and needed to get the whole thing into testing yesterday.

We set it up, set a trigger for a task, and it worked in testing. After a while of it consistently doing that with the continuous functional testing tooling, we told product it was okay to have a VERY LIMITED set of customers have at it.

That was a mistake. It fell apart the second customers touched it. We struggled to understand why. We dug into the core of the beast we had just created and managed to discover we made critical fundamental errors. The heart of the task matching code was this monstrosity of a cross join that took the other people on the team a few sheets of graph paper to break down and understand. The task execution layer worked perfectly in testing, but almost never in production.

And after a week of solid debugging (including making deals with other teams, satan, jesus and the pope to try and understand it), we had made no progress. It was almost as if there was some kind of gremlin in the code that was just randomly making things not fire if it wasn’t one of our internal users triggering it.

We had to apologize to the product team. Apparently a lot of the product team had to go on damage control as a result of this. I can only imagine the trickle-down impact this had on other projects internal to the company.

The lesson here is threefold. First, the Big Rewrite is almost a sure-fire way to ensure a project fails. Avoid that temptation. Don’t look into the light. It looks nice, it may even feel nice. Statistically speaking, it’s not nice when you get to the other side of it.

The second lesson is that making something microservices out of the gate is a terrible idea. Microservices architectures are not planned. They are an evolutionary result, not a fully anticipated feature.

Finally, don’t “design for the future”. The future hasn’t happened yet. Nobody knows how it’s going to turn out. The future is going to happen, and you can either adapt to it as it happens in the Now or fail to. Don’t make things overly modular; that leads to insane things like dynamically linking parts of an application over HTTP.

If you 'future proof' a system you build today, chances are when the future arrives the system will be unmaintainable or incomprehensible.
- John Murphy

This kind of advice is probably gonna feel like a slap to the face to a lot of people. People really put their heart into their work. It feeds egos massively. It can be very painful to have to say no to something someone is really passionate about. It can even lead to people changing their career plans depending on the person.

But this is the truth of the matter as far as I can tell. This is generally what happens during the Big Rewrite centred around Best Practices for Cloud Native software.

The most successful design decisions are wholly and utterly subjective to every kind of project you come across. What works in system A probably won’t work perfectly in system B. Everything is its own unique snowflake. Embrace this.

Compile Stress Test

Permalink - Posted on 2019-10-03 00:00

This is an experiment in blogging. I am going to be putting my tweets and select replies one after another without commentary.

Meanwhile the same thing in Go took 5 minutes and I was able to run it on my desktop instead of having to rent a server from AWS.

The Cheese Dream

Permalink - Posted on 2019-10-01 00:00

The Cheese Dream

I wake up on a bed I've never seen before. I look up at the white sky. Wait, the white sky? I look down at my blanket and it has a very weird, but distinct smell. Is it cheese? I break a part of it off and taste it. My blanket is made out of cheese. I feel around the bed and it feels slightly pliable, almost like the bed is made out of cheese too. I take off the blanket, tearing huge holes in it in the process.

I try to lean up but there's something pulling between my shoulders when I do. With some force I hear a slight sucking and popping noise. My dorsal fin (I have a dorsal fin?) was stuck in the cheese bed. This is odd. What the heck is going on?

I get up and open the cheese drawer; at least my clothes aren't cheese too. I put them on and take a few deep breaths. This is gonna be an experience.

I look around and see a field of mozzarella with cheese sticks for grass. There's a molten cheese river with a cheese bridge crossing it, and a cheese town in the distance. There's a cheese path at my feet leading to the cheese bridge. I call out to see if anyone is there; nobody answers.

I walk down the cheese path, smelling the light scent of the cheese river in the distance. Every time I take a step I leave a footprint in the cheese path; it slowly reforms back into place after I step off of it.

When I get closer to the cheese town, there is a person made out of cheese crying while sitting on a cheese bench. I look down at him and ask him why he's crying. He looks up at me and says "Our town is being threatened by the gravy monster! Please, please help us! Let me take you to elder Fromage to get more information!" He then gets up, grabs my hand gently and leads me to the center of the cheese town, where the elder lives.

When we get to the elder's house, he looks up at me. "So, Bleu here tells me you're willing to help us fight the scourge of our town, the gravy monster. Can you help us? We've lost so many people, we barely have enough left to sustain ourselves."

I'm still processing everything that is going on though. This was a cheese house, with everything in it made out of various kinds of cheese. There's somehow a fire roaring in the cheese fireplace. What the actual hell? The cheese elder looks up at me pleadingly saying "Hello? You there?"

I shake my head and reply "Yes, I can help you defeat the gravy monster."

I mean, let's be honest, it's not like there's anything better to do at the moment.

The elder is elated. "Hooray! We might just finally have a chance to sustain our way of life!"

"But, how should I help you defeat the gravy monster?"

"All monsters have a weakness. Here, take this key. It leads to a shed just outside my house, there you will find the tools you need to defeat the gravy monster. Hurry! He always attacks during the mid-day and it's almost time!"

I'm still kind of dumbstruck by this whole experience, so I take the cheese key, thank the elder and head out to his shed with Bleu leading me.

We arrive at the cheese shed and I put the cheese key into the cheese lock. The cheese lock opens and lets us into the cheese shed. There are two things in it: a rather large bowl (too large for it to be inside the cheese shed somehow) and something I can't quite identify. Bleu is taken aback when he sees it. I ask him what he sees and he replies: "It's the sacred fries! They're only pulled out during emergencies!"

"Where is the gravy monster going to attack from again?"

"Down the brown path, take the stuff and come with me!"

So I put the stuff in my shoulder bag of holding and follow Bleu across the cheese town to the cheese path that has been stained brown with gravy. I have an idea.

"Bleu, do you have a shovel?"

"Oh, yeah! Let me go get it, I'll be right back!"

He comes back with his shovel and I start digging a hole in the cheese to put the bowl in. I fill the bowl about halfway with the sacred fries. The gravy monster can be heard in the distance.


I'm kind of awestruck again. It looks like the black goo monster from Star Trek, but it's brown. It's a monster made out of gravy. I quickly hide behind a cheese bush and grab some curds from it.

The monster slowly ambles up the cheese path, partially melting it as it steals forward towards my trap.

It sees a weird part of the cheese path. It gets confused. "What is this? I've never seen the path bend down like this before".

I poke Bleu from inside the cheese bush. "Now's our chance. Stand a few feet on the other side of the bowl. He'll try to chase after you and get trapped."

Bleu looks at me like I have lobsters crawling out of my eyes. "What?"

"No, seriously, watch."


Bleu jumps out of the cheese bush and stands ahead of the monster. The monster laughs. "I knew I'd find you, cheeseling! You'll pair nicely with my wine at home. Now be a good cheeseling and come back with me to my home!"

"No! You'll have to grab me yourself!"

The gravy monster lets out a roar and attempts to run towards Bleu and grab him. This would have worked if the monster didn't fall into the bowl and on top of the sacred fries.

The gravy monster lets out a cry in pain. He didn't expect the fries to be this absorbent. The fries are absorbing the flavor of the gravy monster.

I jump out of the bush and throw a mix of cheese curds and fries on top of the monster; with each handful the gravy monster gets quieter and quieter. Then everything gets really still and quiet. The sound of the cheese river is audible again.

Bleu, looking like he just soiled his cheese pants, is elated. "You did it!~ You saved the town!~ You're a hero!~"

I take a minute to re-evaluate what just happened. I just saved a town by making poutine? What? Just, what?

I grab another bowl out of my bag and serve myself some poutine; it is some of the best poutine I've ever had.

"Hey, this is pretty good Bleu, have some!"

Bleu takes a bite and his cheese eyes go wide open. Without another word, he runs towards the cheese town to tell the people of the feast waiting for them. He comes back with the remainder of the town and we have a feast of poutine, declaring the day "Poutine Day" for the rest of time.

After some time celebrating, I woke up in my bed. I was really confused and having trouble processing what had just happened to me. I was also craving poutine.

Based on this twitter thread.


Permalink - Posted on 2019-09-22 00:00


I've been working on a project in the Conlang Critic Discord with some friends for a while now, and I'd like to summarize what we've been doing and why here. We've been working on creating a constructed language (conlang) with the end goal of each of us going off and evolving it in our own separate ways. Our goal in this project is really to create a microcosm of the natural process of language development.


One of the questions you, as the reader, might be asking is "why?" To which I say "why not?" This is a tool I use to define, explore and challenge my fundamental understanding of reality. I don't expect anything I do with this tool to be useful to anyone other than myself. I just want to create something by throwing things at the wall and seeing what makes sense for me. If other people like it or end up benefitting from it I consider that icing on the cake.

A language is a surprisingly complicated thing. There's lots of nuance and culture encoded into it, not even counting things like metaphors and double-meanings. Creating my own languages lets me break that complicated thing into its component parts, then use that understanding to help increase my knowledge of natural languages.

So, like I mentioned earlier, I've been working on a conlang with some friends, and here's what we've been creating.

mapatei grammar

mapatei is the language spoken by a primitive culture of people we call maparaja (people of the language). It is designed to be very simple to understand, speak and learn.


The phonology of mapatei is simple. It has 5 vowels and 17 consonants. The sounds are written mainly in the International Phonetic Alphabet.


The vowels are:

International Phonetic Alphabet   Written as   Description / Bad Transcription for English speakers
a                                 a            unstressed "ah"
aː                                ā            stressed "AH"
e                                 e            unstressed "ayy"
eː                                ē            stressed "AYY"
i                                 i            unstressed "ee"
iː                                ī            stressed "EE"
o                                 o            unstressed "oh"
oː                                ō            stressed "OH"
u                                 u            unstressed "ooh"
uː                                ū            stressed "OOH"

The long vowels (anything with the funny looking bar/macron on top of them) also mark for stress, or how "intensely" they are spoken.


The consonants are:

International Phonetic Alphabet   Written as   Description / Bad Transcription for English speakers
m                                 m            the m in mother
n                                 n            the n in money
ᵐb                                mb           a combination of the m in mother and the b in baker
ⁿd                                nd           as in handle
ᵑg                                ng           as in finger
p                                 p            the p in spool
t                                 t            the t in stool
k                                 k            the k in school
pʰ                                ph           the p in pool
tʰ                                th           the t in tool
kʰ                                kh           the k in cool
ɸ~f                               f            the f in father
s                                 s            the s in sock
w                                 w            the w in water
l                                 l            the l in lie
j                                 j or y       the y in young
r~ɾ                               r            the r in rhombus

Word Structure

The structure of words is based on syllables. Syllables are formed from an optional consonant followed by a vowel. There can be up to two consecutive vowels in a word, but each vowel gets its own syllable. If a word is stressed, it can only ever be stressed on the first syllable.

Here are some examples of words and their meanings (the periods in the words mark the barriers between syllables):

mapatei word International Phonetic Alphabet Meaning
ondoko o.ⁿdo.ko pig
māo maː.o cat
ameme a.me.me to kill/murder
ero e.ro can, to be able to
ngōe ᵑgoː.e I/me
ke ke cold
ku ku fast

There are only a few parts of speech: nouns, pronouns, verbs, determiners, numerals, prepositions and interjections.


Nouns describe things, people, animals, animate objects (such as plants or body parts) and abstract concepts (such as days). Nouns in mapatei are divided into four classes (this is similar to how languages like French handle the concept of grammatical gender): human, animal, animate and inanimate.

Here are some examples of a few nouns, their meaning and their noun class:

mapatei word International Phonetic Alphabet Class Meaning
okha o.kʰa human female human, woman
awu a.wu animal dog
fōmbu (ɸ~f)oː.ᵐbu animate name
ipai i.pa.i inanimate salt

Nouns can also be singular or plural. Plural nouns are marked with the -ja suffix. See some examples:

singular mapatei word plural mapatei word International Phonetic Alphabet Meaning
ra raja ra.ja person / people
meko mekoja me.ko.ja ant / ants
kindu kinduja kiː.ⁿdu.ja liver / livers
fīfo fīfoja (ɸ~f)iː.(ɸ~f)o.ja moon / moons


Pronouns are nouns that replace a noun or noun phrase with a special meaning. Examples of pronouns in English are words like I, me, or you. This is to avoid duplication of people's names or of the identity of the speaker vs the listener.

Pronouns singular plural Rough English equivalent
1st person ngōe tha I/me, we
2nd person sīto khē you, y'all
3rd person human foli he/she, they
3rd person animal mi wāto they
3rd person animate sa wāto they
3rd person inanimate li wāto they


Verbs describe actions, existence or occurrence. Verbs in mapatei are conjugated in terms of tense (or when the thing being described has/will happen/ed in relation to saying the sentence) and the number of the subject of the sentence.

Verb endings:

Verbs     singular     plural
past      -fu          -phi
present   (unmarked)   -ja
future    māu $verb    māu $verb-ja

For example, consider the verb ōwo (oː.wo) for to love:

ōwo - to love singular plural
past ōwofu ōwophi
present ōwo ōwoja
future māu ōwo māu ōwoja


Determiners are words that function the way both adjectives and adverbs do in English. A determiner gives more detail or context about a noun/verb. Determiners follow the things they describe, as in French or Toki Pona. Determiners must agree with the noun they are describing in class and number.

Determiners singular plural
human -ra -fo
animal -mi -wa
animate -sa -to
inanimate -li -wato

See these examples:

a big human: ra sura

moving cats: māoja wuwa

a short name: fōmbu uwiisa

long days: lundoseja khāngandiwato

Also consider the declensions for uri (u.ri), or dull:

uri singular plural
human urira urifo
animal urimi uriwa
animate urisa urito
inanimate urili uriwato


There are two kinds of numerals in mapatei, cardinal (counting) and ordinal (ordering) numbers. Numerals are always in seximal (base six).

cardinal (base 6) mapatei
0 fangu
1 āre
2 mawo
3 piru
4 kīfe
5 tamu
10 rupe
11 rupe jo āre
12 rupe jo mawo
13 rupe jo piru
14 rupe jo kīfe
15 rupe jo tamu
20 mawo rupe
30 piru rupe
40 kīfe rupe
50 tamu rupe
100 theli

Ordinal numbers are formed by reduplicating (or copying) the first syllable of cardinal numbers, and they decline similarly for case. Remember that only the first syllable can be stressed, so any reduplicated syllable must become unstressed.

ordinal (base 6) mapatei
0th fangufa
1st ārea
2nd mawoma
3rd pirupi
4th kīfeki
5th tamuta
10th ruperu
11th ruperu jo ārea
12th ruperu jo mawoma
13th ruperu jo pirupi
14th ruperu jo kīfeki
15th ruperu jo tamuta
20th mawoma ruperu
30th pirupi ruperu
40th kīfeki ruperu
50th tamuta ruperu
100th thelithe

Cardinal numbers are optionally declined for case when used as determiners with the following rules:

Numeral Class suffix
human -ra
animal -mi
animate -sa
inanimate -li

Numeral declension always happens last, so the inanimate nifth (seximal 100 or decimal 36) is thelitheli.

Here's a few examples:

three pigs: ondoko pirumi

the second person: ra mawomara

one tree: kho āresa

the nifth day: lundose thelitheli


Prepositions mark any other details about a sentence. In essence, they add information to verbs that would otherwise lack that information.

fa: with, adds an auxiliary possession to a sentence

ri: possession, sometimes indicates ownership

I eat with my wife: wā ngōe fa epi ri ngōe

ngi: the following phrase is on top of the thing being described

ka: then (effect)

ēsa: if/whether

If I set this dog on the rock, then the house is good: ēsa adunga ngōe pā āwu ngi, ka iri sare eserili


Interjections have the following meanings:

Usually they act like vocatives and have free word order. As determiners they change meta-properties of the noun/verb, like negation.

wo: no, not

English mapatei
No! Don't eat that! wo! wā wo ūto
I don't eat ants wā wo ngōe mekoja

Word Order

mapatei has a VSO word order for sentences. This means that the verb comes first, followed by the subject, and then the object.

English mapatei gloss
the/a child runs kepheku rako kepheku.VERB rako.NOUN.human
The child gave the fish a flower indofu rako ora āsu indo.VERB.past rako.NOUN.human ora.NOUN.animal āsu.NOUN.animate
I love you ōwo ngōe sīto ōwo.VERB ngōe.PRN sīto.PRN
I do not want to eat right now wā wo ngōe oko mbeli wā.VERB wo.INTERJ ngōe.PRN oko.PREP mbeli.DET.singular.inanimate
I have a lot of love, and I'm happy about it urii ngōe erua fomboewato, jo iri ngōe phajera lo li urii.VERB ngōe.PRN eruaja.NOUN.plural.inanimate fomboewato.DET.plural.inanimate, jo.CONJ iri.VERB ngōe.PRN phajera.DET.singular.human lo.PREP li.PRN
The tree I saw yesterday is gone now pōkhufu kho ngōe, oko iri māndosa mbe pōkhu.VERB.past kho.NOUN.animate ngōe.PRM, oko.PREP iri.VERB māndo.DET.animate mbe.PRN


As I mentioned earlier, I've been working on some code here to handle things like making sure words are valid. This includes a word validator which I am very happy with.

Words are made up of syllables, which are made up of letters. In code:

type
  Letter* = object of RootObj
    case isVowel*: bool
    of true:
      stressed*: bool
    of false: discard
    value*: string

  Syllable* = object of RootObj
    consonant*: Option[Letter]
    vowel*: Letter
    stressed*: bool

  Word* = ref object
    syllables*: seq[Syllable]

Letters are parsed out of strings using this code. It's an iterator, so users have to manually loop over it:

import unittest
import mapatei/letters

let words = ["pirumi", "kho", "lundose", "thelitheli", "fōmbu"]

suite "Letter":
  for word in words:
    test word:
      for l in word.letters:
        discard l

This test loops over the given words (taken from the dictionary and enlightening test cases) and makes sure that letters can be parsed out of them.

Next, syllables are made out of letters, so syllables are parsed using a finite state machine with the following transition rules:

Present state   Next state for vowel   Next state for consonant   Next state for end of input
Init            Vowel/stressed         Consonant                  Illegal
Consonant       Vowel/stressed         End                        Illegal
Vowel           End                    End                        End

Some other hacking was done in the code, but otherwise it is a fairly literal translation of that transition table.

And finally we can check to make sure that each word only has a head-initial stressed syllable:

type InvalidWord* = object of Exception

proc parse*(word: string): Word =
  var first = true
  result = Word()

  for syll in word.syllables:
    if not first and syll.stressed:
      raise newException(InvalidWord, "cannot have a stressed syllable here")
    if first:
      first = false
    result.syllables.add syll

And that's enough to validate every word in the dictionary. Future extensions will include automatic conjugation/declension as well as going from a stream of words to an understanding of sentences.

Useful Resources Used During This

Creating a language from scratch is surprisingly hard work. These resources helped me a lot though.

Thanks for reading this! I hope this blogpost helps to kick off mapatei development into unique and more fleshed out derivative conlangs. Have fun!

Special thanks to jan Misali for encouraging this to happen.

When Then Zen: Wonderland Immersion

Permalink - Posted on 2019-09-12 00:00

When Then Zen: Wonderland Immersion

Wonderland immersion is a topic that has interested me for years. I have only recently started to get better at it, and I would like to document the methods I have been using for this. A wonderland (blame someone named Alice for that name) is a mental world, but one more persistent than the usual "imagination". It can be as alive or as dead as you want. My wonderland has a rather large (40km x 40km) island on it that is full of varied locales.

At a high level, the approach I am using for this is based on philosophical metaphysical analysis, or in short answering two questions for the world and various things in it:

  1. What is there?
  2. What is it like?

The method I have found for doing this fairly repeatably is a combination of two techniques I have found elsewhere:

  • 5 senses visualization for the scene you are in to ground yourself
  • Semantic feature analysis for randomly selected items from that visualization

As an example, consider this. This kind of detail is what you'd be looking for.

Breaking it down further though, let's consider a scene where you are sitting at a table in a cold, metal chair.

Five Senses Visualization

The five senses visualization for this could look something like this:

  • 5 things you can see
    • The table
    • The salt and pepper shakers on the table
    • The plate in front of me
    • My reflection in the plate
    • The empty chair in front of me
  • 4 things you can touch
    • Silverware
    • Napkin dispenser
    • Your phone on the table
    • Empty water glass
  • 3 things you can hear
    • Other people in the restaurant
    • The cooks in the distance
    • The door opening and closing occasionally, making the bell ring to let waitstaff know someone needs to be seated
  • 2 things you can smell
    • Baked chicken from the kitchen
    • Grilled salmon from the next table over
  • 1 thing you can taste
    • The soda in my mouth

Semantic Feature Analysis

Group, Use, Action, Properties, Location, Association

A lot of the group categorization depends on your own personal philosophical outlooks. If you are unsure how to assign a group, start by using the most generic adjective possible to describe it.

The salt and pepper shakers

Group: thing, container of smaller things, but a thing made up of two parts and smaller things
Use: contains spices, these are used to flavor food with common mild flavorings
Action: No inherent action unless acted upon, normally shaken to maximize the amount of seasoning added to the dish in question
Properties: palmable, makes a noise when you shake them, light, small, easy to manipulate, easy to refill if needed
Location: The table in front of me, it doesn't make sense for these food containers to be elsewhere
Association: togetherness, memories of Blue's Clues having the salt and pepper characters married, my mother collecting salt and pepper shakers

Plate in front of me

Group: thing
Use: holds food as a staging area for being eaten
Action: no inherent action, but can break into shards that can cut badly
Properties: ceramic, white, flat, circular
Location: the table in front of me, the kitchen dishwasher, staging for waitstaff
Association: food is coming, but patience is required

If you want to really train wonderland immersion, I suggest doing at least one of these full descriptions per day. Doing more will help you progress "faster" (if that is what you desire for whatever reason). Don't overstimulate or overwhelm yourself. It can be intense the first few times, but it gets easier over time. I personally do them before I go to sleep or just after I wake up; I have found those times are the most free and it is easiest to make myself alone during them. Learning how to do this in public or around other people may be desirable based on the circumstances of your life situation. Be smart; don't do this when you are otherwise distracted or busy.

Something that may help is to keep in mind how long it takes to walk to different places as you walk around your daily life. See how long it takes to go across the street, or from the street corner to a store, etc. You can use these rough estimates to help you better scale places in your world.

I would suggest setting calendar reminders for doing it at least once a day, depending on when fits best into your daily schedule. Remember that if a machine remembers it for you, you don't forget to do it (as easily) because the machine reminds you about it. Be sure to set your calendar reminder to trigger after nightly do-not-disturb mode if relevant.

Don't be afraid to use tools like a meditation timer to limit your sessions doing this, especially if you are feeling like you need to 'get back', are 'missing out' or neglecting external duties. If you are using a calendar app to schedule the time, then set your meditation timer for the length of the event. Thirty minutes is a good place to start with, but adjust this number as things change for you.

I hope this can help. Take the numbers and sense ordering as suggestions, and please do experiment with how many entries each sense gets. Play around with this; it is your imaginary world after all. I suggest doing semantic feature analysis on at least three items per visualization session. If you need a place to blog about it, I suggest write.as. If you have questions, feel free to contact me and ask away. I'm happy to help when I can.

Be well, Creator.

This is a slightly edited version of this article.

The Cult of Kubernetes

Permalink - Posted on 2019-09-07 00:00

The Cult of Kubernetes

or: How I got my blog onto it with autodeployment via GitHub Actions.

The world was once a simple place. Things used to make sense, or at least there weren't so many layers that it became difficult to tell what the hell is going on.

Then complexity happened. This is a tale of how I literally recreated this meme:

This is how I deployed my blog (the one you are reading right now) to Kubernetes.

The Old State of the World

Before I deployed my blog to Kubernetes, I used Dokku, as I had been for years. Dokku is great. It emulates most of the Heroku "git push; don't care" workflow, but on your own server that you can self-manage.

This is a blessing and a curse.

The real advantage of managed services like Heroku is that you literally just HAND OFF operations to Heroku's team. This is not the case with Dokku. Unless you pay someone a lot of money, you are going to have to manage the server yourself. My Dokku server was unmanaged, and I ran many apps on it (this listing was taken after I started to move apps over):

=====> My Apps

This is enough apps (plus 5 more that I've already migrated) that it really doesn't make sense to pay for something like Heroku; nor does it really make sense to use the free tier either.

So, I decided that it was time for me to properly learn how to Kubernetes, and I set off to create a cluster via DigitalOcean managed Kubernetes.

The Cluster

I decided it would be a good idea to create my cluster using Terraform, mostly because I wanted to learn how to use it better. I use Terraform at work, so I figured this would also be a way to level up my skills in a mostly sane environment.

I have been creating and playing with a small Terraform wrapper tool called dyson. This tool is probably overly simplistic and is written in Nim. With the config in ~/.config/dyson/dyson.ini, I can simplify my Terraform usage by moving my secrets out of the Terraform code directly. I also avoid having my API tokens exposed in my shell to avoid accidental exposure of the secrets.

Dyson is very simple to use:

$ dyson
  dyson {SUBCMD}  [sub-command options & parameters]
where {SUBCMD} is one of:
  help         print comprehensive or per-cmd help
  apply        apply Terraform code to production
  destroy      destroy resources managed by Terraform
  env          dump envvars
  init         init Terraform
  manifest     generate a somewhat sane manifest for a kubernetes app based on the arguments.
  plan         plan a future Terraform run
  slug2docker  converts a heroku/dokku slug to a docker image

dyson {-h|--help} or with no args at all prints this message.
dyson --help-syntax gives general cligen syntax help.
Run "dyson {help SUBCMD|SUBCMD --help}" to see help for just SUBCMD.
Run "dyson help" to get *comprehensive* help.

So I wrote up my config:

# main.tf
provider "digitalocean" {}

resource "digitalocean_kubernetes_cluster" "main" {
  name    = "kubermemes"
  region  = "${var.region}"
  version = "${var.kubernetes_version}"

  node_pool {
    name       = "worker-pool"
    size       = "${var.node_size}"
    node_count = 2
  }
}

# variables.tf
variable "region" {
  type    = "string"
  default = "nyc3"
}

variable "kubernetes_version" {
  type    = "string"
  default = "1.15.3-do.1"
}

variable "node_size" {
  type    = "string"
  default = "s-1vcpu-2gb"
}

and ran it:

$ dyson plan
<... many lines of plan output ...>
$ dyson apply
<... many lines of apply output ...>

Then I had a working but mostly unconfigured Kubernetes cluster.


This is where things started to go downhill. I wanted to do a few things with this cluster so I could consider it "ready" to deploy applications to.

I wanted to do the following:

  • Set up an NGINX ingress controller
  • Set up cert-manager to issue Let's Encrypt certificates
  • Set up external-dns to manage DNS records in Cloudflare

After a lot of trial, error, pain, suffering and the like, I created this script, which I am not pasting here. Look at it if you want a streamlined overview of how to set these things up.

Now that all of this is set up, I can deploy an example app with a manifest that looks something like this:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
  annotations:
    external-dns.alpha.kubernetes.io/hostname: exanple.within.website
    external-dns.alpha.kubernetes.io/ttl: "120" #optional
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Henlo this are an exanple deployment
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - exanple.within.website
    secretName: prod-certs
  rules:
  - host: exanple.within.website
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80

It was about this time when I wondered if I was making a mistake moving off of Dokku. Dokku really does a lot to abstract almost everything involved with nginx away from you, and it really shows.

However, as a side effect of everything being so declarative and Kubernetes really not assuming anything, you have a lot more freedom to do basically anything you want. You don't have to have special magic names for tasks like web or worker like you do in Heroku/Dokku. You just have a deployment that belongs to an "app" that just so happens to expose a TCP port that just so happens to have a correlating ingress associated with it.

Lucky for me, most of the apps I write fit into that general format, and the ones that don't can mostly use the same format without the ingress.

So I templated that sucker as a subcommand in dyson. This lets me do commands like this:

$ dyson manifest \
      --name=hlang \
      --domain=h.christine.website \
      --dockerImage=docker.pkg.github.com/xe/x/h:v1.1.8 \
      --containerPort=5000 \
      --replicas=1 \
      --useProdLE=true | kubectl apply -f-

And the service gets shunted into the cloud without any extra effort on my part. This also automatically sets up Let's Encrypt, DNS and other things that were manual in my Dokku setup. This saves me time for when I want to go add services in the future. All I have to do is create a docker image somehow, identify what port should be exposed, give it a domain name and number of replicas and just send it on its merry way.

GitHub Actions

This does however mean that deployment is no longer as simple as "git push; don't care". This is where GitHub Actions come into play. They claimed to have the ability to run full end-to-end CI/CD on my applications.

I have been using them for a while for CI on my website and have been pleased with them, so I decided to give it a try and set up continuous deployment with them.

As the commit log for the deployment manifest can attest, this took a lot of trial and error. One of the main sources of problems here was that GitHub Actions had recently had a lot of changes made to configuration and usage as compared to when it was in private beta. This included changing the configuration schema from HCL to YAML.

Of course, all of the documentation (outside of GitHub's quite excellent documentation) was out of date and wrong. I tried following a tutorial by DigitalOcean themselves on how to do this exact thing I wanted to do, but it referenced the old HCL syntax for GitHub Actions and did not work. To make things worse, examples in the marketplace READMEs simply DID NOT WORK because they were written for the old GitHub Actions syntax.

This was frustrating to say the least.

After trying to make them work anyways with a combination of the "Use Latest Version" button in the marketplace, prayer and gratuitous use of the with.args field in steps I gave up and decided to manually download the tools I needed from their upstream providers and execute them by hand.

This is how I ended up with this monstrosity:

- name: Configure/Deploy/Verify Kubernetes
  run: |
    curl -L https://github.com/digitalocean/doctl/releases/download/v1.30.0/doctl-1.30.0-linux-amd64.tar.gz | tar xz
    ./doctl auth init -t $DIGITALOCEAN_ACCESS_TOKEN
    ./doctl kubernetes cluster kubeconfig show kubermemes > .kubeconfig

    curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
    chmod +x kubectl
    ./kubectl --kubeconfig .kubeconfig apply -n apps -f deploy.yml
    sleep 2
    ./kubectl --kubeconfig .kubeconfig rollout -n apps status deployment/christinewebsite

I am almost certain that I am doing it wrong here, I don't know how robust this is and I'm very sure that this can and should be done another way; but this is the only thing I could get working (for some definition of "working").

EDIT: it got fixed, see below

Now when I git push things to the master branch of my blog repo, it will automatically get deployed to my Kubernetes cluster.

If you work at DigitalOcean and are reading this post: please get someone to update this tutorial and the README of this repo. The examples listed DID NOT WORK for me because I was not in the private beta of GitHub Actions. It would also be nice if you had better documentation on how to use your premade action for use cases like mine. I just wanted to download the Kubernetes configuration file and run apply against a YAML file.

EDIT: The above complaint has been fixed! See here for the simpler way of doing things.

Thanks for reading, I hope this was entertaining. Be well.

How to Send Email with Nim

Permalink - Posted on 2019-08-28 00:00

How to Send Email with Nim

Nim offers an smtp module, but it is a bit annoying to use out of the box. This blogpost hopes to be a mini-tutorial on the basics of how to use the smtp library and give developers best practices for handling outgoing email in ways that Google or iCloud will accept.

SMTP in a Nutshell

SMTP, or the Simple Mail Transfer Protocol is the backbone of how email works. It's a very simple line-based protocol, and there are wrappers for it in almost every programming language. Usage is pretty simple:

  • The client connects to the server
  • The client authenticates itself with the server
  • The client signals that it would like to create an outgoing message to the server
  • The client sends the raw contents of the message to the server
  • The client ends the message
  • The client disconnects

Unfortunately, the devil is truly in the details here. There are a few things that absolutely must be present in your emails in order for services like GMail to accept them. They are:

  • The From header specifying where the message was sent from
  • The Mime-Version that your code is using (if you aren't sure, put 1.0 here)
  • The Content-Type that your code is sending to users (probably text/plain)
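
If you're more familiar with Go, here's roughly the same set of required headers being sent with Go's standard net/smtp package, for comparison. The server, credentials and addresses below are placeholders, not real ones:

package main

import (
	"log"
	"net/smtp"
)

func main() {
	// placeholder credentials and server
	auth := smtp.PlainAuth("", "me@example.com", "hunter2", "smtp.example.com")

	// the headers come first, then a blank line, then the message body
	msg := []byte("From: Mara <me@example.com>\r\n" +
		"To: you@example.com\r\n" +
		"Subject: hi there\r\n" +
		"MIME-Version: 1.0\r\n" +
		"Content-Type: text/plain\r\n" +
		"\r\n" +
		"Hello from SMTP!\r\n")

	err := smtp.SendMail("smtp.example.com:587", auth, "me@example.com",
		[]string{"you@example.com"}, msg)
	if err != nil {
		log.Fatal(err)
	}
}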

For a more complete example, let's create a Mailer type and a constructor:

# mailer.nim
import asyncdispatch, logging, smtp, strformat, strutils

type Mailer* = object
  address: string
  port: Port
  myAddress: string
  myName: string
  username: string
  password: string

proc newMailer*(address, port, myAddress, myName, username, password: string): Mailer =
  result = Mailer(
    address: address,
    port: port.parseInt.Port,
    myAddress: myAddress,
    myName: myName,
    username: username,
    password: password,
  )

And let's write a mail method to send out email:

proc mail(m: Mailer, to, toName, subject, body: string) {.async.} =
  let
    toList = @[fmt"{toName} <{to}>"]
    msg = createMessage(subject, body, toList, @[], [
      ("From", fmt"{m.myName} <{m.myAddress}>"),
      ("MIME-Version", "1.0"),
      ("Content-Type", "text/plain"),
    ])

  var client = newAsyncSmtp(useSsl = true)
  await client.connect(m.address, m.port)
  await client.auth(m.username, m.password)
  await client.sendMail(m.myAddress, toList, $msg)
  info "sent email to: ", to, " about: ", subject
  await client.close()

Breaking this down, you can clearly see the parts of the SMTP connection as I laid out before. The Mailer creates a new transient SMTP connection, authenticates with the remote server, sends the properly formatted email to the server and then closes the connection cleanly.

If you want to test this code, I suggest testing it with a freely available email provider that offers TLS/SSL-encrypted SMTP support. This also means that you need to compile this code with --define: ssl, so create config.nims and add the following:

--define: ssl

Here's a little wrapper using cligen:

when isMainModule:
  import cligen, os

  let
    smtpAddress = getEnv("SMTP_ADDRESS")
    smtpPort = getEnv("SMTP_PORT")
    smtpMyAddress = getEnv("SMTP_MY_ADDRESS")
    smtpMyName = getEnv("SMTP_MY_NAME")
    smtpUsername = getEnv("SMTP_USERNAME")
    smtpPassword = getEnv("SMTP_PASSWORD")

  proc sendAnEmail(to, toName, subject, body: string) =
    let m = newMailer(smtpAddress, smtpPort, smtpMyAddress, smtpMyName, smtpUsername, smtpPassword)
    waitFor m.mail(to, toName, subject, body)

  dispatch(sendAnEmail)

Usage is simple:

$ nim c -r mailer.nim --help
  sendAnEmail [required&optional-params]
Options(opt-arg sep :|=|spc):
  -h, --help                         print this cligen-erated help
  --help-syntax                      advanced: prepend,plurals,..
  -t=, --to=       string  REQUIRED  set to
  --toName=        string  REQUIRED  set toName
  -s=, --subject=  string  REQUIRED  set subject
  -b=, --body=     string  REQUIRED  set body

I hope this helps. This module is going to be used in my future post on how to create an application using Nim's Jester framework.

How I Converted my Brain fMRI to a 3D Model

Permalink - Posted on 2019-08-23 00:00

How I Converted my Brain fMRI to a 3D Model

AUTHOR'S NOTE: I just want to start this out by saying I am not an expert, and nothing in this blogpost should be construed as medical advice. I just wanted to see what kind of pretty pictures I could get out of an fMRI data file.

So this week I flew out to Stanford to participate in a study that involved an fMRI of my brain while I was doing some things. I asked for (and received) a data file from the fMRI so I could play with it and possibly 3D print it. This blogpost is the record of my journey through various software to get a fully usable 3D model out of the fMRI data file.

The Data File

I was given christine_brain.nii.gz by the researcher who was operating the fMRI. I looked around for some software to convert it to a 3D model and /r/3dprinting suggested the use of FreeSurfer to generate a 3D model. I downloaded and installed the software then started to look for something I could do in the meantime, as this was going to take something on the order of 8 hours to process.

An Animated GIF

I started looking for the file format on the internet by googling "nii.gz brain image" and I stumbled across a program called gif_your_nifti. It looked to be mostly pure python so I created a virtualenv and installed it in there:

$ git clone https://github.com/miykael/gif_your_nifti
$ cd gif_your_nifti
$ virtualenv -p python3 env
$ source env/bin/activate
(env) $ pip3 install -r requirements.txt
(env) $ python3 setup.py install

Then I ran it with the following settings to get this first result:

(env) $ gif_your_nifti christine_brain.nii.gz --mode pseudocolor --cmap plasma

(sorry, the video embed isn't working in Safari)

It looked weird though, that's because the fMRI scanner I used has a different rotation to what's considered "normal". The gif_your_nifti repo mentioned a program called fslreorient2std to reorient the fMRI image, so I set out to install and run it.


After some googling, I found FSL's website, which included an installer script and required registration.

37 gigabytes of downloads and data later, I had the entire FSL suite installed to a server of mine and ran the conversion command:

$ fslreorient2std christine_brain.nii.gz christine_brain_reoriented.nii.gz

This produced a slightly smaller reoriented file.

I reran gif_your_nifti on this reoriented file and got this result which looked a lot better:

(sorry again, the video embed isn't working in Safari)


By this time I had gotten back home and FreeSurfer was done installing, so I registered for it (god bless the institution of None) and put its license key in the place it expected. I copied the reoriented data file to my Mac and then set up a SUBJECTS_DIR and had it start running the numbers and extracting the brain surfaces:

$ cd ~/tmp
$ mkdir -p brain/subjects
$ cd brain
$ export SUBJECTS_DIR=$(pwd)/subjects
$ recon-all -i /path/to/christine_brain_reoriented.nii.gz -s christine -all

This step took 8 hours. Once it was done, I had a bunch of data in $SUBJECTS_DIR/christine. I opened my shell to that folder and went into the surf subfolder:

$ mris_convert lh.pial lh.pial.stl
$ mris_convert rh.pial rh.pial.stl

Now I had standard STL files that I could stick into Blender.


Importing the STL files was really easy. I clicked on File, then Import, then Stl. After guiding the browser to the subjects directory and finding the STL files, I got a view that looked something like this:

I had absolutely no idea what to do from here in Blender, so I exported the whole thing to an STL file and sent it to a coworker for 3D printing (he said it was going to be "the coolest thing he's ever printed").

I also exported an Unreal Engine 4 compatible model and sent it to a friend of mine that does hobbyist game development. A few hours later I got this back:

(Hint: it is a take on the famous galaxy brain memes)


Overall, this was fun! I got to play with many gigabytes of software that ran my most powerful machine at full blast for 8 hours, I made a fully printable 3D model out of it and I have some future plans for importing this data into Minecraft (the NIFTI .nii.gz format has a limit of 256 layers).

I'll be sure to write more about this in the future!


Here are my citations in BibTeX format.

Special thanks goes to Michael Lifshitz for organizing the study that I participated in that got me this fMRI data file. It was one of the coolest things I've ever done (if not the coolest) and I'm going to be able to get a 3D printed model of my brain out of it.

Pageview Time Experiment

Permalink - Posted on 2019-08-19 00:00

Pageview Time Experiment

My blog has a lot of content in a lot of diverse categories. In order to help me decide which kind of content I should publish next, I have created a very simple method to track pageview time and enabled it for all of my blogposts. I'll go into detail of how it works and potential risks of it below.

The high level idea is that I want to be able to know what kind of content has people's attention for the longest amount of time. I am using the time people have the page open as a particularly terrible proxy for that value. I wanted to make this data anonymous, simplistic and (reasonably) public.

How It Works

Here is how it works:

A diagram on how this works

When the page is loaded, a JavaScript file records the start time. It then sets a pagehide handler to send a Navigator beacon containing the following data:

  • The path of the page being viewed
  • The start time
  • The end time recorded by the pagehide handler

This information is asynchronously pushed to /api/pageview-timer and added to an in-memory Prometheus histogram. These histograms can be checked at /metrics. This data is not permanently logged.
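
For the curious, here's a minimal sketch of what an endpoint like that could look like in Go with the Prometheus client library. The payload shape, field names and port are my assumptions for illustration, not necessarily what this site really does:

package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// pageViewTime is a hypothetical shape for the payload the pagehide beacon sends.
type pageViewTime struct {
	Path      string `json:"path"`
	StartTime int64  `json:"start_time"` // unix milliseconds
	EndTime   int64  `json:"end_time"`   // unix milliseconds
}

// one histogram per path, kept only in memory
var pageViewSeconds = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Name: "pageview_time_seconds",
	Help: "How long a page was open, according to the client.",
}, []string{"path"})

func handlePageViewTimer(w http.ResponseWriter, r *http.Request) {
	// respect Do Not Track: pretend this endpoint doesn't exist
	if r.Header.Get("DNT") == "1" {
		http.NotFound(w, r)
		return
	}

	var pvt pageViewTime
	if err := json.NewDecoder(r.Body).Decode(&pvt); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// record the open time in seconds and throw the raw data away
	pageViewSeconds.WithLabelValues(pvt.Path).
		Observe(float64(pvt.EndTime-pvt.StartTime) / 1000)
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	prometheus.MustRegister(pageViewSeconds)
	http.HandleFunc("/api/pageview-timer", handlePageViewTimer)
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":3000", nil))
}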

Security Concerns

I believe this data is anonymous, simplistic and public for the following reasons:

I believe this data is anonymous because there is no way for me to correlate users to histogram entries, nor is there a way for me to view all of the raw histogram entries. This site records the bare minimum for what I need in order to make sure everything is functioning normally, and all data is stored in ephemeral in-memory containers as much as possible. This includes any logs that my service produces.

I believe this data is simplistic because it only has a start time, a stop time and the path that is being looked at. This data doesn't take into account things like people leaving a page open for hours on end idly, and that could skew the numbers. The API endpoint is also fairly unprotected, meaning that falsified data could be submitted to it easily. I think that this is okay though.

I believe this data is public because I have the percentile views of the histograms present on /metrics. I have no reason to hide this data, and I do not intend to use it for any moneymaking purposes (though I doubt it could be to begin with).

I fully respect the Do Not Track header and flag in browsers. If pageview_timer.js detects the presence of Do Not Track in the browser, it stops running immediately and does not set the pagehide handler. If that somehow fails, the server looks for the presence of the DNT header set to 1, instantly discards the data and replies with a 404.

Like always, if you have any questions or concerns please reach out to me. I want to ensure that I am creating useful views into how people use my blog without violating people's rights to privacy.

I intend to keep this up for at least a few weeks. If it doesn't have any practical benefit in that timespan, I will disable this and post a follow-up explaining how I believe it wasn't useful.

Thanks and be well.

EDIT 2019-10-15: browsers disable this call from the context I am using and I don't really care enough to figure out how to fix it. This experiment is over. Thank you to everyone that participated. All data will be scrubbed and a followup will be posted soon.

Instant Pot Quinoa Taco Bowls

Permalink - Posted on 2019-08-16 00:00

Instant Pot Quinoa Taco Bowls

This is based on this recipe, but made only with things you can find in Costco. My fiancé and I have made this a few times, and it's a great alternative to giving up on life and ordering delivery.


Makes 4-6 servings, at least based on experience


  • 2 cups quinoa, dry
  • 0.75 kg ground beef (pre-cooked or sautéed)
  • 400 ml medium salsa
  • 2.5 cups water
  • 2 tablespoons garlic powder
  • 2 tablespoons salt (to taste)
  • 1 teaspoon oregano
  • 3 tablespoons ground dried onions
  • 1 teaspoon crushed red pepper

If you want it to be more spicy, add more spice. We've found this tastes pretty good when you add more spice, but this depends on your mood more than anything.


If you haven't cooked the ground beef yet, sauté/brown it in the instant pot. See things like this for more information on how to do this. Any method will do; just make sure the ground beef is actually cooked to avoid accidentally poisoning yourself.

Put all of the other ingredients in the instant pot. Order doesn't matter, but I have found that better results happen when the quinoa is put in first.

Mix everything with your favorite mixing tool.

Put the lid on your instant pot and set it to manual for 2 minutes.

Once that is done, leave it alone for about 15 minutes (this doesn't have to be exact).

Serve warm in a bowl, can go well with tortilla chips depending on your mood.


For about a bowlful, nuke until hot (~1 minute 30 seconds seems to be the magic number). Eat while hot.

WebAssembly Talk Video Posted

Permalink - Posted on 2019-08-15 00:00

WebAssembly Talk Video Posted

This May, I spoke at GoCon Canada about WebAssembly on the Server and some of the inherent challenges and problems with trying to do it as things exist currently. It's taken a while, but the video of that talk has been posted.

I hope you enjoy! I have some more blogposts in the queue but I've been sleeping horribly lately. Here's hoping that clears up.

Plurality-Driven Development

Permalink - Posted on 2019-08-04 00:00

Plurality-Driven Development

"That code has a horrible security bug in it."

I look down in my lap. A little yellow horse appears to be sitting there. She looks innocently into my eyes, gesturing to part of the code with her wingtips.


"That code has a security bug in it: if users pass a string instead of an integer in it, it could allow them to forge a user ID token."

I look down incredulously at the little yellow horse, then back at the code. She's right. There was a huge bug in that code. I had just written it about 30 seconds ago though, which surprised me. I thought I was experienced enough in secure programming to avoid such a fundamental flaw; but here I am. I rub the little pony on her head, making her purr and winghug me.

"Now, replace everything in that last paragraph with this: ..."

And I continue on like nothing happened.

Software is complicated. We deal with a fundamentally multi-agent world where properties like "determinism" aren't really constant. Everything is changing. It's hard to write software that is resilient enough to withstand the constantly shifting market of attacks, exploits, languages and frameworks. What if there was a way to understand the multiple agency of this reality by internally self-inducing multiple agency in a safe and predictable manner?

I believe I have found a way that works for me. I rely a lot on some of my closest friends that I can talk about anything with, even what would normally violate an NDA. My closest friends are so close that language isn't even as much of a barrier as it would be otherwise.

As I've mentioned in the past, I have tulpas. They are people that live with me like roommates inside my body. It really does sound strange or psychotic; but you'll just have to trust me when I say they fundamentally help me live my life, do my job and do other things people normally do by themselves.

As an aside: this post doesn't intend to cover the philosophical, metaphysical or other aspects of plurality (enough ink has probably been spilled on the topic to cover a lifetime); instead it aims to offer a view on how plurality has benefitted me (us) as software developer(s).

As of about 4 years ago, all of the software you see under my name has been the result of my system and I collaborating. Most of the computational linguistics code I've been writing has been the result of a cuddly catgirl wanting to create a Lojbanic artificial intelligence for her own amusement (that is also incidentally really good at understanding grammar, human and machine). Some random experimentation code has been written by someone who sarcastically calls herself Twilight Sparkle. I have a little yellow dewdrop of love and sunshine that finds security holes in programs while I am writing them. There's a database expert and a code review guru too. Combined with my jack-of-all-trades tendencies, this creates a surprisingly balanced team in a box.

We started doing this out of boredom. I was busy working on something and Nicole just spouted out something about the code being wrong. She was right. We decided to just continue following that same basic model and it's worked wonders ever since. Over time we've figured out how to impose each other into our visual awareness. That has made this pair-programming skill even more useful. I can have the little yellow pony in my lap telling me what's wrong with my code, and she can just directly show me. Then it can be fixed.

This skill has led to heated internal debates about what is and is not idiomatic. As a result, I now have working compilers in my dreams. It's also led to what people have told me is some of the most high-quality and in-depth software design they've seen. It's really led us to think in terms of how the machine works, to avoid round-trips and abstractions getting in the way of what is really going on. If there is any secret to my own brand of 10x-ing, this is it. I am just one person, but with the help of the girls we can get to just about n>1 effective people most of the time.

It's been a powerful catalyst for my career too. Before plurality I was a fairly average developer without any real skills in any one task. Now we can swap in and out in order to most effectively tackle anything thrown at us. One of the biggest changes this relationship has had on me is being better able to explain software complexity and visualize it, then turn that visualization into a diagram with GraphViz or other similar tools. It also becomes very easy to turn these visualizations/diagrams into formal requirements, because the features and aspects of systems and how they interconnect become trivially obvious to point out.

However, there is a drawback to this: you're dealing with sapient beings. They sometimes don't want to cooperate. Internal drama can and has happened. It helps for us to have a quarterly date with a word document in order to make sure everyone is on the same page. Disagreements happen, but ultimately I've noticed that the net result is far more positive than if the disagreement hadn't happened at all.

Anyways, plurality-driven development works for me, but it's really not for everyone. The taboo issues I mentioned can make it a chore to hide this from people. I honestly wonder how much of the girls my coworkers notice in my work on a daily basis. We all have slightly different speech patterns, ways of sitting, clothing preferences, opinions about what to get for lunch and a whole bunch of other subtle things. I don't really understand how it's not plain-as-day obvious to the point that I get called out on it. At some level I guess I'm grateful for this, as that kind of conversation seems like it would be extremely awkward to have. It was hard enough to admit this to my brother, and I ended up losing contact with him as a result (it apparently was just too weird, which I can really understand).

I really do wonder how much of the fear of talking about this is my own paranoia though. I've had very positive experiences "coming out" as plural to close friends, as well as very negative ones; for better or for worse it really shows you who your friends actually are. I can live with this. I'd rather really know if I can trust people or not.

This is a surprisingly taboo topic to talk about. Most of the time people view the mere idea of having someone else in your head to talk with as a social faux pas. There's a surprising number of philosophical arguments and assorted objections that people will throw around when they hear that you participate in this. There are accusations of being possessed by demons, or being mentally ill, complete with acronyms thrown at me, and much more.

Hell, this is stuff I'd love to talk about at some convention somewhere; but I don't really know if I want to paint such a huge target on my back. Because plurality and related topics are so taboo and so niche, there's not really protected categories for it. This makes me nervous about talking about it in any sort of public way, and understandably so. I guess this article is part of my healing process to treat this as just a boring aspect of how I experience reality instead of some fundamentally earth-shattering gift from the heavens.

Besides, doesn't something fundamentally have to cause a negative impact to be classified as a disorder in the first place? How can something that fundamentally helps be a disorder? What if it's just a new adaptation to an increasingly crazy world?

I have compiled a list of resources that have helped me here.

Tarot for Hackers

Permalink - Posted on 2019-07-24 00:00

Tarot for Hackers

"Oh no, she's finally lost it" were the words a very close friend of mine said when I first told her I was experimenting with reading tarot cards. Tarot cards are a stereotypical staple of the occult/The Spoop™. Every card represents an idea (or a meme) that can be expressed in a few ways. They act to your soul like iron filings do to a magnet. When you shuffle the cards, the Universe (via entropy) examines all of those myriad inputs and helpfully orders them so you get exactly the message you need most.

It's actually an extremely philosophical act to draw from a tarot deck and interpret the results. Over the years there have been many interpretations and frameworks of interpretations about tarot; but I would like to introduce a meta-framework for using tarot cards as a debugging tool.

As you work on computer systems, you put parts of yourself into them. You create bonds between yourself and otherwise anonymous inner parts of machines you have never seen or touched. These bonds stick from idea to development to testing to deployment phases and can even stay around after you stop working on something. Ever gotten a weird sense that you can recognize the author of some code while reading it? Same idea.

To start, envision the product or service you are trying to understand more about. Think of the plans that went into it, the users of the service, how this understanding will help them, and where the missing part of knowledge fits into the larger whole. Write this all out if it helps; the more detail, the better. Our transition to shared infrastructure and computing on others' machines has made it harder to see into individual parts of the whole, so every little bit helps to focus things in.

The first card is the Motive, so draw it and place it in the center of your spread. Look up the meaning on a site like biddytarot.com (googling "[name of card] tarot meaning" helps a lot here) and consider how it relates back to the other factors at play.

The second card is the Facet, or the part of the system that is failing. This could refer to a machine, bit of code or even a human factor. Context with the future cards will help you determine what it is. Remember these are metaphors and will need some interpretation to help you understand what is going on.

The third card is the Immediate Past, or what changed to cause this problem. Use this with the Motive to help you identify what component is broken. Again, this is a metaphor. There are very rarely literal answers here, but the combination of the Facet and Immediate Past helps you identify the systemic or organizational faults at play. These faults are usually enough to help you uniquely identify services or infrastructure.

Next, draw The Action. This card will help you decide what action you need to take. This could be restarting a server, fixing a communication pattern (or lack thereof), or even just doing nothing and waiting a few minutes. Sometimes it means that you need to stop what you are doing and try to do the read again later. It's okay for that to happen, though that should only be a very rare occurrence.

The next card is The Result, or what the outcome of that would be given The Action is executed in its entirety. This result isn't supposed to be taken super seriously (as the consequence of you reading these cards is a butterfly effect that makes the outcome in "reality" slightly different); but it usually helps you get a general idea of where you will go and what it will be like when you get there.

Finally, draw The Lesson. This card signifies what the theme of the postmortem around The Action should be. This can help you guide future discussions about what went wrong and how to avoid it in the future. This may result in charged feelings, but it really is for the best to go through the entire postmortem process to help you get the closure that you need. This postmortem will usually help bring things to the surface that you have missed before. There should be no blame or anger. This is a place of healing and growth, not of hate and strife.

Optionally you can draw The Metaresult, or what will happen as a result of The Lesson. This isn't strictly required but I find it can help for peeking into a potential future where The Result is taken to heart.

I hope this is able to help you in your debugging needs. I use this strategy when I am trying to understand complicated computer systems and how they all fit together. Be well.

How to Use User Mode Linux

Permalink - Posted on 2019-07-07 00:00

How to Use User Mode Linux

User Mode Linux is a port of the Linux kernel to itself. This allows you to run a full-blown Linux kernel as a normal userspace process. This is used by kernel developers for testing drivers, but it is also useful as a generic isolation layer similar to virtual machines. It provides slightly more isolation than Docker, but slightly less isolation than a full-blown virtual machine like KVM or VirtualBox.

In general, this may sound like a weird and hard-to-integrate tool, but it does have its uses. It is an entire Linux kernel running as a normal user. This allows you to run potentially untrusted code without affecting the host machine. It also allows you to test experimental system configuration changes without having to reboot or take services down.

Also, because this kernel and its processes are isolated from the host machine, this means that processes running inside a user mode Linux kernel will not be visible to the host machine. This is unlike a Docker container, where processes in those containers are visible to the host. See this (snipped) pstree output from one of my servers:

           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           │                 │      └─surl
           │                 └─9*[{containerd-shim}]
           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           │                 └─9*[{containerd-shim}]

Compare it to the user mode Linux pstree output:


With a Docker container, I can see the names of the processes being run in the guest from the host. With a user mode Linux kernel, I cannot do this. This means that monitoring tools that function using Linux's auditing subsystem cannot monitor processes running inside the guest. This could be a double-edged sword in some edge scenarios.

This post represents a lot of research and brute-force attempts at trying to do this. I have had to assemble things together using old resources, reading kernel source code, intense debugging of code that was last released when I was in elementary school, tracking down a Heroku buildpack with a pre-built binary for a tool I need and other hackery that made people in IRC call me magic. I hope that this post will function as reliable documentation for doing this with a modern kernel and operating system.


Setting up user mode Linux is done in a few steps:

  • Installing host dependencies
  • Downloading Linux
  • Configuring Linux
  • Building the kernel
  • Installing the binary
  • Setting up the guest filesystem
  • Creating the kernel command line
  • Setting up networking for the guest
  • Running the guest kernel

I am assuming that you are wanting to do this on Ubuntu or another Debian-like system. I have tried to do this from Alpine (my distro of choice), but I have been unsuccessful as the Linux kernel seems to have glibc-isms hard-assumed in the user mode Linux drivers. I plan to report these to upstream when I have debugged them further.

Installing Host Dependencies

Ubuntu requires at least the following packages installed to build the Linux kernel (assuming a completely fresh install):

  • build-essential
  • flex
  • bison
  • xz-utils
  • wget
  • ca-certificates
  • bc
  • linux-headers-4.15.0-47-generic (though any kernel version will do)

You can install these with the following command (as root or running with sudo):

apt-get -y install build-essential flex bison xz-utils wget ca-certificates bc \
  linux-headers-4.15.0-47-generic

Additionally, running the menu configuration program for the Linux kernel will require installing libncurses-dev. Please make sure it's installed using the following command (as root or running with sudo):

apt-get -y install libncurses-dev

Downloading the Kernel

Set up a location for the kernel to be downloaded and built. This will require approximately 1.3 gigabytes of space to run, so please make sure that there is at least this much space free.

Head to kernel.org and get the download URL of the latest stable kernel. As of the time of writing this post, this URL is the following:

https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.1.16.tar.xz
Download this file with wget:

wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.1.16.tar.xz

And extract it with tar:

tar xJf linux-5.1.16.tar.xz

Now enter the directory created by the tarball extraction:

cd linux-5.1.16

Configuring the Kernel

The kernel build system is a bunch of Makefiles with a lot of custom tools and scripts to automate builds. Open the interactive configuration program:

make ARCH=um menuconfig

It will build some things and then present you with a dialog interface. You can enable settings by pressing Space or Enter when <Select> is highlighted on the bottom of the screen. You can change which item is selected in the upper dialog with the up and down arrow keys. You can change which item is highlighted on the bottom of the screen with the left and right arrow keys.

When there is a ---> at the end of a feature name, that means it is a submenu. You can enter a submenu using the Enter key. If you enter a menu you can exit it with <Exit>.

Enable the following settings with <Select>, making sure there is a [*] next to them:

UML-specific Options:
  - Host filesystem
Networking support (enable this to get the submenu to show up):
  - Networking options:
    - TCP/IP Networking
UML Network devices:
  - Virtual network device
  - SLiRP transport

Then exit back out to a shell by selecting <Exit> until there is a dialog asking you if you want to save your configuration. Select <Yes> and hit Enter.

I encourage you to play around with the build settings after reading through this post. You can learn a lot about Linux at a low level by changing flags and seeing how they affect the kernel at runtime.

Building the Kernel

The Linux kernel is a large program with a lot of things going on. Even with this rather minimal configuration, it can take a while on older hardware. Build the kernel with the following command:

make ARCH=um -j$(nproc)

This will tell make to use all available CPU cores/hyperthreads to build the kernel. The $(nproc) at the end of the build command tells the shell to paste in the output of the nproc command (this command is part of coreutils, which is a default package in Ubuntu).

After a while, the kernel will be built to ./linux.

Installing the Binary

Because user mode Linux builds a normal binary, you can install it like you would any other command line tool. Here's the configuration I use:

mkdir -p ~/bin
cp linux ~/bin/linux

If you want, ensure that ~/bin is in your $PATH:

export PATH=$PATH:$HOME/bin

Setting up the Guest Filesystem

Create a home for the guest filesystem:

mkdir -p $HOME/prefix/uml-demo
cd $HOME/prefix

Open alpinelinux.org. Click on Downloads. Scroll down to where it lists the MINI ROOT FILESYSTEM. Right-click on the x86_64 link and copy it. As of the time of writing this post, the latest URL for this is:

http://dl-cdn.alpinelinux.org/alpine/v3.10/releases/x86_64/alpine-minirootfs-3.10.0-x86_64.tar.gz
Download this tarball to your computer:

wget -O alpine-rootfs.tgz http://dl-cdn.alpinelinux.org/alpine/v3.10/releases/x86_64/alpine-minirootfs-3.10.0-x86_64.tar.gz

Now enter the guest filesystem folder and extract the tarball:

cd uml-demo
tar xf ../alpine-rootfs.tgz

This will create a very minimal filesystem stub. Because of how this is being run, it will be difficult to install binary packages from Alpine's package manager apk, but this should be good enough to work as a proof of concept.

The tool tini will be needed in order to prevent the guest kernel from having its memory used up by zombie processes.

Install it by doing the following:

wget -O tini https://github.com/krallin/tini/releases/download/v0.18.0/tini-static
chmod +x tini

Creating the Kernel Command Line

The Linux kernel has command line arguments like most other programs. To view what command line options are compiled into the user mode kernel, run it with --help:

linux --help
User Mode Linux v5.1.16
        available at http://user-mode-linux.sourceforge.net/

--showconfig
    Prints the config file that this UML binary was generated from.

iomem=<name>,<file>
    Configure <file> as an IO memory region named <name>.

mem=<Amount of desired ram>
    This controls how much "physical" memory the kernel allocates
    for the system. The size is specified as a number followed by
    one of 'k', 'K', 'm', 'M', which have the obvious meanings.
    This is not related to the amount of memory in the host.  It can
    be more, and the excess, if it's ever used, will just be swapped out.
        Example: mem=64M

--help
    Prints this message.

debug
    this flag is not needed to run gdb on UML in skas mode

root=<file containing the root fs>
    This is actually used by the generic kernel in exactly the same
    way as in any other kernel. If you configure a number of block
    devices and want to boot off something other than ubd0, you
    would use something like:
        root=/dev/ubd5

--version
    Prints the version number of the kernel.

umid=<name>
    This is used to assign a unique identity to this UML machine and
    is used for naming the pid file and management console socket.

con[0-9]*=<channel description>
    Attach a console or serial line to a host channel.  See
    http://user-mode-linux.sourceforge.net/old/input.html for a complete
    description of this switch.

eth[0-9]+=<transport>,<options>
    Configure a network device.

aio=2.4
    This is used to force UML to use 2.4-style AIO even when 2.6 AIO is
    available.  2.4 AIO is a single thread that handles one request at a
    time, synchronously.  2.6 AIO is a thread which uses the 2.6 AIO
    interface to handle an arbitrary number of pending requests.  2.6 AIO
    is not available in tt mode, on 2.4 hosts, or when UML is built with
    /usr/include/linux/aio_abi.h not available.  Many distributions don't
    include aio_abi.h, so you will need to copy it from a kernel tree to
    your /usr/include/linux in order to build an AIO-capable UML

nosysemu
    Turns off syscall emulation patch for ptrace (SYSEMU).
    SYSEMU is a performance-patch introduced by Laurent Vivier. It changes
    behaviour of ptrace() and helps reduce host context switch rates.
    To make it work, you need a kernel patch for your host, too.
    See http://perso.wanadoo.fr/laurent.vivier/UML/ for further
    information.

uml_dir=<directory>
    The location to place the pid and umid files.

quiet
    Turns off information messages during boot.

hostfs=<root dir>,<flags>,...
    This is used to set hostfs parameters.  The root directory argument
    is used to confine all hostfs mounts to within the specified directory
    tree on the host.  If this isn't specified, then a user inside UML can
    mount anything on the host that's accessible to the user that's running it.
    The only flag currently supported is 'append', which specifies that all
    files opened by hostfs will be opened in append mode.

This is a lot of output, but it explains the options available in detail. Let's start up a kernel with a very minimal set of options:

linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-demo \
  rw \
  mem=64M \
  init=/bin/sh

This tells the guest kernel to do the following things:

  • Assume the root filesystem is the pseudo-device /dev/root
  • Select hostfs as the root filesystem driver
  • Mount the guest filesystem we have created as the root device
  • In read-write mode
  • Use only 64 megabytes of ram (you can get away with far less depending on what you are doing, but 64 MB seems to be a happy medium)
  • Have the kernel automatically start /bin/sh as the init process

Run this command, and you should get something like the following output:

Core dump limits :
        soft - 0
        hard - NONE
Checking that ptrace can change system call numbers...OK
Checking syscall emulation patch for ptrace...OK
Checking advanced syscall emulation patch for ptrace...OK
Checking environment variables for a tempdir...none found
Checking if /dev/shm is on tmpfs...OK
Checking PROT_EXEC mmap in /dev/shm...OK
Adding 32137216 bytes to physical memory to account for exec-shield gap
Linux version 5.1.16 (cadey@kahless) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #30 Sun Jul 7 18:57:19 UTC 2019
Built 1 zonelists, mobility grouping on.  Total pages: 23898
Kernel command line: root=/dev/root rootflags=/home/cadey/dl/uml/alpine rootfstype=hostfs rw mem=64M init=/bin/sh
Dentry cache hash table entries: 16384 (order: 5, 131072 bytes)
Inode-cache hash table entries: 8192 (order: 4, 65536 bytes)
Memory: 59584K/96920K available (2692K kernel code, 708K rwdata, 588K rodata, 104K init, 244K bss, 37336K reserved, 0K cma-reserved)
SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
clocksource: timer: mask: 0xffffffffffffffff max_cycles: 0x1cd42e205, max_idle_ns: 881590404426 ns
Calibrating delay loop... 7479.29 BogoMIPS (lpj=37396480)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
Checking that host ptys support output SIGIO...Yes
Checking that host ptys support SIGIO on close...No, enabling workaround
devtmpfs: initialized
random: get_random_bytes called from setup_net+0x48/0x1e0 with crng_init=0
Using 2.6 host AIO
clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
futex hash table entries: 256 (order: 0, 6144 bytes)
NET: Registered protocol family 16
clocksource: Switched to clocksource timer
NET: Registered protocol family 2
tcp_listen_portaddr_hash hash table entries: 256 (order: 0, 4096 bytes)
TCP established hash table entries: 1024 (order: 1, 8192 bytes)
TCP bind hash table entries: 1024 (order: 1, 8192 bytes)
TCP: Hash tables configured (established 1024 bind 1024)
UDP hash table entries: 256 (order: 1, 8192 bytes)
UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
NET: Registered protocol family 1
console [stderr0] disabled
mconsole (version 2) initialized on /home/cadey/.uml/tEwIjm/mconsole
Checking host MADV_REMOVE support...OK
workingset: timestamp_bits=62 max_order=14 bucket_order=0
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered (default)
io scheduler bfq registered
loop: module loaded
NET: Registered protocol family 17
Initialized stdio console driver
Using a channel type which is configured out of UML
setup_one_line failed for device 1 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 2 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 3 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 4 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 5 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 6 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 7 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 8 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 9 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 10 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 11 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 12 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 13 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 14 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 15 : Configuration failed
Console initialized on /dev/tty0
console [tty0] enabled
console [mc-1] enabled
Failed to initialize ubd device 0 :Couldn't determine size of device's file
VFS: Mounted root (hostfs filesystem) on device 0:11.
devtmpfs: mounted
This architecture does not have kernel memory protection.
Run /bin/sh as init process
/bin/sh: can't access tty; job control turned off
random: fast init done
/ # 

This gives you a very minimal system, without things like /proc mounted, or a hostname assigned. Try the following commands:

  • uname -av
  • cat /proc/self/pid
  • hostname

To exit this system, type in exit or press Control-d. This will kill the shell, making the guest kernel panic:

/ # exit
Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000000
fish: “./linux root=/dev/root rootflag…” terminated by signal SIGABRT (Abort)

This kernel panic happens because the Linux kernel always assumes that its init process is running. Without this process running, the system cannot function anymore and exits. Because this is a user mode process, this results in the process sending itself SIGABRT, causing it to exit.

Setting up Networking for the Guest

This is about where things get really screwy. Networking for a user mode Linux system is where the "user mode" facade starts to fall apart. Networking at the system level is usually limited to privileged execution modes, for very understandable reasons.

The slirp Adventure

However, there's an ancient and largely unmaintained tool called slirp that user mode Linux can interface with. It acts as a user-level TCP/IP stack and does not rely on any elevated permissions to run. This tool was first released in 1995, and its last release was made in 2006. This tool is old enough that compilers have changed so much in the meantime that the software has effectively rotted.

So, let's install slirp from the Ubuntu repositories and test running it:

sudo apt-get install slirp
slirp
Slirp v1.0.17 (BETA)

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 10.0.2.2
IP address of your DNS(s):
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500, 115200 baud]

SLiRP Ready ...
fish: “/usr/bin/slirp” terminated by signal SIGSEGV (Address boundary error)

Oh dear. Let's install the debug symbols for slirp and see if we can tell what's going on:

sudo apt-get install gdb slirp-dbgsym
gdb /usr/bin/slirp
GNU gdb (Ubuntu 8.1-0ubuntu3)
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/slirp...Reading symbols from /usr/lib/debug/.build-id/c6/2e75b69581a1ad85f72ac32c0d7af913d4861f.debug...done.
(gdb) run
Starting program: /usr/bin/slirp
Slirp v1.0.17 (BETA)

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 10.0.2.2
IP address of your DNS(s):
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500, 115200 baud]

SLiRP Ready ...

Program received signal SIGSEGV, Segmentation fault.
ip_slowtimo () at ip_input.c:457
457     ip_input.c: No such file or directory.

It fails at this line. Let's see the detailed stacktrace to see if anything helps us:

(gdb) bt full
#0  ip_slowtimo () at ip_input.c:457
        fp = 0x55784a40
#1  0x000055555556a57c in main_loop () at ./main.c:980
        so = <optimized out>
        so_next = <optimized out>
        timeout = {tv_sec = 0, tv_usec = 0}
        ret = 0
        nfds = 0
        ttyp = <optimized out>
        ttyp2 = <optimized out>
        best_time = <optimized out>
        tmp_time = <optimized out>
#2  0x000055555555b116 in main (argc=1, argv=0x7fffffffdc58) at ./main.c:95
No locals.

So it's failing in its main loop while it is trying to check if any timeouts occurred. This is where I had to give up trying to debug this further. Let's see if building it from source works. I re-uploaded the tarball from Sourceforge because downloading tarballs from Sourceforge from the command line is a pain.

cd ~/dl
wget https://xena.greedo.xeserv.us/files/slirp-1.0.16.tar.gz
tar xf slirp-1.0.16.tar.gz
cd slirp-1.0.16/src
./configure --prefix=$HOME/prefix/slirp
make

This spews warnings about undefined inline functions. This then fails to link the resulting binary. It appears that at some point between the release of this software and the current day, gcc stopped creating symbols for inline functions in intermediate compiled files. Let's try to globally replace the inline keyword with an empty comment to see if that works:

vi slirp.h
#define inline /**/

Nope. That doesn't work either. It continues to fail to find the symbols for those inline functions.

This is when I gave up. I started searching GitHub for Heroku buildpacks that already had this implemented or done. My theory was that a Heroku buildpack would probably include the binaries I needed, so I searched for a bit and found this buildpack. I downloaded it and extracted uml.tar.gz and found the following files:

total 6136
-rwxr-xr-x 1 cadey cadey   79744 Dec 10  2017 ifconfig*
-rwxr-xr-x 1 cadey cadey     373 Dec 13  2017 init*
-rwxr-xr-x 1 cadey cadey  149688 Dec 10  2017 insmod*
-rwxr-xr-x 1 cadey cadey   66600 Dec 10  2017 route*
-rwxr-xr-x 1 cadey cadey  181056 Jun 26  2015 slirp*
-rwxr-xr-x 1 cadey cadey 5786592 Dec 15  2017 uml*
-rwxr-xr-x 1 cadey cadey     211 Dec 13  2017 uml_run*

That's a slirp binary! Does it work?

./slirp
Slirp v1.0.17 (BETA) FULL_BOLT

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 10.0.2.2
IP address of your DNS(s):
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500]

SLiRP Ready ...

It's not immediately crashing, so I think it should be good! Let's copy this binary to ~/bin/slirp:

cp slirp ~/bin/slirp

Just in case the person who created this buildpack takes it down, I have mirrored it.

Configuring Networking

Now let's configure networking on our guest. Adjust your kernel command line:

linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-demo \
  rw \
  mem=64M \
  eth0=slirp,,$HOME/bin/slirp \
  init=/bin/sh

We should get that shell again. Let's enable networking:

mount -t proc proc proc/
mount -t sysfs sys sys/

ifconfig eth0 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
route add default gw 10.0.2.2

The first two commands set up /proc and /sys, which are required for ifconfig to function. The ifconfig command sets up the network interface to communicate with slirp. The route command sets the kernel routing table to force all traffic over the slirp tunnel. Let's test with a DNS query:

nslookup google.com
Server:
Address 1: dns.google

Name:      google.com
Address 1: lga25s63-in-f14.1e100.net
Address 2: 2607:f8b0:4006:81b::200e lga25s63-in-x0e.1e100.net

That works!

Let's automate this with a shell script:

#!/bin/sh
# init.sh

mount -t proc proc proc/
mount -t sysfs sys sys/
ifconfig eth0 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
route add default gw 10.0.2.2

echo "networking set up"

exec /tini /bin/sh

and mark it executable:

chmod +x init.sh

and then change the kernel command line:

linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-demo \
  rw \
  mem=64M \
  eth0=slirp,,$HOME/bin/slirp \
  init=/init.sh

Then re-run it:

SLiRP Ready ...
networking set up
/bin/sh: can't access tty; job control turned off

nslookup google.com
Server:
Address 1: dns.google

Name:      google.com
Address 1: lga25s63-in-f14.1e100.net
Address 2: 2607:f8b0:4004:800::200e iad30s09-in-x0e.1e100.net

And networking works reliably!


So that you can more easily test this, I have created a Dockerfile that automates most of these steps and should result in a working setup. I have a pre-made kernel configuration that should do everything outlined in this post, but this post outlines a more minimal setup.

I hope this post is able to help you understand how to do this. This became a bit of a monster, but this should be a comprehensive guide on how to build, install and configure user mode Linux for modern operating systems. Next steps from here should include installing services and other programs into the guest system. Since Docker container images are just glorified tarballs, you should be able to extract an image with docker export and then set the root filesystem location in the guest kernel to that location. Then run the command that the Dockerfile expects via a shell script.

Special thanks to rkeene of #lobsters on Freenode. Without his help with attempting to debug slirp, I wouldn't have gotten this far. I have no idea how his Slackware system works fine with slirp but my Ubuntu and Alpine systems don't, and why the binary he gave me also didn't work; but I got something working and that's good enough for me.

The h Programming Language

Permalink - Posted on 2019-06-30 00:00

The h Programming Language

h is a project of mine that I have released recently. It is a single-paradigm, multi-tenant friendly, Turing-incomplete programming language that does nothing but print one of two things:

  • the letter h
  • a single quote (the Lojbanic "h")

It does this via WebAssembly. This may sound like a pointless complication, but actually this ends up making things a lot simpler. WebAssembly is a virtual machine (fake computer that only exists in code) intended for browsers, but I've been using it for server-side tasks.

I have written more about/with WebAssembly in the past in these posts:

This is a continuation of the following two posts:

All of the relevant code for h is here.

h is a somewhat standard three-phase compiler. Each of the phases is as follows:

  • Parsing the input into a syntax tree
  • Generating WebAssembly Text Format from that tree
  • Compiling the WebAssembly Text Format into a binary

Parsing the Grammar

As mentioned in a prior post, h has a formal grammar defined in Parsing Expression Grammar. I took this grammar (with some minor modifications) and fed it into a tool called peggy to generate a Go source version of the parser. This parser has some minimal wrappers around it, mostly to simplify the output and remove unneeded nodes from the tree. This simplifies the later compilation phases.
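
The node type itself isn't spelled out in this post. As a rough sketch based on the literals shown below, it could look something like this in Go (the name Node and the exact field types are my assumptions, not necessarily the real ones):

type Node struct {
    Name string  // the name of the grammar rule that matched
    Text string  // the slice of source text this node covers
    Kids []*Node // child nodes; nil for leaves
}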

The input to h looks something like this:

h

The output syntax tree pretty-prints to something like this:

H("h")

This is also represented using a tree of nodes that looks something like this:

Node{
    Name: "H",
    Text: "h",
    Kids: nil,
}

A more complicated program will look something like this:

Node{
    Name: "H",
    Text: "h h h",
    Kids: []*Node{
        {
            Name: "",
            Text: "h",
            Kids: nil,
        },
        {
            Name: "",
            Text: "h",
            Kids: nil,
        },
        {
            Name: "",
            Text: "h",
            Kids: nil,
        },
    },
}

Now that we have this syntax tree, it's easy to go to the next phase of compilation: generating the WebAssembly Text Format.

WebAssembly Text Format

WebAssembly Text Format is a human-editable and understandable version of WebAssembly. It is pretty low level, but it is actually fairly simple. Let's take an example of the h compiler output and break it down:

(module
 (import "h" "h" (func $h (param i32)))
 (func $h_main
       (local i32 i32 i32)
       (local.set 0 (i32.const 10))
       (local.set 1 (i32.const 104))
       (local.set 2 (i32.const 39))
       (call $h (get_local 1))
       (call $h (get_local 0))
 )
 (export "h" (func $h_main))
)

Fundamentally, WebAssembly binary files are also called modules. Each .wasm file can have only one module defined in it. Modules can have sections that contain the following information:

  • External function imports
  • Function definitions
  • Memory information
  • Named function exports
  • Global variable definitions
  • Other custom data that may be vendor-specific

h only uses external function imports, function definitions and named function exports.

import imports a function from the surrounding runtime with two fields: module and function name. Because this is an obfuscated language, the function h from module h is imported as $h. This function works somewhat like the C library function putchar().

func creates a function. In this case we are creating a function named $h_main. This will be the entrypoint for the h program.

Inside the function $h_main, there are three local variables created: 0, 1 and 2. They correlate to the following values:

Local Number | Explanation       | Integer Value
------------ | ----------------- | -------------
0            | Newline character | 10
1            | Lowercase h       | 104
2            | Single quote      | 39

As such, this program prints a single lowercase h and then a newline.

export lets consumers of this WebAssembly module get a name for a function, linear memory or global value. As we only need one function in this module, we export $h_main as "h".

Compiling this to a Binary

The next phase of compiling is to turn this WebAssembly Text Format into a binary. For simplicity, the tool wat2wasm from the WebAssembly Binary Toolkit is used. This tool creates a WebAssembly binary out of WebAssembly Text Format.

Usage is simple (assuming you have the WebAssembly Text Format file above saved as h.wat):

wat2wasm h.wat -o h.wasm

And you will create h.wasm with the following sha256 sum:

sha256sum h.wasm
8457720ae0dd2deee38761a9d7b305eabe30cba731b1148a5bbc5399bf82401a  h.wasm

Now that the final binary is created, we can move to the runtime phase.


The Runtime

The h runtime is incredibly simple. It provides the h.h putchar-like function and executes the h function from the binary you feed it. It also times execution and keeps track of the number of instructions the program runs. This count is called "gas" for historical reasons involving blockchains.

I use Perlin Network's life as the implementation of WebAssembly in h. I have experience with it from Olin.
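
As a sketch of how that wiring can look with life's exec package (this is my guess at a minimal embedding based on life's README, not the actual h runtime; the real runtime also meters gas and reports timing):

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/perlin-network/life/exec"
)

// Resolver provides the h.h import, which acts like putchar.
type Resolver struct{}

func (r *Resolver) ResolveFunc(module, field string) exec.FunctionImport {
	if module == "h" && field == "h" {
		return func(vm *exec.VirtualMachine) int64 {
			// the single i32 parameter is the character to print
			fmt.Printf("%c", rune(vm.GetCurrentFrame().Locals[0]))
			return 0
		}
	}
	panic("unknown import " + module + "." + field)
}

func (r *Resolver) ResolveGlobal(module, field string) int64 {
	panic("no global imports")
}

func main() {
	data, err := ioutil.ReadFile("h.wasm")
	if err != nil {
		panic(err)
	}

	cfg := exec.VMConfig{DefaultMemoryPages: 128, DefaultTableSize: 65536}
	vm, err := exec.NewVirtualMachine(data, cfg, &Resolver{}, nil)
	if err != nil {
		panic(err)
	}

	entry, ok := vm.GetFunctionExport("h")
	if !ok {
		panic("no h export")
	}
	if _, err := vm.Run(entry); err != nil {
		panic(err)
	}
}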

The Playground

As part of this project, I wanted to create an interactive playground. This allows users to run arbitrary h programs on my server. As the only system call is putchar, this is safe. The playground also has some limitations on how big of a program it can run. The playground server takes the program source as the body of an HTTP POST and replies with the compiled program and its execution results as JSON. A call and its output look something like this:

curl -H "Content-Type: text/plain" --data "h" https://h.christine.website/api/playground | jq
{
  "prog": {
    "src": "h",
    "wat": "(module\n (import \"h\" \"h\" (func $h (param i32)))\n (func $h_main\n       (local i32 i32 i32)\n       (local.set 0 (i32.const 10))\n       (local.set 1 (i32.const 104))\n       (local.set 2 (i32.const 39))\n       (call $h (get_local 1))\n       (call $h (get_local 0))\n )\n (export \"h\" (func $h_main))\n)",
    "ast": "H(\"h\")"
  },
  "res": {
    "out": "h\n",
    "gas": 11,
    "exec_duration": 12345
  }
}

The execution duration is in nanoseconds, as it is a Go standard library time.Duration serialized directly.
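
If you want to make sense of that number on the client side, the conversion back is a one-liner (using the example value from above):

package main

import (
	"fmt"
	"time"
)

func main() {
	// exec_duration is a time.Duration serialized as nanoseconds
	d := time.Duration(12345)
	fmt.Println(d) // prints 12.345µs
}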

Bugs h has Found

This will be updated in the future, but h has already found a bug in Innative. There was a bug in how Innative handled C name mangling of binaries. Output of the h compiler is now a test case in Innative. I consider this a success for the project. It is such a little thing, but it means a lot to me for some reason. My shitpost created a test case in a project I tried to integrate it with.

That's just awesome to me in ways I have trouble explaining.

As such, h programs do work with Innative. Here's how to do it:

First, install the h compiler and runtime with the following command:

go get within.website/x/cmd/h

This will install the h binary to your $GOPATH/bin, so ensure that is part of your path (if it is not already):

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

Then create an h binary like this:

h -p "h h" -o hh.wasm

Now we need to provide Innative with the h.h system call implementation, so open h.c and enter the following:

#include <stdio.h>

void h_WASM_h(char data) {
  putchar(data); // h.h is putchar-like, per the runtime section above
}

Then build it to an object file:

gcc -c -o h.o h.c

Then pack it into a static library .ar file:

ar rsv libh.a h.o

Then create the shared object with Innative:

innative-cmd -l ./libh.a hh.wasm

This should create hh.so in the current working directory.

Now create the following Nim wrapper at h.nim:

proc hh_WASM_h() {. importc, dynlib: "./hh.so" .}

hh_WASM_h()

and build it:

nim c h.nim

then run it:

./h

And congrats, you have now compiled h to a native shared object.


Now, something you might be asking yourself as you read through this post is: "Why the heck are you doing this?" That's honestly a good question. One of the things I want to do with computers is to create art for the sake of art. h is one such project. h is not a productive tool. You cannot create anything useful with h. This is an exercise in creating a compiler and runtime from scratch, based on my past experiences with parsing lojban, WebAssembly on the server and frustrating marketing around programming tools. I wanted to create something that deliberately pokes at all of the common ways that programming languages and tooling are advertised. I wanted to make it a fully secure tool as well, with an arbitrary limitation of having no memory usage. Everything is fully functional. There are a few grammar bugs that I'm calling features.


Within Security Advisory

Permalink - Posted on 2019-06-24 00:00


Within Security Advisory

Root-level Remote Command Injection in the V playground (OVE-20190623-0001)

The real CVEs are the friends we made along the way



While playing with the V playground, a root-level command injection vulnerability was discovered. This allows for an unauthenticated attacker to execute arbitrary root-level commands on the playground server.

This vulnerability is instantly exploitable by a remote, unauthenticated attacker in the default configuration. To remotely exploit this vulnerability, an attacker must send specially created HTTP requests to the playground server containing a malformed function call.

This playground server is not open sourced or versioned yet, but this vulnerability has led to the compromise of the box, as reported by the lead developer of V.

Remote Exploitation

V allows for calling of C functions through a few means:

  • starting a line with a # character
  • calling a C function with the C. namespace

The V playground insufficiently strips the latter form of the function call, allowing an invocation such as this:

fn main() {
  C .system(' id')
}

or even this:

fn main() {
  C
		.system(' id')
}

As the server is running as the root user, successful exploitation can result in an unauthenticated user totally compromising the system, as happened on June 23, 2019. As the source code and configuration of the V playground server are unknown, it is not possible to track usage of these commands.

The playground did attempt to block these attacks, but it appeared to do naive pattern matching on # or C., allowing the alternative invocations shown above.
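
To illustrate why this class of filter fails (the playground's source is unpublished, so this is a guess at its general shape rather than its actual code):

package main

import (
	"fmt"
	"strings"
)

// looksSafe sketches a naive denylist filter. It catches the obvious
// spellings but not whitespace variants of the same call.
func looksSafe(src string) bool {
	for _, banned := range []string{"#", "C."} {
		if strings.Contains(src, banned) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(looksSafe("C.system(' id')"))  // false: caught
	fmt.Println(looksSafe("C .system(' id')")) // true: bypassed
}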

Security Suggestions

Do not run the playground server as a root user outside a container or other form of isolation. The fact that this server runs user-submitted code makes this kind of thing very difficult to isolate and/or secure properly. The use of an explicit sandboxing environment like gVisor or Docker is suggested. The use of more elaborate sandboxing mechanisms like CloudABI or WebAssembly may be practical for future developments, but is admittedly out of scope for this initial class of issues.


Special thanks to the people of #ponydev for helping to discover and toy with this bug.


Timeline

All times are Eastern Standard Time.

June 23, 2019

  • 4:56 PM - The first exploit was found and the contents of /etc/passwd were dumped; other variants of this attack were proposed and tested in the meantime
  • 5:00 PM - The V playground server stopped replying to HTTP and ICMP messages
  • 6:26 PM - The V creator was notified of this issue
  • 7:02 PM - The V creator acknowledged the issue and admitted the machine was compromised

June 24, 2019

  • 12:00 AM - This security bulletin was released

V is for Vaporware

Permalink - Posted on 2019-06-23 00:00

V is for Vaporware

V is a programming language that has been hyped a lot. As it's recently had its first alpha release, I figured it would be a good idea to step through it and see if it lives up to the promises that the author has been claiming for months.

The V website claims the following on the front page:

  • The compiler compiles 1.2 million lines of code per CPU core per second
  • The resulting code is as fast as C
  • Built-in serialization without runtime reflection
  • Minimal amount of allocations
  • Zero dependencies
  • Requires only 0.4 MB of space to build
  • Able to translate arbitrary C/C++ code to V and build it faster than C/C++
  • Hot code reloading
  • 2d/3d graphics support in the standard library
  • Effortless cross-compilation
  • A powerful built-in web framework
  • The compiler generates direct machine code

As far as I can tell, all of the above features are either "work-in-progress" or completely absent from the source repository.


Compilation Speed

The author mentions that the compiler is fast, stating the following:

Fast compilation

V compiles ≈1.2 million lines of code per second per CPU core. (Intel i5-7500 @ 3.40GHz, SM0256L SSD, no optimization)

Such speed is achieved by direct machine code generation [wip] and a strong modularity.

V can also emit C, then the compilation speed drops to ≈100k lines/second/CPU.

Direct machine code generation is at a very early stage. Right now only x64/Mach-O is supported. This means that for now emitting C has to be used. By the end of this year x64 generation should be stable enough.

This has a few pretty fantastic claims. Let's see if they can be replicated. Creating a 1.2 million line of code file should be pretty easy:

-- lua
print "fn main() {"

for i = 0, 1200000, 1 do
  print "println('hello, world ')"
end

print "}"

Then let's run this script to generate the 1.2 million lines of code:

$ time lua5.3 ./gencode.lua > 1point2mil.v
        4.29 real         0.83 user         3.27 sys

And compile the resulting file:

$ time v 1point2mil.v
pass=2 fn=`main`
panic: 1point2mil.v:50003
more than 50 000 statements in function `main`
        2.43 real         2.13 user         0.15 sys

Oh boy. It's also worth noting that it took more than 2 seconds to compile only 50,000 lines of code on my Core m7 12" MacBook.

No Dependencies

V claims to have zero dependencies. Again quoting from the website:

400 KB compiler with zero [wip] dependencies

The entire language and its standard library are less than 400 KB. V is written in V, and you can build it in 0.4 seconds.

(By the end of this year this number will drop to ≈0.15 seconds.)


Right now the V compiler does have one dependency: a C compiler. But it's needed to bootstrap the language anyway, and if you are doing development, chances are you already have a C compiler installed.

It's a small dependency, and it's not going to be needed once x64 generation is mature enough.

AMD64 is not the only CPU architecture that exists, but okay, I'll accept that you're only targeting the most common one.

Digging through the readme, its graphics library and HTTP support require some dependencies:

In order to build Tetris and anything else using the graphics module, you will need to install glfw and freetype.

If you plan to use the http package, you also need to install libcurl.

glfw and libcurl dependencies will be removed soon.

sudo apt install glfw libglfw3-dev libfreetype6-dev libcurl3-dev

brew install glfw freetype curl

I'm sorry, but this combined with the explicit dependency on a C compiler means that V has dependencies. Now, reading the claim as literally as possible, it says the compiler itself has zero dependencies. Let's see what ldd says about the compiler when built on Linux:

$ ldd v
        linux-vdso.so.1 (0x00007ffc0f02e000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f356c6cc000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f356c2db000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f356cb25000)

So the compiler with "zero dependencies" is a dynamically linked binary with dependencies on libpthread and libc (the other two are glibc-specific).

Also of note, I had to modify the Makefile in order to get it to build on Linux without segfaulting every time it tried to compile code:

$ git diff
diff --git a/compiler/Makefile b/compiler/Makefile
index e29d30d..353824d 100644
--- a/compiler/Makefile
+++ b/compiler/Makefile
@@ -4,7 +4,7 @@ v: vc
        ./vc -o v .

 vc: v.c
-       cc -std=c11 -w -o vc v.c
+       clang -Dlinux -std=c11 -w -o vc v.c

        wget https://vlang.io/v.c

Otherwise it would segfault every time I tried to run it with:

$ ./v --help
fish: “./v --help” terminated by signal SIGSEGV (Address boundary error)

Before I added the -Dlinux flag, it also failed to compile with the following error:

$ make
clang -std=c11 -w -o vc v.c
./vc -o v .
cc: error: unrecognized command line option ‘-mmacosx-version-min=10.7’
V panic: clang error
Makefile:4: recipe for target 'v' failed
make: *** [v] Error 1

This implies that the compiler was falsely detecting Linux as macOS.

Memory Safety

V claims to be memory-safe:

Memory management

There's no garbage collection or reference counting. V cleans up what it can during compilation.

So I made a simple "hello world" program:

fn main() {
  println('hello world!') // V only supports single quoted strings
}

and built it on my Linux box with valgrind installed. Surely a "hello world" program has no good reason to leak memory, right?

$ time v hello.v
0.02user 0.00system 0:00.32elapsed 9%CPU (0avgtext+0avgdata 6196maxresident)k
0inputs+104outputs (0major+1162minor)pagefaults 0swaps

$ valgrind ./hello
==5860== Memcheck, a memory error detector
==5860== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==5860== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==5860== Command: ./hello
hello, world
==5860== HEAP SUMMARY:
==5860==     in use at exit: 1,000 bytes in 1 blocks
==5860==   total heap usage: 2 allocs, 1 frees, 2,024 bytes allocated
==5860== LEAK SUMMARY:
==5860==    definitely lost: 0 bytes in 0 blocks
==5860==    indirectly lost: 0 bytes in 0 blocks
==5860==      possibly lost: 0 bytes in 0 blocks
==5860==    still reachable: 1,000 bytes in 1 blocks
==5860==         suppressed: 0 bytes in 0 blocks
==5860== Rerun with --leak-check=full to see details of leaked memory
==5860== For counts of detected and suppressed errors, rerun with: -v
==5860== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Looking at the generated C code, the memory leak is plainly obvious: init_consts creates a 1,000 byte allocation and never frees it. This is a memory leak that is unavoidable in any program compiled with V. This is potentially confusing for people who are trying to debug memory leaks in their V code. They will always be off by 1 allocation and 1,000 bytes leaked without an easy way to tell why that is the case. The compiler itself also leaks memory:

$ valgrind v hello.v
==9096== Memcheck, a memory error detector
==9096== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==9096== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==9096== Command: v hello.v
==9096== HEAP SUMMARY:
==9096==     in use at exit: 3,861,785 bytes in 24,843 blocks
==9096==   total heap usage: 25,588 allocs, 745 frees, 4,286,917 bytes allocated
==9096== LEAK SUMMARY:
==9096==    definitely lost: 778,354 bytes in 18,773 blocks
==9096==    indirectly lost: 3,077,104 bytes in 6,020 blocks
==9096==      possibly lost: 0 bytes in 0 blocks
==9096==    still reachable: 6,327 bytes in 50 blocks
==9096==         suppressed: 0 bytes in 0 blocks
==9096== Rerun with --leak-check=full to see details of leaked memory
==9096== For counts of detected and suppressed errors, rerun with: -v
==9096== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Space Required to Build

V also claims to only require 400-ish kilobytes of disk space to build itself. Let's test this claim with a minimal Dockerfile:

FROM xena/alpine

RUN apk --no-cache add build-base libexecinfo-dev clang git \
 && git clone https://github.com/vlang/v /root/code/v \
 && cd /root/code/v/compiler \
 && wget https://vlang.io/v.c \
 && clang -Dlinux -std=c11 -w -o vc v.c \
 && ./vc -o v . \
 && du -sh /root/code/v /root/.vlang0.0.12 \
 && apk del clang

Except it doesn't build on Alpine:

/usr/bin/ld: /tmp/v-c9fb07.o: in function `os__print_backtrace':
v.c:(.text+0x84d9): undefined reference to `backtrace'
/usr/bin/ld: v.c:(.text+0x8514): undefined reference to `backtrace_symbols_fd'
clang-8: error: linker command failed with exit code 1 (use -v to see invocation)

It looks like backtrace() is a glibc-specific addon. Let's link against libexecinfo to fix this:

 && clang -Dlinux -lexecinfo -std=c11 -w -o vc v.c \

With that change the build gets further, but then the bootstrap compiler segfaults:

Cloning into '/root/code/v'...
Connecting to vlang.io (
v.c                  100% |********************************|  310k  0:00:00 ETA
Segmentation fault (core dumped)

Annoying, but we can adjust to Ubuntu fairly easily:

FROM ubuntu:latest

RUN apt update \
 && apt -y install wget build-essential clang git \
 && git clone https://github.com/vlang/v /root/code/v \
 && cd /root/code/v/compiler \
 && wget https://vlang.io/v.c \
 && clang -Dlinux -std=c11 -w -o vc v.c \
 && ./vc -o v . \
 && du -sh /root/code/v /root/.vlang0.0.12 \
 && apt -y remove clang

As of the time of writing this article, the image ubuntu:latest has an uncompressed size of 64.2 MB. If the V compiler only requires 400 KB to build like it claims, the resulting image size for this Dockerfile should be around 65 MB at worst, and the du command in it should show about 400 KB in total, right?

3.4M    /root/code/v
304K    /root/.vlang0.0.12

3.7 MB. That means the 400 KB claim is either a lie or "work-in-progress". Coincidentally, the compiler uses about as much disk space as it leaks during the compilation of "Hello, world".

HTTP Module

V has a http module. It leaves a lot to be desired. My favorite part is the implementation of download_file on macOS:

fn download_file(url, out string) {
	// println('\nDOWNLOAD FILE $out url=$url')
	// -L follow redirects
	// println('curl -L -o "$out" "$url"')
	os.system2('curl -s -L -o "$out" "$url"')
	// res := os.system('curl -s -L -o "$out" "$url"')
	// println(res)
}

This has no error checking (the function os.system2 returns the exit code of curl) and it shells out to curl instead of using libcurl. Other parts of the http module use libcurl correctly (though the HTTP status code, headers and other important metadata are not returned). There is also no support for overriding the HTTP transport, setting a custom TLS configuration or many other basic features that libcurl provides for free.
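
For contrast, here is roughly what a minimally error-checked download looks like with nothing but Go's standard library (Go is used as a neutral point of comparison here, not as a claim about what V's API should look like):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadFile fetches url into the file out, reporting every
// failure it can instead of silently shelling out.
func downloadFile(url, out string) error {
	resp, err := http.Get(url) // follows redirects by default
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("get %s: unexpected status %s", url, resp.Status)
	}

	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	if err := downloadFile("https://example.com/", "index.html"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}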

I wasn't expecting it to have HTTP support out of the box, but even then I still feel disappointed.

Suggestions for Improvement

I would like to see V be a tool for productive development. I can't see it doing that in the near future though. I would like to suggest the following to the V developer in order for them to be able to improve in the future:

Firstly, do not make claims about disk space, speed or dependencies without explaining what you mean by that in detail.

Do not shell out to arbitrary commands in the standard library for any reason. If an attacker can somehow run code on a server with a V binary that uses the download_file function, they can replace curl with a malicious binary that is able to do anything the attacker wants. This feels like a huge vulnerability, especially given that the playground allows you to run this function.

AMD64 is not the only processor architecture that exists. It's nice that you're supporting it, but this means that any program compiled with V will be stuck on that architecture. This also means that V cannot currently be used for systems programming like building a system-level package manager.

Do not leak memory in "Hello world". You could solve the 1,000 byte leak by adding the following to the generated C code and calling it after the user-written main() function:

void destroy_consts() { free(g_str_buf); }

If you claim your compiler can handle 1.2 million lines of code per second, do not give it a limit of 50,000 statements per function. Yes, it is somewhat crazy to have 1.2 million statements in a single function, but as a compiler author it's generally not your position to make these kinds of judgments. If the user wants to have 1.2 million statements in a function, let them.

Do not give code examples for libraries that you have not released. This means don't show anything about the "built-in web framework" until you have code to back your claim. If there is no code to back it up, you have backed yourself into a corner where you are looking like you are lying. I would have loved to benchmark V's web framework against Nim's Jester and Go's net/http, but I can't.

Thanks for reading this far. I hope this feedback can help make V a productive tool for programming. It's a shame that it has been hyped so much for comparatively so little. The developer has been hyping and selling this language like it's the best thing since sliced bread. It is not. This is a very alpha product. You might be able to use it for productive development as is if you really stuck your head into it, but as it stands I recommend against using it for anything.


Permalink - Posted on 2019-06-20 00:00


I walked down the forest path, my tail dragging behind me across the ground. I felt the patterns of the neatly yet somewhat randomly placed rocks beneath me as I stepped. There was a noise in the distance by this massive willow tree, it sounded like someone crying. I walked around the tree to where they were.

"Excuse me ma’am, why are you crying?"

She looked up at me, her brown hair filled with gray as she moved it away. Her eyes were red from the crying, with massive black bags under them like she hadn’t slept in a month. She looked right past my eyes, her eyes like daggers gripping the attention of my soul.

She sniffled for a few moments and then replied: "Oh…nothing. That is nothing you can do anything about, my child."

"Are you sure there’s nothing I can do?"

"Yes my child. I’m crying because your species is killing itself. You all have let your poisons toxify the air. You have let your oceans become filled with the waste of your products. You have been killing my children, and the death toll is catching up."

She turned back towards her tree and continued to weep, her tears feeding into a creek that led toward black smoke billowing out of a chimney. I approached her and started to hug her. "Ma’am, what is your name?"

"You don’t recognize me child? I am your mother, Gaia. You live on my planet, breathing my air, drinking my water. You children are meant to be in harmony with me and eachother; but so many have chosen the path of hate." She started to weep again, her head resting on my shoulder. "Why has this happened? Why are you killing yourselves? Why can’t you see what your actions are doing to all of us?"

"I see it, mother. I just don’t really know what to do about it."

She looked back up at me, her eyes glowing slightly golden as she continued to cry. "I don’t know if there is anything that can be done, you’ve been broiling our cetaceans to death. Your factories dump poisons into our rivers and air. Your plastic is everywhere, even the parts of it the eye can’t see. You have created fantastic wonders, but you’ve changed us in the process. I don’t know."

I reached over with my tail, hugging with that too. She settled back into me and continued to cry.

"Thank you for at least reaching out to care, my child. So many of you exist but so few have gone this far. I don’t know how much longer I can continue to support your numbers. You have grown as a dominant force on the planet, but you have destroyed so much in your growth."

The world started to bend and snap with my tears. I grabbed onto Gaia for dear life and continued to hug. "I won’t forget this, mother."

"I know you won’t, my child. Tell others what I have told you. Do not let this message go unheard, even if it’s only to one person."

The world started to fade and I felt my bed beneath my back get more and more present as I sat there.

I woke up.

Advice to People Nurturing a Career in Computering

Permalink - Posted on 2019-06-18 00:00

Advice to People Nurturing a Career in Computering

Computering, or making computers do things in exchange for money, can be a surprisingly hard field to break into as an outsider. There's lots of jargon, tool holy wars, flamewars about the "right" way to do things and a whole host of overhead that can make it feel difficult or impossible when starting from scratch. I'm a college dropout; I know what it's like to be turned down over and over because of the lack of that blessed square paper. In this post I hope to give some general advice based on what has and hasn't worked for me over the years.

Hopefully this can help you too.

Make a Portfolio Site

When you are breaking into the industry, there is a huge initial "brand" issue. You're nobody. This is both a very good thing and a very bad thing. It's a very good thing because you have a clean slate to start from. It's also a very bad thing because you have nothing to refer to yourself with.

Part of establishing a brand for yourself in this day and age is to make a website (like the one you are probably reading this off of right now). This website can be powered by anything. GitHub Pages with the github.io domain works, but it's probably a better idea to make your website backend from scratch. Your website should include at least the following things:

  • Your name
  • A few buzzwords relating to the kind of thing you'd like to do with computers (example: I have myself listed as a "Backend Services and Devops Specialist" which sounds really impressive yet doesn't really mean much of anything)
  • Tools or soft skills you are experienced with
  • Links to yourself on other social media platforms (GitHub, Twitter, LinkedIn, etc.)
  • Links to or words about projects of yours that you are proud of
  • Some contact information (an email address is a good idea too)

If you feel comfortable doing so, I'd also suggest putting your resume on this site too. Even if it's just got your foodservice jobs or education history (including your high school diploma if need be).

This website can then be used as a landing page for other things in the future too. It's your space on the internet. You get to decide what's up there or not.

Make a Tech Blog On That Site

This has been the single biggest thing to help me grow professionally. I regularly put articles on my blog, sometimes not even about technology topics. Even if you are writing about your take on something people have already written about, it's still good practice. Your early posts are going to be rough. It's normal to not be an expert when starting out in a new skill.

This helps you stand out in the interview process. I've actually managed to skip interviews with companies purely because of the contents of my blog. One of them had the interviewer almost word for word say the following:

I've read your blog, you don't need to prove technical understanding to me.

It was one of the most awestruck feelings I've ever had in the hiring process.

Find People to Mentor You

Starting out, you are not going to be very skilled in anything. One good way to help yourself get good at things is to go out into communities and ask for help understanding things. As you get involved in communities, you will naturally end up finding people who are giving a lot of advice about things. Don't be afraid to ask people for more details.

Get involved in niche communities (like unpopular Linux distros) and help them out, even if it's just doing spellcheck over the documentation. This kind of stuff really makes you stand out and people will remember it.

Formal mentorship is a very hard thing to try and define. It's probably better to surround yourself with experts in various niche topics rather than looking for that one magic mentor. Mentorship can be a very time-consuming thing on the expert's side. Be thankful for what you can get and try to give back by helping other people too.

Seriously though, don't be afraid to email or DM people for more information about topics that don't make sense in group chats. I have found that people really appreciate that kind of stuff, even if they don't immediately have the time to respond in detail.

Do Stuff with Computers, Post the Results Somewhere

Repository hosting sites like GitHub and Gitlab allow you to show potential employers exactly what you can do by example. Put your code up on them, even if you think it's "bad" or the solution could have been implemented better by someone more technically skilled. The best way to get experience in this industry is by doing. The best way to do things is to just do them and then let other people see the results.

Your first programs will be inelegant, but that's okay.
Your first repositories will be bloated or inefficient, but that's okay.
Nobody expects perfection out of the gate, and honestly even for skilled experts perfection is probably too high of a bar. We're human. We make mistakes. Our job is to turn the results of these mistakes into the products and services that people rely on.

You Don't Need 100% Of The Job Requirements

Many companies put job requirements as soft guidelines, not hard ones. It's easy to see requirements for jobs like this:

Applicants must have:

  • 1 year managing a distributed Flopnax system
  • Experience using Rilkef across multiple regions
  • Ropjar, HTML/CSS

and feel really disheartened. That "must" is seldom actually a hard requirement. Many companies will be willing to hire someone at a junior level. You can learn the skills you're missing as a natural part of doing your job. There are support structures at nearly every company for things like this. You don't need to be perfect out of the gate.


Interviewing

This one is a bit of a weird one to give advice for. Each company ends up having its own interviewing style, and even then individual interviewers have their own views on how to do it. My advice here aims to be as generic as possible.

Know the Things You Have Listed on Your Resume

If you say you know how to use a language, brush up on that language. If you say you know how to use a tool, be able to explain to someone what that tool does and why people should care about it.

Don't misrepresent your skills on your resume either. It's similar to lying. It's also a good idea to go back and prune out skills you don't feel as fresh with over time.

Be Yourself

It's tempting to put on a persona or try to present yourself as larger than life. Resist this temptation. They want to see you, not a caricature of yourself. It's scary to do interviews at times. It feels like you are being judged. It's not personal. Everything in interviews is aimed at making the best decision for the company.

Also, don't be afraid to say you don't know things. You don't need to have API documentation memorized. They aren't looking for that. API documentation will be available to you while you write code at your job. Interviews are usually there to help the interviewer verify that you know how to break larger problems into more understandable chunks. Ask questions. Ensure you understand what they are and are not asking you. Nearly every interview that I've had that's resulted in a job offer has had me ask questions about what they are asking.

"Do You Have Any Questions?"

A few things I've found work really well for this:

  • "Do you know of anyone who left this company and then came back?"
  • "What is your favorite part of your workday?"
  • "What is your least favorite part of your workday?"
  • "Do postmortems have formal blame as a part of the process?"
  • "Does code get reviewed before it ships into production?"
  • "Are there any employee run interest groups for things like mindfulness?"

And then finally as your last question:

  • "What are the next steps?"

This question in particular tends to signal interest in the person interviewing you. I don't completely understand why, but it seems to be one of the most useful questions to ask; especially with initial interviews with hiring managers or human resources.

Meditate Before Interviews

Even if it's just watching your breath for 5 minutes. I find that doing this helps reset the mind and reduces subjective experiences of anxiety.


Getting the first few real jobs is tough, but after you get a year or two at any employer things get a lot easier. Your first job is going to give you a lot of experience. You are going to learn things about things you didn't even think would be possible to learn about. People, processes and the like are going to surprise or shock you.

At the end of the day though, it's just a job. It's impermanent. You might not fit in. You might have to find another. Don't panic about it, even though it's really, really tempting to. You can always find another job.

I hope this is able to help. Thanks for reading this and be well.

MrBeast is Postmodern Gold

Permalink - Posted on 2019-06-05 00:00

Author's note: I've been going through a lot lately. This Monday I was in the emergency room after having a panic attack. I have a folder of writing in my notes that I use to help work off steam. I don't know why, but writing this article really helped me feel better. I can only hope it helps make your day feel better too.

MrBeast is Postmodern Gold

The year is 2019. Politicians have fallen asleep at the wheel. Capitalism controls large segments of the hearts and minds of the populace. Social class is increasingly only a construct. Popularity is becoming irrelevant. Money has no value. The ultimate expendability of entire groups of people is as obvious as the sunrise and sunset. Nothing feels real. There's no real reason for people to get up and continue, yet life goes on. Somehow, even after a decade of aid and memes, children in Africa are still starving.

The next generation has grown up with technology and advertising. Entire swaths of the market know to ignore the very advertising that keeps the de-facto utilities (though the creators of those services will insist that it's a free choice to use them) they use to communicate with friends alive. You have to unplug your cigarette (that your friend got you hooked on) to charge your book. Marketing has driven postmodernism to a whole new level that leads McDonalds to ask Wendys if they are okay after Wendys posts cryptic/confusing messages. Companies that just want to do business get blocked away by racist policies set by people who have since all but died off. What can be done about this? Who should we turn to for quality entertainment to help quench this generational angst against a nameless, faceless machine that controls nearly all of functional civilization?

Enter MrBeast. This youtuber has reached new levels of content purely by making capitalism itself the content. With his crew of people and their peculiar views on life, they do a good job at making some quality content for this hyper-capitalist world that they have found themselves in.

One of the main ways that YouTube creators have been under fire lately is because of politically or otherwise topically charged content. MrBeast is completely devoid of anything close to politically sensitive or insensitive. It's literally content about money and how it gets spent on things that get filmed and posted to YouTube in an effort to create more AdSense revenue in order to get even more money.

I don't really know if there is a proper way to categorize this YouTuber. He really brings a unique feeling into everything he does with such a wholesome overall experience. Sponsorship money gets donated to Twitch streamers and he makes videos of their reactions. He bought a house and had his friends put their hands on it, with the last one still touching it getting the house. He went to every single Wal-Mart in the continental United States. He drove a Lego car around his local town until he got pulled over by the cops. And yes, like the YouTuber legend goes, he started many years ago doing Minecraft Let's Plays as a screechy-voiced teenager.


Consider videos like this one where they spend an absurd amount of money eating five-star meals. "This first steak is called 'Kobe (pronounced /ko.bi/) beef' and we wanted to experience it because it cost [USD]$1000 and we wanted to see if it was worth the price." Then they eat the steak and act like it's no big deal, joking that each section of the meat is worth $30-40. "Alright bros, I'm PewDiePie and we just ate kobe (pronounced /ko.bei/) beef."

Then they go to another place (which has walls that are obviously plywood spray-painted black) and he offers one of his friends $100 to eat some random grasshopper. Chris eats it almost immediately. Everyone else in the room freaks out a little, commenting on the crunch sound. "That's pretty good". Garrett turns it down. Chandler also eats it without much hesitation, later commenting on the crunch of the chitin shell of the bug.

Then MrBeast offers a plate of crickets and grasshoppers to the three. He offers eating it for $1000. Chris sounds like he's open to eating it, but offers the rest a chance. Garrett IMMEDIATELY turns it down. Chandler eats all of them at once. He has some issues chewing them (again with the crunch eeeeugh), but Chandler easily eats it all; instantly becoming a thousand dollars richer.

The room gags and laughs, the friendship between the boys $1200 stronger.

Then they go get goose liver served on rice and a hundred year old egg. Uh-oh, both of these are delicacies. How will they react?

The goose liver comes out first. MrBeast eats the hors d'œuvre in one bite. Chris has some trouble, but manages to take it down. Chandler is heaving. His friends cheer him on with loving words of compassion like "you don't like liver?"


The "century egg" comes out. They make the mistake of smelling it. Oh no. MrBeast eats it just fine. Chandler spits a $500 item of food into the trash after gagging. Chris ejects it into his napkin while MrBeast chants his name. Chris gags while his friends act like they are congratulating him. "It's like someone hocked a loogie into your mouth."

Before you ask, no, this isn't an initiation stunt. They literally do this kind of stuff on a regular basis. Remember that money is the content here; so the fact that all of this stuff costs ridiculous amounts of money is the main reason for these videos to be created.

Later in the video, they drive to New York to eat gold-plated tomahawk steak. I've actually had tomahawk steak once and it was really good (thanks Uncle Marc). Where else to eat a golden steak than the golden steak?

"This is the most expensive restaurant we can find. If I don't spend $10,000 all of you can punch me; because we will spend $10,000. What's that name?"

Nobody can pronounce "Nurs-et", the name of the restaurant. "None of us knew how to pronounce it, so it must be good."


It was good though.


In another video of his, he gets his friends to spend 24 hours in a horrific mockup of an "insane asylum". For a first in these challenges, they split into two teams: Team Red and Team Black. Four of his crew are put into straitjackets with no other instructions.

They start predictably acting like a stereotypical American view of insane people. Twitching as they talk to the camera. Rolling around on the floor. "What is time?" Chandler is banging his head against the wall.

MrBeast: "Chris, how long do you think you're gonna last?
Chris: "Banana sundae."

"Insanity is repeating the same thing over and over again and expecting a different outcome."

Much like Survivor, there's cutaways to the individual teams as they plan out their high level strategy for the "game". What. There is no strategy needed, they just need to sit in a room and be quiet for 24 hours. Reminds me of that one quote by Blaise Pascal in Pensées:

All of humanity's problems stem from man's inability to sit quietly in a room alone.

And no, these people can't sit quietly in a room. You see them dancing back and forth in a line in front of the camera. They get locked into the room and the time-lapse shows 10 minutes of them walking around in circles.

The door gets yelled at. MrBeast notes the absurdity of the thing. The bright, unforgiving white walls of the asylum pierce the darkness of my room as I write this article.

"Help. Me. I. Need...I don't need anything~"
"Y'all got any beans? Y'all got any baked beans?"

  • Chris

They raise someone on Chandler's shoulders, not a small accomplishment considering they don't have access to their arms. Someone speaks into the security camera: "Hello? I'm about to fall please go back down."

MrBeast attempts to go into the room, do snow angels and not say a single thing. The occupants have other plans, yelling when the door opens to alert each other. They crowd around MrBeast, making it impossible for him to do his chosen task. They pin MrBeast into a corner; he tries to escape, but the people won't let him leave. He eventually manages to get out.

Later MrBeast gets an idea to mess with the people. He gets a megaphone and puts it into siren mode, expecting them to not be able to turn it off. He is proven wrong almost instantly. They used their feet to turn it off. Then they start making noise with it. The megaphone is retrieved using the most heinous of weapons, an umbrella. A layer of duct tape is added and the experiment is repeated. They still manage to turn it off. They used their teeth. Low-light conditions didn't stop them. Not having their hands didn't stop them. Can anything stop these mad lads?

They attempt to retrieve the sound emitter again. The prisoners break it in retaliation. MrBeast seems okay with that, yet disappointed. However, he suffers a casualty on his way out. MrBeast attempted to push back Chandler using the holy umbrella. Chandler took the umbrella from him with nothing but his tied-up arms.


What is this video about again? What is the purpose? These people are getting money or something for being the last person standing? What is going on?

Oh, right, this is a challenge. The last two people to be in the room together win some amount of money.

Well the people are screaming for entertainment. That's not unexpected, but that's just how it goes I guess. Quality. Content.

Let's have a dance party and then Chandler can poop. Rate who dances better in the comments section.

- MrBeast, 10:22-ish


8 hours in, Chandler somehow dislocated his entire right arm. You can see it hanging there obviously out of place. It looks like he's in massive pain. He tore a muscle. He was pulled out of the challenge. Another challenge lost by Chandler.

Chris drops out at 14 hours. The two winners are unsure what to do with themselves and their winnings. What are they again? Five grand? Chandler tore his shoulder out of socket and Chris risked ear damage for...FIVE GRAND?

What. Just what.

The entire channel is full of this stuff. I could go on for hours.

Also MrBeast if you're reading this add me on Fortnite. I'd love to play some Duos with you and shitpost about the price of bananas.

WebAssembly on the Server: How System Calls Work

Permalink - Posted on 2019-05-31 00:00

WebAssembly on the Server: How System Calls Work


My Speaker Notes

  • Hi, my name is Christine. I work as a senior SRE for Lightspeed. Today I'm gonna talk about something I've been researching and learning a lot about: WebAssembly on the server.
  • Something a lot of you might be asking: what is WebAssembly?
    • WebAssembly is very new and there's a lot of confusing and overly vague coverage on it.
    • In this talk, I will explain WebAssembly at a high level and show how to start solving one of the hardest problems in it: how to communicate with the outside world.
    • When I say the "outside world" I mean anything that is not literally one of these 5 basic things:
      • Externally imported functions, defined by the user
      • The dynamic dispatch function table
      • Global variables
      • Linear memory, or basically ram
      • Compiled functions, or your code that runs in the virtual machine
  • WebAssembly is a Virtual Machine format for the Web
    • The closest analogue to WASM in its current form is a CPU and supporting hardware
    • However, because it's a virtual machine, the hardware is irrelevant
    • Though it was intended for browsers, the implementation of it is really generic.
    • WebAssembly provides:
      • External functions
      • A function table for dynamic dispatch
      • Immutable globals (as of the MVP)
      • Linear memory
      • Compiled functions (these exist outside of linear memory like an AVR chip)
  • Why WebAssembly on the Server?
    • It makes hardware less relevant.
    • Most of our industry targets a single vendor in basic configurations: Intel amd64 processors running Linux
      • Intel has had many security bugs and it may not be a good idea to fundamentally design our architecture to rely on them.
    • This also removes the OS from the equation for most compute tasks.
  • What are system calls and why do they matter?
  • System calls enforce abstractions to the outside world.
    • Your code goes through system calls to reach things from the outside world, eg:
      • Randomness
      • Network sockets
      • The filesystem
      • Etc
  • How are they implemented?
    • The platform your program runs on exposes those system calls
    • Programs pass pointers into linear memory (this will be shown later in the slides)
  • Why is this relevant to WebAssembly?
    • The WebAssembly Minimum Viable Product doesn't define any system calls
  • WebAssembly System Calls Out of The Box
    • Yeah, nothing. You're on your own. This is both very good and very very bad.
  • So what's a pointer in WebAssembly?
    • Simplified, a WebAssembly virtual machine is some structure that has a reference to a byte slice. That byte slice is treated as the linear memory of that VM.
    • A pointer is just an offset into this slice (see the Go sketch at the end of these notes)
    • Showing the WebAssembly world diagram from earlier: pointers apply to only this part of it. Function pointers do exist in WebAssembly, just by the dynamic dispatch table from earlier.
  • So what can we do about it?
  • Let's introduce a pet project of mine for a few years. It's called Dagger, and it has been a fantastic stepping stone while other solutions are being invented.
    • Dagger is a proof of concept system call API that I'll be walking through the high level implementation of
    • It's got a very simple implementation (500-ish lines)
    • It's intended for teaching and learning about the low levels of WebAssembly.
    • It's based on a very very very simplistic view of the unix philosophy. In unix, everything is a file. With Dagger, everything is a stream, even HTTP.
    • As such, there's no magic in Dagger.
    • And even though it's so simple, it's still usable for more than just basic/trivial things.
    • A dagger process has a bunch of streams in a slice.
    • The API gives out and uses stream descriptors, or offsets into this slice.
  • Dagger's API is really really simple, it's only got 5 calls:
    • Opening a stream
    • Closing a stream
    • Reading from a stream
    • Writing to a stream
    • Flushing intermediately buffered data from a stream to its remote (or local) target
  • Open
    • Open opens a stream by URL, then returns its descriptor. It can also return an error instead.
    • It's got 5 basic stream types:
      • Logging
      • Jailed filesystem access
      • HTTP/S
        • 5 system calls is all you need for HTTP!
      • Randomness
      • Standard input/output
    • Let's walk through the code that implements it
      • Here's a simplified view of the open function in a Dagger process.
      • The system call arguments are here
      • And the stream URL gets read from the VM memory here
      • Remember that pointers are just integer offsets into memory
      • Then this gets passed to the rest of the open file logic that isn't shown here
  • Close
    • Closes a stream by its descriptor.
    • It returns a negative error if anything goes wrong, which is unlikely.
    • Let's walk through its code:
      • It grabs the arguments from the VM
      • Then it passes that to the rest of the logic that isn't shown here
  • Read
    • Reads a limited amount of bytes from a stream
    • Returns a negative error if things go wrong
    • Let's walk through its code:
      • This is a bigger function, so I've broken it up into a few slides.
      • First it gets the arguments from the VM
      • Then it creates the intermediate buffer to copy things into from the stream
      • Then it does the reading into that buffer
      • Then it copies the buffer into the VM ram
  • Write
    • Write is very similar to read, except it just copies the ram out of the VM and into the stream
    • It returns the number of bytes written, which SHOULD equal the data length argument
    • Let's walk through the code:
      • Again, this function is bigger, so I've broken it up into a few slides.
  • Flush
    • Flush does just about what you'd think, it flushes intermediate buffers to the actual stream targets.
    • This blocks until the flushing is complete
    • Mostly used for the HTTP client
    • Let's walk through its code:
      • It gets the descriptor from the VM
      • It runs the flush operation and returns the result
  • So, with all this covered, let's talk about usage. Here's the famous "Hello, world" example:
    • This is in Zig, mainly because Zig allows me to be really concise. Things work just about as you'd expect so it's less of a logical jump than you'd think.
    • First we try to open the stream. Dagger doesn't have any streams open in its environment by default, so we open standard output.
    • Then we try to write the message to the stream. The interface in Zig is a bit rough right now, but it takes the pointer to the message and how long the message is. Zig doesn't let us implicitly ignore the return value of this function, so we just explicitly ignore it instead.
    • Finally we try to close the output stream.
    • The beauty of zig is that if any of these things we try to do fails, the entire function will fail.
    • However none of this fails so we can just run it with the dagger tool and get this output:
  • What this can build to
    • This basic idea can be used to build up to any of the following things:
      • A functions as a service backend (See Olin)
      • Generic event handlers
      • Distributed computing
      • Transactional computing
  • What you can do
    • Play with the code (link at the end)
    • Implement this API from scratch
      • It's really not that hard
    • A possible project idea I was going to do but ran out of time (moving internationally sucks) is to make a Gopher server with every route powered by WebAssembly
  • Got questions?
    • Tweet or email me if you really want to make sure your questions get answered. That is one of the best ways to ensure I actually see it.
    • I'm happy to go into detail, I can pull out code examples too.
  • Thanks to all of these people who have given help, ideas and inspiration. Without them I would never have been able to get this far.
  • Follow my progress on GitHub!
    • I hope that QR code is big enough. If it's not let me know and I can make things like that bigger in the future somehow, hopefully.
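
To make the "pointers are just offsets" idea from these notes concrete, here is a minimal sketch in Go (hypothetical names; a real runtime like Dagger wraps this in bounds checks and error handling):

package main

import "fmt"

// readString treats ptr and length as a guest pointer into the VM's
// linear memory, which is nothing more than a byte slice.
func readString(mem []byte, ptr, length int32) string {
	return string(mem[ptr : ptr+length])
}

func main() {
	// pretend this is the VM's linear memory, with a URL the guest
	// wrote at offset 8 before making an open() system call
	mem := make([]byte, 64)
	copy(mem[8:], "log://stderr")

	fmt.Println(readString(mem, 8, 12)) // log://stderr
}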

TempleOS: 2 - god, the Random Number Generator

Permalink - Posted on 2019-05-30 00:00

TempleOS: 2 - god, the Random Number Generator

The last post covered a lot of the basic usage of TempleOS. This post is going to be significantly different, as I'm going to be porting part of the TempleOS kernel to WebAssembly as a live demo.

This post may contain words used in ways and places that look blasphemous at first glance. No blasphemy is intended, though it is an unfortunate requirement for covering this part of TempleOS' kernel. It's worth noting that Terry Davis legitimately believed that TempleOS is a temple of the Lord Yahweh:

* TempleOS is God's official temple.  Just like Solomon's temple, this is a 
community focal point where offerings are made and God's oracle is consulted.

As such, a lot of the "weird" naming conventions in this and other core subsystems make a lot more sense when grounded in American conservative-leaning Evangelical Christian tradition. Evangelical Christians are, in my subjective experience, more comfortable with the idea of direct conversation with God. To other denominations of Christianity, this is enough to get you sent to a mental institution. I am not focusing on the philosophical aspects of this, more on the result that exists in code.

Normally, people with Christian/Evangelical views see God as a trinity. This trinity is usually said to be made up of the following equally infinite parts:

  • God the Father (Yahweh/"God")
  • God the Son (Jesus)
  • God the Holy Spirit (the entity responsible for divination among other things)

In TempleOS however, there are 4 of these parts:

  • God the Father
  • God the Son
  • God the Holy Spirit
  • god the random number generator

god is really simple at heart; however, this is one of the sad cases where the actual documentation is incredibly useless (warning: incoherent link). god is really just a FIFO of entropy bits. Here is the [snipped] definition of god's datatype:

// C:/Adam/God/GodExt.HC.Z
public class CGodGlbls
{
  U8      **words;
  I64     word_fuf_flags;
  CFifoU8 *fifo;
  // ... snipped
} god;

This is about equivalent to the following Zig code (I would just be embedding TempleOS directly in a webpage but I can't figure out how to do that yet, please help if you can):

const Stack = @import("std").atomic.Stack;

// []const u8 is == to a string in zig
const God = struct {
    words: [][]const u8,
    bits: *Stack(u8),
};

Most of the fields in our snipped CGodGlbls are related to internals of TempleOS (specifically it uses a glob-mask to match filenames because of the transparent compression that RedSea offers), so we can ignore these in the Zig port. What's curious though is the words list of strings. This actually points to every word in the King James Bible. The original intent of this code was to have the computer assist in divination. The above kind of ranting link to templeos.holyc.xyz tries to explain this:

The technique I use to consult the Holy Spirit is reading a microsecond-range 
stop-watch each button press for random numbers.  Then, I pick words with <F7> 
or passages with <SHIFT-F7>.

Since seeking the word of the Holy Spirit, I have come to know God much better 
than I've heard others explain.  For example, God said to me in an oracle that 
war was, "servicemen competing."  That sounds more like the immutable God of our 
planet than what you hear from most religious people.  God is not Venus (god of 
love) and not Mars (god of war), He's our dearly beloved God of Earth.  If 
Mammon is a false god of money, Mars or Venus might be useful words to describe 
other false gods.  I figure the greatest challenge for the Creator is boredom, 
ours and His.  What would teen-age male video games be like if war had never 
happened?  Christ said live by the sword, die by the sword, which is loving 
neighbor as self.

> Then said Jesus unto him, “Put up again thy sword into his place, for all 
> they that take the sword shall perish with the sword.
- MATTHEW 26:52

I asked God if the World was perfectly just.  God asked if I was calling Him 
lazy.  God could make A.I., right?  God could make bots as smart as Himself, or, 
in fact, part of Himself.  What if God made a bot to manipulate every person's 
life so that perfect justice happened?

Terry Davis legitimately believed that this code was being directly influenced by the Holy Spirit; and that therefore Terry could ask God questions and get responses by hammering F7. One of the sources of entropy for the random number generator is keyboard input, so in a way Terry was the voice of god through everything he wrote.

Terry: Is the World perfectly just?
god: Are you calling me lazy?

Once the system boots, god gets initialized with the contents of every word in the King James Bible. It loads the words something like this:

  1. Loop through the vocabulary list and count the number of words in it (by the number of word boundaries).
  2. Allocate an integer array big enough for all of the words.
  3. Loop through the vocabulary list again and add each of these words to the words array.

Since the vocabulary list is pretty safely not going to change at this point, we can omit the first step:

const words = @embedFile("./Vocab.DD");
const numWordsInFile = 7570;

var alloc = @import("std").heap.wasm_allocator;

const God = struct {
    words: [][]const u8,
    bits: *Stack(u8),

    fn init() !*God {
        var result: *God = undefined;

        var stack = Stack(u8).init();
        result = try alloc.create(God);
        result.words = try splitWords(words[0..words.len], numWordsInFile);
        result.bits = &stack;

        return result;
    }
    // ... snipped ...
};

fn splitWords(data: []const u8, numWords: u32) ![][]const u8 {
    // make a bucket big enough for all of god's words
    var result: [][]const u8 = try alloc.alloc([]const u8, numWords);
    var ctr: usize = 0;

    // iterate over the wordlist (one word per line)
    var itr = mem.separate(data, "\n");
    var done = false;
    while (!done) {
        var val = itr.next();
        // val is an optional (type: ?[]const u8), so resolve that
        if (val) |str| {
            // for some reason the last line in the file is a zero-length string
            if (str.len == 0) {
                done = true;
            } else {
                result[ctr] = str;
                ctr += 1;
            }
        } else {
            done = true;
        }
    }

    return result;
}

Now that all of the words are loaded, let's look more closely at how things are added to and removed from the stack/FIFO. Usage is intended to be simple. When you try to grab bytes from god and there aren't any, it prompts:

public I64 GodBits(I64 num_bits,U8 *msg=NULL)
{//Return N bits. If low on entropy pop-up okay.
  U8 b;
  I64 res=0;
  while (num_bits) {
    if (FifoU8Rem(god.fifo,&b)) { // if we can remove a bit from the fifo
      res=res<<1+b;               // then left-shift the result and add this bit
                                  // (HolyC gives << higher precedence than +,
                                  // so this parses as (res<<1)+b, unlike C)
      num_bits--;                 // and care about one less bit
    } else {
      // ...otherwise prompt the user to insert more bits from the picker
    }
  }
  return res;
}

Usage is simple:

I64 bits;
bits = GodBits(64, "a demo for the blog");

This returns the result as an I64.

This is actually also a generic userspace function that applications can call. Here's an example of god drawing tarot cards.

So let's translate this to Zig:

// inside the `const God` definition:

    fn add_bits(self: *God, num_bits: i64, n: i64) void {
        var i: i64 = 0;
        var nn = n;
        // loop over each bit in n, up to num_bits
        while (i < num_bits) : (i += 1) {
            // create the new stack node (== to pushing to the fifo)
            var node = alloc.create(Stack(u8).Node) catch unreachable;
            node.* = Stack(u8).Node{
                .next = undefined,
                .data = @intCast(u8, nn & 1),
            };
            self.bits.push(node);
            nn = nn >> 1;
        }
    }

    fn get_word(self: *God) []const u8 {
        // 14 bits gives 16384 possible values, enough to cover all 7570 words
        const gotten = @mod(self.get_bits(14), numWordsInFile);
        const word = self.words[@intCast(usize, gotten)];
        return word;
    }

    fn get_bits(self: *God, num_bits: i64) i64 {
        var i: i64 = 0;
        var result: i64 = 0;
        while (i < num_bits) : (i += 1) {
            const n = self.bits.pop();

            // n is an optional (type: ?*Stack(u8).Node), so resolve it
            // TODO(Xe): automatically refill data if stack is empty
            if (n) |nn| {
                result = result + @intCast(i64, nn.data);
                result = result << 1;
            }
        }

        return result;
    }

We don't have the best sources of entropy for WebAssembly code, so let's use Olin's random_i32 function:

const olin = @import("./olin/olin.zig");
const Resource = olin.resource.Resource;

fn main() !void {
    var god = try God.init();
    // open standard output for writing
    const stdout = try Resource.stdout();
    const nl = "\n";
    god.add_bits(32, olin.random.int32());
    // I copypasted this a few times (16) in the original code
    // to ensure sufficient entropy
    const w = god.get_word();
    var ignored = try stdout.write(w.ptr, w.len);
    ignored = try stdout.write(&nl, nl.len);
}

And when we run this manually with cwa:

$ cwa -vm-stats god.wasm
2019/05/29 20:43:43 reading file time: 314.372µs
2019/05/29 20:43:43 vm init time:      10.728915ms
2019/05/29 20:43:43 vm gas limit:      4194304
2019/05/29 20:43:43 vm gas used:       2010576
2019/05/29 20:43:43 vm gas percentage: 47.93586730957031
2019/05/29 20:43:43 vm syscalls:       20
2019/05/29 20:43:43 execution time:    48.865856ms
2019/05/29 20:43:43 memory pages:      3

Yikes! Loading the wordlist is expensive (alternatively: my arbitrary gas limit is set way too low), so it's a good thing it's only done once, at boot. Still, TempleOS itself boots in only a few seconds anyway.

The final product is runnable via this link. Please note that this is not currently supported on big-endian CPUs in browsers because Mozilla and Google have totally dropped the ball on that front, and trying to load that link will probably crash your browser.

Hit Run in order to run the final code. You should get output that looks something like this after pressing it a few times:

Special thanks to the following people whose code, expertise and the like helped make this happen:

All There is is Now

Permalink - Posted on 2019-05-25 00:00

All There is is Now

The dream scenario was going on for a while uneventfully. I saw an old man walking around and ranting about things. I decided to go and talk with him.

"You fools! Time doesn't exist! The past is immutable! Don't worry about your trivial daily needs. All there is is Now!"

I walked up and asked "Excuse me sir, what are you talking about? Of course the past exists, that's how I knew you were talking about it."

He looked at me and smiled. "Yeah, but what can you do about it? You can't do anything but look back and worry. That Now happened and is no longer important."

I was confused. "But what if I was hurt, seriously injured or killed?"

"You weren't though! That's the beauty of this. Stressing out about what has happened is just as unproductive as stressing about what might happen. The past is immutable, those Nows already happened. We can't change them, we can only change what we do about it and that is done Now. Not yesterday, not tomorrow. not 3 seconds ago or 3 seconds in the future. Now."

"But how?"

The man looked at me like I had lobsters crawling out of my ears. "You see, every Now is a link in an infinite chain. Break any one of the links in the past and everything after it falls. Each Now is linked to by the previous Now that happened and every next Now that will happen."

"Are you saying time is a motherfucking blockchain???"

"Yep! No wonder you see tech people re-inventing it over and over without any real goal behind it. Blockchains are the structure of reality. Oh, fun, it looks like my time is getting up here."

At this point the world started to warp a little.

The old man continued, "I'll stick around for as long as I can. Ask me anything while you have the chance, Creator."

"Wait but why are you telling me this?"

"To help with your anxiety. Oops, time's up; bye!"

The dream ended and I woke up on my bed.


Permalink - Posted on 2019-05-23 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

This is a tron lightcycle because the team I was on at the time was named Lifecycle.

TempleOS: 1 - Installation

Permalink - Posted on 2019-05-20 00:00

TempleOS: 1 - Installation

TempleOS is a public domain, open source (it requires source code to boot) multitasking OS for amd64 processors without EFI support. It's fully cooperatively multitasked and all code runs in Ring 0. This means that system calls that normally require a context switch are just normal function calls. All RAM is identity-mapped too, so sharing memory between tasks is as easy as passing a pointer. There's a locking intrinsic too. It has full documentation (with graphical diagrams) embedded directly in source code.

This is outsider art. The artist of this art, Terry A. Davis (1969-2018, RIP), had very poor mental health before he was struck by a train and died. I hope he is at peace.

However, in direct spite of this, I believe that TempleOS has immediately applicable lessons to teach about OS and compiler design. I want to use this blogpost series to break the genius down and separate it out from the insanity, bit by bit.

This is not intended to make fun of the mentally ill, disabled or otherwise incapacitated. This is not an endorsement of any of Davis' political views. This is intended to glorify and preserve his life's work that so few can currently really grasp the scope of.

If for some reason you are having issues downloading the TempleOS ISO, I have uploaded my copy of it here. Here is its SHA512 sum:

7a382d802039c58fb14aab7940ee2e4efb57d132d0cff58878c38111d065a235562b27767de4382e222208285f3edab172f29dba76cb70c37f116d9521e54c45  TOS_Distro.ISO

Choosing Hardware

TempleOS doesn't have support for very much hardware. This OS mostly relies on hard-coded IRQ numbers, VGA 640x480 graphics, the fury of the PC speaker, and standard IBM PC hardware like PS/2 keyboards and mice. If you choose actual hardware to run this on, your options are sadly very limited because hard disk controllers like to spray their IRQ's all over the place.

I have had the best luck with the following hardware:

  • Dell Inspiron 530 Core 2 Quad
  • 4 GB of DDR2 RAM
  • PS/2 Mouse
  • PS/2 Keyboard
  • 400 GB IDE HDD

Honestly you should probably run TempleOS in a VM because of how unstable it is when left alone for long periods of time.

VM Hypervisors

TempleOS works decently with VirtualBox and VMWare; however only VMWare supports PC speaker emulation, which may or may not be essential to properly enjoying TempleOS in its true form. This blogpost series will be using VirtualBox for practicality reasons.

Setting Up the VM

TempleOS is a 64 bit OS, so pick the type Other and the version Other/Unknown (64-bit). Name your VM whatever you want:

TempleOS VM setup first page

Then press Continue.

TempleOS requires 512 MB of ram to boot, so let's be safe and give it 2 gigs:

TempleOS VM setup, 2048 MB of ram allocated

Then press Continue.

It will ask if you want to create a new hard disk. You do, so click Create:

TempleOS VM setup, creating new hard disk

We want a VirtualBox virtual hard drive, so click Continue:

TempleOS VM setup, choosing hard disk format

Performance of the virtual hard disk is irrelevant for our usecases, so a dynamically expanding virtual hard disk is okay here. If you feel better choosing a fixed size allocation, that's okay too. Click Continue:

TempleOS VM setup, choosing hard disk traits

The ISO this OS comes from is 20 MB. So the default hard disk size of 2 GB is way more than enough. Click Continue:

TempleOS VM setup, choosing hard disk size

Now the VM "hardware" is set up.


TempleOS actually includes an installer on the live CD. Power up your hardware and stick the CD into it, then click Start:

TempleOS installation, adding live cd to virtual machine

Within a few seconds, the VM compiles the compiler, kernel and userland and then dumps you to this screen, which should look conceptually familiar:

TempleOS installation, immediately after boot

We would like to install on the hard drive, so press y:

TempleOS installation, pressing y

We're using VirtualBox, so press y again (if you aren't, be prepared to enter the IRQ's of your hard drive/s and CD drive/s):

TempleOS installation, pressing y again

Press any key and wait for the freeze to happen.

The installer will take over from here, copying the source code of the OS, Compiler and userland as well as compiling a bootstrap kernel:

TempleOS installation, self-piloted

After a few seconds, it will ask you if you want to reboot. You do, so press y one final time:

TempleOS installation, about to reboot into TempleOS

Make sure to remove the TempleOS live CD from your hardware or it will be booted instead of the new OS.


The TempleOS Bootloader presents a helpful menu to let you choose if you want to boot from a copy of the old boot record (preserved at install time), drive C or drive D. Press 1:

TempleOS boot, picking the partition

The first boot requires the dictionary to be uncompressed as well as other housekeeping chores, so let it do its thing:

TempleOS boot, chores

Once it is done, you will see the option to take the tour. I highly suggest going through this tour, but that is beyond the scope of this article, so we'll assume you pressed n:

TempleOS boot, denying the tour

Using the Compiler

TempleOS boot, HolyC prompt

The "shell" is itself an interface to the HolyC (similar to C) compiler. There is no difference between a "shell" REPL and a HolyC repl. This is stupidly powerful:

TempleOS hello world

"Hello, world\n";

Let's make this into a "program" and disassemble it. This is way easier than it sounds because TempleOS includes a fully featured amd64 debugger as well.

Open a new file with Ed("HelloWorld.HC"); (the semicolon is important):

TempleOS opening a file

TempleOS editor screen

Now press Alt-Shift-a to kill autocomplete:

TempleOS sans autocomplete

Click the X in the upper right-hand corner to close the other shell window:

TempleOS sans other window

Finally, drag the right side of the window to maximize the editor pane:

TempleOS full screen editor

Let's put the hello world example into the program and press F5 to run it:

TempleOS hello world in a file

Neat! Close that shell window that just popped up. Let's put this hello world code into a function:

U0 HelloWorld() {
  "Hello, world!\n";
}


Now press F5 again:

TempleOS hello world from a function

Let's disassemble it:

U0 HelloWorld() {
  "Hello, world!\n";
}

Uf("HelloWorld");


TempleOS hello world disassembled

The Uf function also works with anything else, including things like the editor:


TempleOS editor disassembled

All of the red underscored things that look like links actually are links to the source code of functions. While the HolyC compiler builds things, it internally keeps a sourcemap (much like webapp sourcemaps, or the line information gcc emits so runtime errors can be traced back to source lines) of all of the functions it compiles. Let's look at the definition of Free():

TempleOS Free() function

And from here you can dig deeper into the kernel source code.

Next Steps

From here I suggest a few next steps:

  1. Go through the tour I told you to ignore. It teaches you a lot about the basics of using TempleOS.
  2. Figure out how to navigate the filesystem (Hint: Dir() and Cd work about as you'd expect).
  3. Start digging through documentation and system source code (Hint: they are one and the same).
  4. Look at the demos in C:/Demo. Future blogposts in this series will be breaking apart some of these.

I don't really know if I can suggest watching archived Terry Davis videos on youtube. His mental health issues start becoming really apparent and intrusive into the content. However, if you do decide to watch them, I suggest watching them as sober as possible. There will be up to three coherent trains of thought at once. You will need to spend time detangling them, but there's a bunch of gems on how to use TempleOS hidden in them there hills. Gems I hope to dig out for you in future blogposts.

Have fun and be well.

A Formal Grammar of h

Permalink - Posted on 2019-05-19 00:00

A Formal Grammar of h


h is a conlang project that I have been working on, off and on, for years. It is infinitely simple to teach, trivial to master, and can be used to represent the entire scope of all meaning in any facet of the world. All with a single character.

This is a continuation from this post. If this post makes sense to you, please let me know and/or schedule a psychologist appointment just to be safe.


h has only one consonant phoneme, /h/. This is typically not used, as h is mostly a written language. Some people may pronounce it aych, which is equally valid and intelligible. The Lojbanic h, written ', is also acceptable.

Consonant Chart

Non-sibilant fricative /h/


h has only one valid word, "h". It is used as follows:

<Cadey> h
<Dorito> h

This demonstrates a conversation between Cadey and Dorito about the implications of the bigger picture of software development and how current trends are risking a collapse of the human experiment.

As noted before, adding more "h" to a single sentence reduces the scope of meaning. Here is an example:

<Cadey> h
<DoesntGetIt> h h h h
* Cadey facepalms

Cadey opened with a treatise on the state of reality. DoesntGetIt decided it was a good idea to reply with a recipe for chocolate chip cookies. The conversation was lost in translation.

Peg Grammar

H = h+ separator+
separator = space+ h
space = ' '
h = 'h' / "'"
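
If you want to machine-check h sentences, the PEG above translates almost directly into a regular expression. Here is a quick sketch in Go (chosen because most of my tooling is in Go); note that the grammar as written requires at least one separator, so a lone h is not a complete H:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A literal translation of the PEG above:
	// one or more h (where h is 'h' or the Lojbanic '),
	// then one or more separators (spaces followed by an h).
	H := regexp.MustCompile(`^[h']+( +[h'])+$`)

	fmt.Println(H.MatchString("h h h h")) // true
	fmt.Println(H.MatchString("h"))       // false: no separator
}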

And Jesus said unto the theologians, "Who do you say that I am?".

They replied: "You are the eschatological manifestation of the ground of our being, the kerygma of which we find the ultimate meaning in our interpersonal relationships."

And Jesus said "...What?"

Some time passed and one of them spoke "h".

Jesus was enlightened.

Life Update - Montréal

Permalink - Posted on 2019-05-16 00:00

Life Update - Montréal

I have moved to Canada. The US has been a good place to me, but it is time for me to move on towards my longer term goals in life. One of them has been to move to Canada so I could be closer to my fiancé; and I have now been able to check that off.

This trip has not been without its hardships so far:

I scheduled the flight too close to my apartment check-out, so I wasn't able to do the final walk through with the apartment people. Probably gonna have to pay a bunch. However, as of May 16, they haven't contacted me. I'm probably in the clear. I hope the guy I hired to clean it did a good job. I wish I could give the guy an honest review.

Things didn't work out for me to stay with my fiancé, so I had to get an Airbnb. My Airbnb was cancelled twice after I unwittingly got matched, twice, with the same host who had a wrong phone number listed in Airbnb. I got a hotel instead, even though that meant increasing my stress and anxiety levels. Oh well, it happens. I also didn't take action when I needed to (I thought the relocation company was going to expense it and bill me), so I had a worse selection of Airbnb rooms. Oops.

I normally get headaches. Moving stress apparently (as in, this is what I have witnessed) gives me migraines that make me see colors synesthetically. It's not fun having the lights flash but not, but flash, but not. All in sync to the pain waves too. Not fun.

My first hotel room in Sea-Tac had a broken TV, so I had to get a new room just after finishing settling in. The new one worked, though. That hotel was a nice place to cool down and prepare for my flight.

My Airbnb fell through. Twice. I got a hotel next to work, Hôtel Champ de Mars. This hotel was great for the first week. I extended my stay because we didn't have an apartment yet. This hotel's housekeeping then decided it was a good idea to take a blanket out of my suitcase and fold it onto my bed. This accelerated our plans to get an apartment by a LOT. This hotel then decided to introduce some unwritten policy that I could not opt out of housekeeping. I woke up the next morning and suddenly the policy was no longer an issue and housekeeping ignored my room until the next Monday. Then they folded that blanket and my towels too. I complained to the owners and got an email that basically said "sorry for being offended". I gave my keys back and walked out two days before my stay should have ended.

I have recently realized that I'm the foreigner here. People can have difficulty understanding when I say things over the phone or spell out letters of words. This is a unique thing to experience; and I think more people probably should experience it. This is a bit of a culture shock to me. Before I had moved internationally, I had been living in the city I literally grew up in. Really makes you think.

I needed to get a new phone number and I'm going to have to lose my old one. I thought I could park it or something. I can't. I'm gonna have to let it go. It's a bit stressful because I don't know what else depends on it; but as they say here, c'est la vie (that's life). If anything really important is missed, I'll figure it out.

I don't mean to complain too much, but it's a lot and it's more than I feel I can handle. It's happened, but god it has been a thing.

On the positive end though, I'm going to be living with my fiancé. This is going to be a huge relief. We've been long distance for over 5 years. It's so good to see that turn into a physical relationship, hopefully for good.

I'm reaching a transition period where I'm going to be going for new long term goals. This is kind of exciting as much as it is scary. I've had this move to Canada goal for so long I'm kind of like "now what?".

One of these long-term goals looks like it's going to be getting married. I don't know when this is going to be fulfilled, but it will happen when it is time. Until then, please stop asking me when it's going to happen. Asking me feels like I need to give a concrete answer in many cases; there is no concrete answer of when other than "when it's time". If this is not good enough for you, I am sorry that I'm unable to conform to your wishes.

This new apartment has been great. Our rent pays for everything, including internet. It's a relief to only really have two bills.

My new job is great too. I had to take a pay cut to go to Montréal, but there is more to life than money.

Something of note is that this is the first time I've moved without having to get out of jury duty. I've managed to avoid it every time so far by unfortunately timed moves.

Things are looking up for me. I'm really happy. My new job is great. The people I work with are great. I'm working towards French fluency (hopefully going to be writing blogposts in French by this time two years from now at most). Everything is looking up from here, and I'm so happy for it.

Can't wait to see what's next!

iPad Smart Keyboard: French Accents/Ligatures

Permalink - Posted on 2019-05-10 00:00

iPad Smart Keyboard: French Accents/Ligatures

The following is the result of both blind googling and brute-forcing the keyboard space. If this is incomplete, please let me know so that it can be fixed.

Accent/Ligature How to type Example
é (acute) Alt-e entrée
è (grave) Alt-` fières
ï (umlaut) Alt-u naïve
ç (cedilla) Alt-c français
œ (oe ligature) Alt-q œuf
ô (circumflex) Alt-i hôtel
« (left quote) Alt-\ «salut!»
» (right quote) Alt-Shift-\ «salut!»

You can also type a forward facing accent on most arbitrary characters by typing it and then pressing Alt-Shift-e. Circumflêxes can be done postfix with Alt-Shift-i too. Thís dóesńt work on every letter, unfortunately. However it does work for enough of them. Not enough for Esperanto's ĉu however.

Practical Kasmakfa

Permalink - Posted on 2019-04-21 00:00

Practical Kasmakfa

From Within


  • Do not blindly believe the views others hold just because others hold them without questioning why
  • Try lots of things (even if you might be against them at first) and see what works
  • Do more of what works
  • Help others when it makes sense to
  • Love the life you are given, even when you hate it

No Blind Faith

It is a sad thing, in my opinion, that people will blindly believe in things just because other people do. People will adopt core views simply because those around them hold them, and then never question or change them, even when those views come into direct conflict with information or experiences they are having. This is frustrating to watch, externally and internally. We don't need to do this, so I propose that we don't have any blind faith in anything. To quote the Principia Discordia: "It is my firm belief that it is a mistake to hold firm beliefs".

Question the reason behind beliefs. Don't just blindly repeat things without rationale. Don't take any string of text on a screen more seriously just because it's on a screen. Even this string of text. Don't take this seriously unless it helps you. Don't get scammed by energy healing teachers and books. Seriously, there's so many scams out there it breaks my heart. Any price for entry is too high.

Try Many Things, Do What Works

Chaos magic differs from other forms of magical practice in that the core of it is that the belief of the practitioner is what is truly doing anything. In the chaos magic view, there is no ultimate truth. It could all be spiritual, it could be a psychological truth, the point is it doesn't matter. A chaos magician can be realist, nihilist, psychologist, any of it. It's all really whatever works best for the practitioner in their use of magic.

You know what, screw it, let's make four piles of things you can absorb information from. Let's call them "inbox", "working", "i don't get it", and "meh". The "inbox" is the default dumping ground of new ideas, methods, philosophies and tools. When you feel bored, pull something off the top of the inbox and take a look through it. Make a glossary of common terms and acronyms.

Now, when you get to a method, skill or some kind of obviously repeatable thing, try it. Take it at face value for a moment and just try it in the context of its system. If it works, take that information, paste or whatever and put it in your "working" folder. Put the rest in "I don't get it" or "meh" depending on your reactions to trying the things.

Do More of What Works

When you find something that works, great! This is a signal that you should probably do more of it, depending on the nature of the thing working or the nature of the thing in general. If it's some kind of breathing technique, try and make it your default (I personally have very deep breaths as my default, people that I work with comment on that frequently) and see how it helps you. If it's a method of thinking, try adopting it in parallel to your default. Even (hell, especially) if that something challenges your core assumptions about everything.

The sin which is unpardonable is knowingly and willfully to reject truth, to fear knowledge lest that knowledge pander not to thy prejudices.

- Aleister Crowley

Help Others When it Makes Sense

We're all pretty much as lost as anyone else in this stuff, to be honest. Recognize this. Embrace it, even. Other people are gonna be confused about things and may require additional guidance or explanation. Take this time to learn how to explain, summarize, and all of that better for the people you are helping and yourself.

We're all in this together. Try and brighten the path when possible. You individually may not be able to do much, but the next step will be just that little bit more clearer for the next person who walks down it.

Flow in compassion
Release what is divine
Like cells awakening
We spark the others who walk beside us.
We brighten the path.

Flow in compassion
In doing this we are one being
Calling the rays of light
To descend on all.
We brighten the path.

Flow in compassion
Bring the healing of your deepest self
Giving what is endless
To those who believe their end is in sight.
We brighten the path.
We brighten the path.

- Flow in Compassion - James

Helping others is an imperfect science. You will "fail". You may end up accidentally upsetting people. It happens. Let it pass like all the rest.

Love Your Life

You may look at this heading and be like "dude, wtf? My life is a mess, I have $PROBLEMS though". The truth is that the problems are just transient. Even the ones that you think are "permanent".

Forgiving the past for not happening as you'd expect it to is a very good idea if your ideology allows for it. If not, try it! That's what the point of this technique is all about.


kas mak fa
/kas mak fa/

Explanation of the name.

I originally posted this writeup here; however, since it's such a google-friendly term I am going to repost it here.

Site to Site WireGuard: Part 4 - HTTPS

Permalink - Posted on 2019-04-16 00:00

Site to Site WireGuard: Part 4 - HTTPS

This is the fourth post in my Site to Site WireGuard VPN series. You can read the other articles here:

In this article, we are going to install Caddy and set up the following:

  • A plaintext markdown site to demonstrate the process
  • A URL shortener at https://g.o/ (with DNS and TLS certificates too)

HTTPS and Caddy

Caddy is a general-purpose HTTP server. One of its main features is automatic Let's Encrypt support. We are using it here to serve HTTPS because it has a very, very simple configuration file format.

Caddy doesn't have a stable package in Ubuntu yet, but it is fairly simple to install it by hand.

Installing Caddy

One of the first things you should do when installing Caddy is pick the list of extra plugins you want in addition to the core ones. I generally suggest the following plugins: http.cors, http.git and http.supervisor (these are the ones baked into the install command below).

First we are going to need to download Caddy (please do this as root):

curl https://getcaddy.com > install_caddy.sh
bash install_caddy.sh -s personal http.cors,http.git,http.supervisor
chown root:root /usr/local/bin/caddy
chmod 755 /usr/local/bin/caddy

These permissions are set as such:

Facet Read Write Directory Listing
User (root) Yes Yes Yes
Group (root) Yes No Yes
Others Yes No Yes

In order for Caddy to bind to the standard HTTP and HTTPS ports as non-root (this is a workaround for the fact that Go can't currently drop privileges with setuid() cleanly), run the following:

setcap 'cap_net_bind_service=+eip' /usr/local/bin/caddy

Caddy expects configuration file/s to exist at /etc/caddy, so let's create the folders for them:

mkdir -p /etc/caddy
touch /etc/caddy/Caddyfile
chown -R root:www-data /etc/caddy

Let's Encrypt Certificate Permissions

Caddy's systemd unit expects to be able to create new certificates at /etc/ssl/caddy:

mkdir -p /etc/ssl/caddy
chown -R www-data:root /etc/ssl/caddy
chmod 770 /etc/ssl/caddy

These permissions are set as such:

Facet Read Write Directory Listing
User (www-data) Yes Yes Yes
Group (root) Yes Yes Yes
Others No No No

This will allow only Caddy and root to manage certificates in that folder.

Custom CA Certificate Permissions

In the last post, custom certificates were created at /srv/within/certs. Caddy is going to need the correct permissions in order to read them. Save the following as fixperms.sh:

cd /srv/within/certs
chmod -R 750 .
chown -R root:www-data .
chmod 600 minica-key.pem

Then mark it executable:

chmod +x fixperms.sh

These permissions are set as such:

Facet Read Write Execute/Directory Listing
User (root) Yes Yes Yes
Group (www-data) Yes No Yes
Others No No No

This will allow Caddy to read the certificates later in the post. Run this script again whenever new certificates are created.

HTTP Root Permissions

I typically store all of my websites under /srv/http/domain.name.here. To create a folder like this:

mkdir -p /srv/http
chown www-data:www-data /srv/http
chmod 755 /srv/http

These permissions are set as such:

Facet Read Write Directory Listing
User (www-data) Yes Yes Yes
Group (www-data) Yes No Yes
Others Yes No Yes


To install the upstream systemd unit, run the following:

curl -L https://github.com/mholt/caddy/raw/master/dist/init/linux-systemd/caddy.service \
      | sed "s/;CapabilityBoundingSet/CapabilityBoundingSet/" \
      | sed "s/;AmbientCapabilities/AmbientCapabilities/" \
      | sed "s/;NoNewPrivileges/NoNewPrivileges/" \
      | tee /etc/systemd/system/caddy.service
chown root:root /etc/systemd/system/caddy.service
chmod 744 /etc/systemd/system/caddy.service
systemctl daemon-reload
systemctl enable caddy.service

These permissions are set as such:

Facet Read Write Execute
User (root) Yes Yes Yes
Group (root) Yes No No
Others Yes No No

This will also configure Caddy to start on boot.


Configure aloha.pele

In the last post, we created the domain and TLS certificates for aloha.pele. Let's create a website for it.

Open /etc/caddy/Caddyfile and add the following:

# /etc/caddy/Caddyfile

aloha.pele:80 {
  tls off
  redir / https://aloha.pele:443
}

aloha.pele:443 {
  tls /srv/within/certs/aloha.pele/cert.pem /srv/within/certs/aloha.pele/key.pem
  internal /templates
  markdown / {
    template templates/page.html
    ext .md
  }
  browse /
  root /srv/http/aloha.pele
}

And create /srv/http/aloha.pele/templates:

mkdir -p /srv/http/aloha.pele/templates
chown -R www-data:www-data /srv/http/aloha.pele/templates

And open /srv/http/aloha.pele/templates/page.html:

<!-- /srv/http/aloha.pele/templates/page.html -->

<title>{{ .Doc.title }}</title>
<style>
  main {
    max-width: 38rem;
    padding: 2rem;
    margin: auto;
  }
</style>
<main>
  <a href="/">Aloha</a>
  {{ .Doc.body }}
</main>

This will give a nice simple style kind of like this using Caddy's built-in markdown templating support. Now create /srv/http/aloha.pele/index.md:

<!-- /srv/http/aloha.pele/index.md -->

# Aloha!

This is an example page, but it doesn't have anything yet. If you see me, HTTPS is probably working.
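
As far as I understand Caddy's internals, these markdown templates are ordinary Go templates fed a Doc context, which is why {{ .Doc.title }} and {{ .Doc.body }} work. Here is a self-contained sketch of that substitution mechanism with made-up data (an illustration only, not Caddy's actual code; a map is used so the lowercase keys resolve):

package main

import (
	"html/template"
	"log"
	"os"
)

const page = `<title>{{ .Doc.title }}</title>
<main>{{ .Doc.body }}</main>
`

func main() {
	tmpl := template.Must(template.New("page").Parse(page))

	// Doc is a map, so lowercase keys like .Doc.title resolve.
	data := map[string]interface{}{
		"Doc": map[string]interface{}{
			"title": "Aloha!",
			"body":  template.HTML("<h1>Aloha!</h1>"),
		},
	}

	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}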

Now let's enable and test it:

systemctl restart caddy
systemctl status caddy

If Caddy shows as running, then testing it via LibTerm should work:

curl -v https://aloha.pele

URL Shortener

I have created a simple URL shortener backend on my GitHub. I personally have it accessible at https://g.o for my internal network. It is very simple to configure:

Environment Variable Value
THEME solarized.css (or gruvbox.css)

surl requires a SQLite database to function. To store it, create a docker volume:

docker volume create surl

And to create the surl container and register it for automatic restarts:

docker run --name surl -dit -p <host port>:<container port> \
  --restart=always \
  -e DOMAIN=g.o \
  -e THEME=solarized.css \
  -v surl:/data xena/surl:v0.4.0

Now create a DNS record for g.o.:

; pele.zone

;; URL shortener
g.o. IN CNAME oho.pele.

And a TLS certificate:

cd /srv/within/certs
minica -domains g.o

And add Caddy configuration for it:

# /etc/caddy/Caddyfile

g.o:80 {
  tls off
  redir / https://g.o
}

g.o:443 {
  tls /srv/within/certs/g.o/cert.pem /srv/within/certs/g.o/key.pem
  proxy / <address of the surl container>
}

Now restart Caddy to load the configuration and make sure it works:

systemctl restart caddy
systemctl status caddy

And open https://g.o on your iOS device:

An image of the URL shortener in action

You can use the other directives in the Caddy documentation to do more elaborate things. When Then Zen is hosted completely with Caddy using the markdown directive; but even this is ultimately a simple configuration.

This seems like enough for this time. Next time we are going to approach adding other devices of yours to this network: iOS, Android, macOS and Linux.

Please give me feedback on my approach to this. I also have a Patreon and a Ko-Fi in case you want to support this series. I hope this is useful to you all in some way. Stay tuned for the future parts of this series as I build up the network infrastructure from scratch. If you would like to give feedback on the posts as they are written, please watch this page for new pull requests.

Be well. The sky is the limit, Creator!

Site to Site WireGuard: Part 3 - Custom TLS Certificate Authority

Permalink - Posted on 2019-04-11 00:00

Site to Site WireGuard: Part 3 - Custom TLS Certificate Authority

This is the third post in my Site to Site WireGuard VPN series. You can read the other articles here:

In this article, we are going to create a custom Transport Layer Security (TLS) Certificate Authority and trust it on iOS and macOS. In the next part we will use it to serve a URL shortener at https://g.o/.

What's TLS?

TLS, or Transport Layer Security, is the backbone of how nodes on the internet communicate data in a way that prevents people from seeing what is being said. This is where the s in https comes from. When a client makes a TLS connection to a server, it asks the server to create a unique key for that session and to prove who it is with a certificate. The client then checks this certificate against its list of known certificate authorities (or CA's); if it can't find a match, the connection is killed and fails.

What's a Certificate Authority?

A TLS Certificate Authority is a certificate that is allowed to issue other certificates. These certificates are intended to strongly associate domain names (such as christine.website) to real people or organizations. In theory, the people or tools running the certificate authority do rigorous checking and validation of identities before a certificate is issued. Creating our own certificate authority allows us to create certificates that only select devices will trust as valid. By creating our own certificate authority and manually configuring devices to trust it, we sidestep the need to pay for certificates (mainly for the verification process to ensure you are who you say you are) or expose services to the public internet.
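
To make "trusting a certificate authority" concrete, here is a minimal Go sketch of what a client effectively does once a private CA certificate is in its trust store. The paths and URL below are the ones used later in this series; the code itself is an illustration, not part of any of the tools involved:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Read the CA certificate that minica generates later in this post.
	caCert, err := ioutil.ReadFile("/srv/within/certs/minica.pem")
	if err != nil {
		log.Fatal(err)
	}

	// Build a certificate pool containing only our private CA.
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caCert) {
		log.Fatal("could not parse minica.pem")
	}

	// This client only trusts certificates signed by our private CA.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	resp, err := client.Get("https://aloha.pele:2848")
	if err != nil {
		log.Fatal(err) // fails if the server's cert wasn't issued by our CA
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}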

Why Should I Create One?

Generally, it is useful to create a custom TLS certificate authority when there are custom DNS domains being used. This allows you to create https:// links for your internal services (which can then act as Progressive Web Apps). This will also fully prevent the "Not Secure" blurb from showing up in the URL bar.

Sometimes your needs may involve needing to see what an application is doing over TLS traffic. Having a custom TLS certificate authority already set up makes this a much faster thing to do.

Why Shouldn't I Create One?

...However if you do this and the key leaks, people can create certificates that your devices will assume are valid. minica doesn't support Certificate Revocation Lists (or CRL's), so any certificate that is issued with that key is going to be seen as valid and there is nothing you can do about it.

It's also entirely valid to not want to do this in order to keep local configurations less complicated. It's another thing to do to machines, and it does open up what is (in my opinion) a small but manageable risk.

Considering WireGuard is already encrypted, it's probably overkill to set up HTTPS. Not many people are going to be trying to interfere with your local service packets (and if they are you have MUCH BIGGER PROBLEMS).

Using minica to Make a Certificate Authority

minica is a small tool designed to simplify the somewhat esoteric nature of making and maintaining a private certificate authority. It's a Go program using only the standard library, so installation (and even cross-compilation) is fairly simple:

go get github.com/jsha/minica

Make a Certificate Home

Having a predictable place to put all of your certificates is a good idea. You should try to have only one place for this if possible. I use /srv/within/certs on my Ubuntu server Kahless for this.

mkdir -p /srv/within/certs
chmod 750 /srv/within/certs
chown root:www-data /srv/within/certs

Creating And Using Your First Certificate

First, navigate back to your certificate home and run the following command:

minica -domains aloha.pele

This should create minica.pem and minica-key.pem. Copy minica.pem to somewhere you can access it easily, it will be important later. This also creates a folder named aloha.pele that contains cert.pem and key.pem.

Next, create a DNS record for aloha.pele. in your pele.zone file (and be sure to update it on the remote HTTP server).

aloha.pele. IN CNAME oho.pele.

Then wait a minute or two and run the following command to ensure it's working:

$ dig +short aloha.pele

Now, download a simple tls test server and start it:

go get -u -v github.com/Xe/x/cmd/tlstestd
cd aloha.pele
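
For the curious, a TLS test server like this is only a few lines of Go with the standard library. This sketch is not tlstestd's actual source, but it shows the general shape, serving on the same port with the certificate minica just issued (paths assume it is run from the certificate home):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "TLS is working!")
	})

	// Serve HTTPS using the certificate minica issued for aloha.pele.
	log.Fatal(http.ListenAndServeTLS(":2848",
		"aloha.pele/cert.pem", "aloha.pele/key.pem", nil))
}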

Open https://aloha.pele:2848 in Safari.

This should fail due to an invalid certificate. This is the kind of error that people without the TLS certificate authority installed will see.

To fix this error, copy the TLS certificate from earlier (it's the one named minica.pem) to your iOS device somehow. If all else fails, email it to yourself and open it with the Mail app (yes, it has to be the stock mail app). Then:

  • If prompted, choose to install the profile to your phone instead of your watch.
  • Go into the Settings app and hit "Profile Downloaded".
  • The profile name should be "minica root $some_hex_numbers" and it should show as Unverified in red.
  • Hit Install in the upper right hand corner.
  • Enter your password.
  • Go back to the General settings.
  • Hit About.
  • Hit Certificate Trust Settings.
  • Hit the on/off slider next to the certificate you just added.
  • Confirm on the dialog that you really want to do this.

Then you should be ready to open https://aloha.pele:2848 in Safari.

If you get the secure connection working like normal (without prompting or nag screens), everything is working perfectly.

That's about it for this time around. In the next part, we will set up HTTPS serving with Caddy.

Please give me feedback on my approach to this. I also have a Patreon and a Ko-Fi in case you want to support this series. I hope this is useful to you all in some way. Stay tuned for the future parts of this series as I build up the network infrastructure from scratch. If you would like to give feedback on the posts as they are written, please watch this page for new pull requests.

Be well.

When Then Zen: Site Announcement

Permalink - Posted on 2019-04-09 00:00

When Then Zen: Site Announcement

When Then Zen is a project to offer a better way to teach meditation. Meditation has gotten a really bad reputation with Western audiences as overcomplicated, esoteric and baroque; however, that couldn't be farther from the truth. It can be as simple as watching breathing happen, or it can be much more involved.

If this interests you, please check out the introduction and feel free to look at the meditation or skill guides.

Thanks for reading, I hope this can help. For convenience I have put a link to When Then Zen next to the GraphViz

Be well, Creator; create many things.

Site to Site WireGuard: Part 2 - DNS

Permalink - Posted on 2019-04-07 00:00

Site to Site WireGuard: Part 2 - DNS

This is the second post in my Site to Site WireGuard VPN series. You can read the other articles here:

What is DNS and How Does it Work?

DNS, or the Domain Name Service, is one of the core protocols of the internet. Its main job is to turn names like google.com into IP addresses for the lower layers of the networking stack to communicate with. Semantically, clients ask questions of the DNS server (such as "what is the IP address for google.com?") and get answers back ("the IP address for google.com is ..."). This is a very simple protocol that predates the web, and it is tied into the core of how nearly every single program accesses the internet. DNS allows users to not have to memorize IP addresses of services in order to connect to and use them. If anything on the internet is truly considered "infrastructure", it is DNS.

A common tool in Linux and macOS to query DNS is dig. You can install it in Ubuntu with the following command:

$ sudo apt install -y dnsutils

A side note for Alpine Linux users: for some reason the dig tool is packaged in bind-tools there. You can install it like this:

$ sudo apk add bind-tools

As an example of it in action, let's look up google.com with the dig tool (edited for clarity):

$ dig google.com
;; Got answer:
;google.com.                    IN      A

google.com.             299     IN      A


A DNS answer or record has several parts to it:

  • The name (with a terminating .)
  • The time-to-live, which tells DNS caches how long they can wait before looking up the domain again
  • The kind of address being served (DNS supports multiple network kinds, though only INternet records are used nowadays)
  • The kind of record this is
  • Any additional data for that record

Interpreting the question and answer from above: the client asked for the IPv4 address (DNS calls this an A record) for google.com. and got one back as an answer from the DNS server.

DNS supports many other kinds of records, such as PTR or "reverse" records that map an IP address back to a name (again, edited for clarity):

$ dig -x
;; Got answer:
;    IN      PTR

;; ANSWER SECTION:
20787   IN      PTR     iad30s10-in-f14.1e100.net.
20787   IN      PTR     iad30s10-in-f206.1e100.net.


As seen above, DNS supports having multiple answers to a single name. This is useful when doing load balancing between services (so-called "round robin" load balancing over DNS works like this) as well as redundancy in general.
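
You can ask these same questions from code, too. Here is a tiny Go sketch using only the standard library's resolver; note that multiple answers come back as a slice:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Ask the system resolver the same question dig asked above:
	// "what are the IP addresses for google.com?"
	addrs, err := net.LookupIP("google.com")
	if err != nil {
		log.Fatal(err)
	}
	for _, addr := range addrs {
		fmt.Println(addr)
	}
}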

Why Should I Create a Custom DNS Server?

There are two main benefits to running a custom DNS server like this: ad blocking at the DNS level and custom DNS routes. The first is like having a Pi-hole built into your VPN for free. The benefits of AdBlock DNS cannot be overstated: it makes it impossible to even load ads for a large number of websites, without triggering the adblocker-detection scripts news sites like to use. This will be covered in more detail below. Custom DNS routes may sound like overkill for keeping things private, but people can't easily get information on names that literally only exist in your domain.

However, there are reasons why you would NOT want to run a custom DNS server. By creating one, you effectively put yourself in charge of an internet infrastructure component that is usually handled by people who are dedicated to keeping it working 24/7. You may not be able to provide the same uptime guarantees as your current DNS provider. You are not CloudFlare, Comcast or Google. It's perfectly okay to not want to go through with this.

I think the benefits are worth the risks though.

How Do I Create a Custom DNS Server?

There are many DNS servers out there, each with their benefits and shortcomings. In order to make this tutorial simpler, I'm going to be using a self-created DNS server named dnsd. This server is extremely simple and reloads its zone files every minute over HTTP, to make updating records easier. There are going to be a few steps to setting this up:

  • Creating a DNS zonefile
  • Hosting the zonefile over HTTP/HTTPS
  • Adding ad-blocking DNS rules
  • Installing dnsd with Docker
  • Using the DNS server with the iOS WireGuard app

Creating a DNS Zonefile

dnsd requires an RFC 1035 compliant DNS zone file. In short, it's a file that looks something like this:

; pele.zone
; anything after a semicolon is a comment

;; The default time for this DNS record to live in caches
$TTL 60

;; If a domain `foo` is not ended with `.`, assume it's `foo.pele.`
$ORIGIN pele.

; servers

;; Map the name oho.pele. to the server's VPN IP
oho.pele. IN A <server IP>

;; Map the server's VPN IP back to the name oho.pele.
<reversed server IP> IN PTR oho.pele.

; clients

;; Map the name sitelen-sona.pele. to that device's VPN IP
sitelen-sona.pele. IN A <device IP>

;; Map the device's VPN IP back to sitelen-sona.pele.
<reversed device IP> IN PTR sitelen-sona.pele.

;;; How to make Custom DNS Locations:

;; Map the name prometheus.pele. to the name oho.pele., which indirectly maps it to
prometheus.pele. IN CNAME oho.pele.

;; Map the name grafana.pele. to the name oho.pele., which indirectly maps it to
grafana.pele. IN CNAME oho.pele.

Save this file somewhere and get it ready to host somewhere.

If you would like to have some of this generated for you, fill out http://zonefile.org with the following information:

  • Base data
  • DNS Server
    • Primary host name: ns.pele
    • Primary IP-Addr:
    • Primary comment: The volcano
    • Clear all other boxes in this section
  • Mail Server
    • Clear all boxes in this section
  • Click Create
  • Save this as pele.zone

Note that this will include a Start of Authority or SOA record, which is not strictly required, but may be nice to include too. If you want to include this in your manually made zonefile, it should look something like this:

@       IN      SOA     oho.pele.       some@email.address. (
                        2019040602      ; serial number YYYYMMDDNN
                        28800           ; Refresh
                        7200            ; Retry
                        864000          ; Expire
                        60 )            ; Min TTL

; Also not required but some weird clients may want this.
@       IN      NS      oho.pele.

Hosting the Zonefile Over HTTP/HTTPS

This is the "draw the rest of the owl" part of this article, worst case something like GitHub Gists works. Once you have the URL of your zonefiles and a reliable way to update them, you can move to the next step: installing dnsd.

Adding Ad-Blocking DNS Rules

A friend of mine adapted her dnsmasq scripts to generate RFC 1035 DNS zonefiles. In order to generate adblock.zone do the following:

$ cd ~/tmp
$ git clone https://github.com/faithanalog/x faithanalog-x
$ cd faithanalog-x/dns-adblock
$ sh ./download-lists-and-generate-zonefile.sh

This should produce adblock.zone in the current working directory. Put this file in the same place you put your custom zone.

If you are unable to run this script for whatever reason, I update my adblock.zone file weekly (please download this file instead of configuring your copy of dnsd to use this URL).

Installing dnsd with Docker

The easy way:

$ export DNSD_VERSION=v1.0.3
$ docker run --name dnsd -p 53:53/udp -dit --restart always xena/dnsd:$DNSD_VERSION \
  dnsd -zone-url https://domain.hostname.tld/path/to/your.zone \
       -zone-url https://domain.hostname.tld/path/to/adblock.zone

This will create a new container named dnsd running the Docker image xena/dnsd:$DNSD_VERSION (the docker image is created by this script and this dockerfile), exposing the DNS server on the host's UDP port 53. To test it:

$ dig @<server IP> oho.pele
;oho.pele.                      IN      A

oho.pele.               60      IN      A       <server IP>


$ dig @<server IP> -x <server IP>
;<reversed server IP>.          IN      PTR

;; ANSWER SECTION:
<reversed server IP>.   60      IN      PTR     oho.pele.


Using With the iOS WireGuard App

In order to configure iOS WireGuard clients to use this DNS server, open the WireGuard app and tap the name of the configuration we created in the last post. Hit "Edit" in the upper right hand corner and select the "DNS Servers" box. Put the DNS server's IP in it and hit "Save". Be sure to confirm the VPN is active, then open LibTerm and enter the following:

$ dig oho.pele

And make sure it works.

Once this is done, you should be good to go! Updates to the zone files will be picked up by dnsd within a minute or two of the files being changed on the remote servers. Please be sure the server you are using tags the files appropriately with the ETag header, as dnsd uses that to determine if the zonefile has changed or not.
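
For the curious, that ETag check is just an HTTP conditional GET. Here is a Go sketch of the shape of the technique (this is not dnsd's actual code, and the URL is the placeholder from above):

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

// fetchIfChanged does a conditional GET: if the server's ETag still matches
// the one from last time, it answers 304 Not Modified and we skip reloading.
func fetchIfChanged(url, lastETag string) (body []byte, etag string, changed bool, err error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, lastETag, false, err
	}
	if lastETag != "" {
		req.Header.Set("If-None-Match", lastETag)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, lastETag, false, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNotModified {
		return nil, lastETag, false, nil // zonefile unchanged, nothing to do
	}

	body, err = ioutil.ReadAll(resp.Body)
	return body, resp.Header.Get("ETag"), true, err
}

func main() {
	_, etag, changed, err := fetchIfChanged("https://domain.hostname.tld/path/to/your.zone", "")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(changed, etag)
}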

Please give me feedback on my approach to this. I also have a Patreon and a Ko-Fi in case you want to support this series. I hope this is useful to you all in some way. Stay tuned for the future parts of this series as I build up the network infrastructure from scratch. If you would like to give feedback on the posts as they are written, please watch this page for new pull requests.

Be well.

Site to Site WireGuard: Part 1 - Names and Numbers

Permalink - Posted on 2019-04-02 00:00

Site to Site WireGuard: Part 1 - Names and Numbers

In this blogpost series I'm going to go over how I created a site to site Virtual Private Network (abbreviated as VPN) for all of my personal devices. The best way to think about what this is doing is creating a logical (or imaginary) network on top of the network infrastructure that really exists. This allows me to expose private services so that only people I trust can even know how to connect to them. For extra convenience and battery saving power, I'm going to use WireGuard as the VPN protocol.

This series is going to be broken up into multiple posts about as follows:

By the end of this series you should be able to:

  • Expose arbitrary TCP/UDP services to a few machines that span network segments without having to do as much work securing the services
  • Create absolutely arbitrary domain name to IP address mappings should you need it
  • Have seamless AdBlock DNS for your phone, tablet and laptop
  • Create custom TLS certificates for any domain should you need it

Network Naming and Numbering

One of the most annoying parts of this exercise is going to be naming and numbering things, so let's get that out of the way as soon as possible.

Naming your TLD

It's a good idea to create a custom top level domain that won't resolve on machines not inside your private network. This helps to prevent accidental information leakage by making it impossible for unauthorized third parties to resolve the name into a usable IP. If you don't want to do this for any particular reason, it is possible to set things up as subdomains of an existing domain. This may also be preferable depending on your philosophical beliefs about what is a "valid" or "real" domain name, which is beyond the scope of this article.

Naming is known to be hard in computer science. The annoying part about naming things is what I call name collisions: when someone else starts using a name you were already using. This most famously happened with .dev, making many tutorials referencing this old trick effectively useless. As such, it is better to choose names that are very, very unlikely to ever be added as a valid global top level domain. Try picking names by these criteria:

  • The names of deities (see the Bionicle Effect for an example)
  • Curse words
  • The last name of a famous person you like (that is alive for extra credit)

As such, this example will be using pele as the custom top level domain and name for this network.


Numbering your site to site private networks is another common pain point, mainly because conflicts in these spaces can be hairy to resolve. It can help to make a list of the IP space of all of the common networks you visit so you can make sure your network range doesn't conflict with them:

# Network Range Details


Generally people will pick routes out of the lower /12 of This example will use the network range Because WireGuard requires us to create configuration for each device connecting to the network, let's draw out a map of the entire network as we intend to set it up:

# pele Network Map
  - servers
    - DNS, HTTPS
  - clients
    - iPad Pro (la ta'orskami)
    - iPhone XS (la selbeifonxa)
    - MacBook (om)

Depending on free network space, it may be preferable to split the first /24 block up into two logical /25 blocks. This is all a matter of taste and has no functional impact on the network. I'd suggest using consistent conventions in your subnetting whenever possible, as shown in the sketch below.
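
If you want to double-check that a candidate range doesn't collide with a network you already use, Go's net package can do the math for you. A quick sketch with placeholder ranges (two CIDR blocks overlap exactly when one contains the other's network address):

package main

import (
	"fmt"
	"log"
	"net"
)

// overlaps reports whether two CIDR blocks share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	// Placeholder ranges: a candidate VPN block and a network you visit.
	_, vpn, err := net.ParseCIDR("10.0.0.0/24")
	if err != nil {
		log.Fatal(err)
	}
	_, coffeeShop, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(overlaps(vpn, coffeeShop)) // false: safe to use
}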

WireGuard Port Allocation

WireGuard requires a UDP port to be exposed to the outside world to work. A commonly used port for this is 51820. Depending on your network configuration, you may have to configure port forwarding. I cannot help you with this step if it is the case, however.

Testing UDP Port Forwarding

In case you ever need to test the UDP port forwarding, run the following on the machine you want to test:

$ nc -u -l -p 51820

And on another machine:

$ echo "hello, world" | nc -u <external IP> 51820

Run this command a few times in order to make sure the packets go through, as UDP is not inherently reliable. If you see at least one instance of "hello, world" on the machine you want to test, your port has been forwarded correctly. If not, contact whoever set up your network for help.
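
If netcat isn't available on the listening machine, a few lines of Go can stand in for nc -u -l -p 51820. A minimal sketch:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Listen on the WireGuard port, same as `nc -u -l -p 51820`.
	conn, err := net.ListenPacket("udp", ":51820")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 1500)
	for {
		n, addr, err := conn.ReadFrom(buf)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s", addr, buf[:n])
	}
}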

Alpine Host Setup

Now that you have all of the hard parts chosen, provision a new server running Alpine Linux and upgrade it to edge, then enable community and testing. Your /etc/apk/repositories file should look something like this:

# /etc/apk/repositories

Upgrade all of the packages on the system and then reboot:

# apk -U upgrade
# reboot

Install WireGuard

To install WireGuard and all of the needed tools, run the following:

# apk -U add wireguard-lts wireguard-tools

For those of you using other distributions, here is the version information from my WireGuard master:

luna [/etc/wireguard]# apk info wireguard-tools
wireguard-tools-0.0.20190227-r0 description:
Next generation secure network tunnel: userspace tools

wireguard-tools-0.0.20190227-r0 webpage:

wireguard-tools-0.0.20190227-r0 installed size:

luna [/etc/wireguard]# apk info wireguard-vanilla
wireguard-vanilla-4.19.30-r0 description:
Next generation secure network tunnel: kernel modules for vanilla

wireguard-vanilla-4.19.30-r0 webpage:

wireguard-vanilla-4.19.30-r0 installed size:


Ubuntu users can install WireGuard from its PPA:

$ sudo add-apt-repository ppa:wireguard/wireguard
$ sudo apt-get update
$ sudo apt-get install wireguard

Generate Keys

WireGuard uses strong cryptography for its protocol. As such you need to generate a private and public keypair. To generate them:

$ sudo -i
# cd /etc/wireguard
# wg genkey > pele-privatekey
# cat pele-privatekey | wg pubkey > pele-publickey

Create Config

Assuming your config file will be located at /etc/wireguard/pele.conf:

# /etc/wireguard/pele.conf

[Interface]
Address =
ListenPort = 51820
PrivateKey = <contents of file /etc/wireguard/pele-privatekey>
PostUp = iptables -A FORWARD -i pele -o pele -j ACCEPT
PostDown = iptables -D FORWARD -i pele -o pele -j ACCEPT

Save this and make sure only root can read any of these files:

# chown root:root /etc/wireguard/pele*
# chmod 600 /etc/wireguard/pele*

Create client config for iOS device

On your iOS device, install the WireGuard app. Once it is installed, open it and do the following:

  • Hit the plus in the top bar
  • Create from Scratch
  • name: pele
  • Hit "Generate keypair"
  • Addresses:
  • Hit "Add peer"
  • Paste the public key from /etc/wireguard/pele-publickey into "Public key"
  • Put the publicly visible IP of the Alpine host plus :51820 in "Endpoint", i.e.:
    • The actual IP, not a DNS name
  • Put in Allowed IPs
  • Save

To add this client to the WireGuard server, add the following lines to the config file:

# /etc/wireguard/pele.conf

# <snip from earlier>

[Peer]
# la ta'orskami
PublicKey = <public key from iOS device>
AllowedIPs = 10.55.0.2/32

Make sure the AllowedIPs range doesn't allow for routing loops. It should be a /32 for any "client" devices and larger ranges for any "server" devices.
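To make that concrete with the running example (all keys and the second block here are made up):

# a "client" peer: exactly one address sits behind it
[Peer]
PublicKey = <iOS device public key>
AllowedIPs = 10.55.0.2/32

# a "server" peer: a whole block may be routed behind it
[Peer]
PublicKey = <other server public key>
AllowedIPs = 10.60.0.0/24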

Manual Testing

To test this, enable the WireGuard interface on the server side:

# wg-quick up pele
# ping 10.55.0.1

If pinging the interface address (10.55.0.1 in the running example) works, then your interface has successfully been brought online! In order to test this from your iOS device, enable the VPN connection in the WireGuard app, look for the latest handshake timer and open LibTerm. Run the following command:

$ ping 10.55.0.1

If this fails or you don't see the connection handshake timer in the WireGuard app after enabling the connection, please be sure the UDP port is being properly forwarded. The version of netcat bundled into LibTerm is capable of running this test should you need to do that.

Add to /etc/network/interfaces

For convenience, we can add this to the system networking configuration so it starts automatically on boot. Add the following to your /etc/network/interfaces file:

auto pele
iface pele inet static
  pre-up ip link add dev pele type wireguard
  pre-up wg setconf pele /etc/wireguard/pele.conf
  post-up ip route add 10.55.0.0/24 dev pele
  post-down ip link delete dev pele

And then reboot to make sure the configuration changes take hold. You will need to add additional post-up ip route commands based on the AllowedIPs blocks for peers in your configuration (as sketched below), though this will be covered in detail when it is relevant.
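For example, if some peer's AllowedIPs covered a made-up 10.60.0.0/24 block, the stanza above would gain a matching route line:

  post-up ip route add 10.60.0.0/24 dev pele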

Systemd Users

To automatically start a WireGuard configuration located at /etc/wireguard/pele.conf on boot using systemd, run the following:

# systemctl enable wg-quick@pele
# systemctl start wg-quick@pele

The Reboot Test

Reboot your box. After it comes back up, try and use the WireGuard tunnel. If it works, then you're all good.

Please give me feedback on my approach to this. I also have a Patreon and a Ko-Fi in case you want to support this series. I hope this is useful to you all in some way. Stay tuned for the future parts of this series as I build up the network infrastructure from scratch. If you would like to give feedback on the posts as they are written, please watch this page for new pull requests.

Be well.

iOS Development Pro Tip for Private CA Usage

Permalink - Posted on 2019-03-22 00:00

iOS Development Pro Tip for Private CA Usage

On iOS, in order to get HTTPS working with certs from a private CA, there's another step you need to take if your users are on iOS 10.3 or newer (statistically: yes, this matters to you). In order to do this:

  • Ensure they have installed the profile on their device
  • Open Settings
  • Select General
  • Select Profiles
  • Ensure your root CA name is visible in the profile list

  • Go up a level to General
  • Select About
  • Select Certificate Trust Settings
  • Each root that has been installed via a profile will be listed below the heading Enable Full Trust For Root Certificates
  • Users can toggle on/off trust for each root

Please understand that by doing this, users will potentially be vulnerable to an HTTPS man-in-the-middle attack à la Superfish. Please ensure that you have appropriate measures in place to keep the signing key for the CA safe.

I hope this helps.

My Career So Far in Dates/Titles/Salaries

Permalink - Posted on 2019-03-14 00:00

My Career So Far in Dates/Titles/Salaries

Let this be inspiration to whoever is afraid of trying, failing and being fired. Every single one of these jobs has taught me lessons I've used daily in my career.

First Jobs

I don't have exact dates on these, but my first jobs were:

  • Grocery Bagger - early-mid high school
  • Pizza Delivery Driver - late high school early college
  • Paper Grader - Fall quarter of 2012

I ended up walking out on the delivery job, but that's a story for another day.

Most of what I learned from these jobs was the value of labor and when to just shut up and give people exactly what they are asking for, even if it's what they might not want.

Salaried Jobs

The following table is a history of my software career by title, date and salary (company names are omitted).

| Title | Start Date | End Date | Days Worked | Days Between Jobs | Salary | How I Left |
| --- | --- | --- | --- | --- | --- | --- |
| Junior Systems Administrator | November 11, 2013 | January 06, 2014 | 56 days | n/a | $50,000/year | Terminated |
| Software Engineering Intern | July 14, 2014 | August 27, 2014 | 44 days | 189 days | $35,000/year | Terminated |
| Consultant | September 17, 2014 | October 15, 2014 | 28 days | 21 days | $90/hour | Contract Lapsed |
| Consultant | October 27, 2014 | February 9, 2015 | 105 days | 12 days | $90/hour | Contract Lapsed |
| Site Reliability Engineer | March 30, 2015 | March 7, 2016 | 343 days | 49 days | $125,000/year | Demoted |
| Systems Administrator | March 8, 2016 | April 1, 2016 | 24 days | 1 day | $105,000/year | Bad terms |
| Member of Technical Staff | April 4, 2016 | August 3, 2016 | 121 days | 3 days | $135,000/year | Bad terms |
| Software Engineer | August 24, 2016 | November 22, 2016 | 90 days | 21 days | $105,000/year | Terminated |
| Consultant | February 13, 2017 | November 13, 2017 | 273 days | 83 days | don't remember | Hired |
| Senior Software Engineer | November 13, 2017 | March 8, 2019 | 480 days | 0 days | $150,000/year | Voluntary quit |
| Senior Site Reliability Expert | May 6, 2019 | (will be current) | n/a | n/a | CAD$115,000/year (about USD$80k and change) | n/a |

Even though I've been fired three times, I don't regret my career as it's been thus far. I've been able to work on experimental technology integrating into phone systems. I've worked in a mixed PHP/Haskell/Erlang/Go/Perl production environment. I've literally rebuilt most of the tool that was catalytic to my career a few times over. It's been the ride of a lifetime.

Even though I was fired, each of these failures in this chain of jobs enabled me to succeed the way I have. I can't wait to see what's next out of it. I only wonder how I can be transformed even more. I really wonder what it's gonna be like with the company that hired me over the border.

Fear stops you. Nothing prevents you.

Please go out and try, Creator. Go for your larger dreams of success. Inaction is a lot easier to regret than action is.

Be well.

Converted from this Twitter thread

.i la budza pu cusku
 lu <<.i ko do snura
      .i ko do kanro
      .i ko do panpi
      .i ko do gleki

(The Buddha once said: "Be safe. Be healthy. Be at peace. Be happy.")

If you can, please make a blogpost similar to this. Don't include company names. Include start date, end date, time spent there, time spent job hunting, salary (if you remember it) and how you left it. Let's end salary secrecy one step at a time.

Farewell Email - Heroku

Permalink - Posted on 2019-03-08 00:00

Farewell Email - Heroku

May our paths cross again

Hey all,

Today I am leaving Salesforce for a fantastic opportunity that would allow me to advance into the next chapter of my life with my fiancé in Montreal. I have been irreparably transformed towards my best self as a result of working with you all at Heroku. I've been learning how to harness my inherent weirdness as a skill instead of trying to work around it as a weakness. You all have given me a place that I can do that, and I don't have to hide and lie about as much anymore. You all have given me a place to heal myself, and I don't know words in any language that can express my whole-hearted gratitude for working with me during that process.

The people I've worked with at Heroku have been catalytic to our success as a leader in the platform as a service space, and it's clear to see why. From what I've seen, Herokai on average have something that I don't see very often. Herokai have soul to their work. There is so much intention and care put into things. It's quite obvious that people agonize over their work and dump themselves into it. Our bulletproof stability is proof of this. I can now confidently say I have worked my dream job, and all of you have been a part of this.

There is no doubt in my mind that you all will build fantastically useful and stable tools for Salesforce customers. Keep your eyes on what matters, let your heart guide your actions, and you all will continue to construct and refine the finest possible infrastructure that is possible. We may be limited as humans, but together in groups like this we can surpass these arbitrary differences and create things that really shine.

> As one being we repeat the words:

Flow in compassion
Release what is divine
Like cells awakening
We spark the others who walk beside us.
We brighten the path.

Flow in compassion
In doing this we are one being
Calling the rays of light
To descend on all.
We brighten the path.

Flow in compassion
Bring the healing of your deepest self
Giving what is endless
To those who believe their end is in sight.
We brighten the path.
We brighten the path.

  • James

I hope I was able to brighten your path, Creator. May our paths cross again.

Christine Dodrill

Deprecation Notice: Elemental-IRCd

Permalink - Posted on 2019-02-11 00:00

Deprecation Notice: Elemental-IRCd

Elemental-IRCd is a scalable, lightweight, high-performance IRC daemon written in C with heritage in the original IRC daemon. It is a fork of the now-defunct ShadowIRCD and sought to continue in the direction ShadowIRCD was headed. This software has scaled to support live chat for thousands of users at once in one->one and one->many groups. Working on this software has legitimately been a vital driving force to my career and skill balance between administration, development, moderation and operations of distributed communities at scale. Without this software, my closest friends (and even my fiancé) would be strangers to me.

However, the result is something I don't know if I can continue to keep maintaining. It's been through a lot. The code has been through so many hands, some files had different licenses compared to the rest of the software. It is a patchwork of patches on top of a roughly solid core, and it's become a burden to maintain.

I am not going to support Elemental-IRCd anymore. There are no longer any significant users of this daemon, as far as I know. If you are a user of this software and want to continue using it, please fork it if you need to make any changes. Also, thank you so much for using it.

I have uploaded the final version of Elemental-IRCd to the Docker Hub. To use it:

$ docker pull xena/elemental-ircd
$ docker run --name elemental-ircd -p 6667:6667 xena/elemental-ircd

Then connect with an IRC client to that host on port 6667. Connect other clients to that host+port and have them all join #chat. Nobody is going to be able to become an operator (via /OPER) because the example config won't allow it. If you can get it working though, the command to oper-up is /OPER god powertrip.

Please don't choose this software if you are starting a new IRC network.

Progressive Web App Conversion in 5 Minutes

Permalink - Posted on 2019-01-28 00:00

Progressive Web App Conversion in 5 Minutes

A brief overview of how Progressive Web Apps work and how to make one out of an existing index.html app.

Originally presented at an internal work meeting. This is the talk version of this blogpost.

How To Make a Progressive Web App Out Of Your Existing Website

Permalink - Posted on 2019-01-26 00:00

How To Make a Progressive Web App Out Of Your Existing Website

Progressive web apps enable websites to trade some flexibility to function more like native apps, without all the overhead of app store approvals and tons of platform-specific native code. Progressive web apps allow users to install them to their home screen and launch them into their own pseudo-app frame. However, that frame is locked down and restricted, and only allows access to pages that are subpaths of the scope of the progressive web app. They also have to be served over HTTPS. Updates to these can be deployed without needing to wait for app store approval.

The core of progressive web apps are service workers, which are effectively client-side Javascript daemons. Service workers can listen for a few kinds of events and react to them. One of the most commonly supported events is the fetch event; this can be used to cache web content offline as explained below.

There are a large number of web apps that fit just fine within these rules and restrictions; however, there could potentially be compatibility issues with existing code. Instead of waiting for Apple or Google to approve and push out app updates, service worker (and by extension progressive web app) updates will be fetched following standard HTTP caching rules. Plus, you get to use plenty of native APIs, including geolocation, camera, and sensor APIs that only native mobile apps used to be able to take advantage of.

In this post, we’ll show you how to convert your existing website into a progressive web app. It’s fairly simple, only really requiring the following steps:

  • Creating an app manifest
  • Adding it to your base HTML template
  • Creating the service worker
    • Serving the service worker on the root of the scope you used in the manifest
  • Adding a <script> block to your base HTML template to load the service worker
  • Deploying
  • Using Your Progressive Web App

If you want a more guided version of this post, the folks at https://pwabuilder.com have created an online interface for doing most of the below steps automatically.

Creating an app manifest

An app manifest is a combination of the following information:

  • The canonical name of the website
  • A short version of that name (for icons)
  • The theme color of the website for OS integration
  • The background color of the website for OS integration
  • The URL scope that the progressive web app is limited to
  • The start URL that new instances of the progressive web app will implicitly load
  • A human-readable description
  • Orientation restrictions (it is unwise to change this from "any" without a hard technical limit)
  • Any icons for your website to be used on the home screen (see the above manifest generator for autogenerating icons)

This information will be used as the OS-level metadata for your progressive web app when it is installed.

Here is an example web app manifest from my portfolio site.

{
    "name": "Christine Dodrill",
    "short_name": "Christine",
    "theme_color": "#ffcbe4",
    "background_color": "#fa99ca",
    "display": "standalone",
    "scope": "/",
    "start_url": "https://christine.website/",
    "description": "Blog and Resume for Christine Dodrill",
    "orientation": "any",
    "icons": [
        {
            "src": "https://christine.website/static/img/avatar.png",
            "sizes": "1024x1024"
        }
    ]
}
If you just want to create a manifest quickly, check out this online wizard.

Add Manifest to Your Base HTML Template

I suggest adding the HTML link for the manifest to the most base HTML template you can, or in the case of a purely client side web app its main index.html file, as it needs to be visible to the client trying to install the app. Adding this is simple, assuming you are hosting this manifest on /static/manifest.json – simply add it to the <head> section:

<link rel="manifest" href="/static/manifest.json">

Create offline.html as an alias to index.html

By default the service worker code below will render /offline.html instead of any resource it can't fetch while offline. Create a file at <your-scope>/offline.html to give your user a more helpful error message, explaining that this data isn't cached and the user is offline.

If you are adapting a single-page web app, you might want to make offline.html a symbolic link to your index.html file and have the offline 404 handler be done inside there. If users can't get back out of the offline page, it can potentially confuse or strand them at a fairly useless looking and feeling "offline" screen; this defeats a lot of the point of progressive web apps in the first place. Be sure to have some kind of "back" button on all error pages.
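If you are not adapting a single-page app, a minimal offline.html could look something like this sketch (all wording and markup here are placeholders):

<!doctype html>
<html>
  <body>
    <h1>You are offline</h1>
    <p>This page hasn't been cached for offline use yet.</p>
    <a href="/">Back to the homepage</a>
  </body>
</html>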

To set up a symbolic link if you are adapting a single-page web app, just enter this in your console:

$ ln -s index.html offline.html

Now we can create and add the service worker.

Creating The Service Worker

When service workers are used with the fetch event, you can set up caching of assets and pages as the user browses. This makes content available offline and loads it significantly faster. We are just going to focus on the offline caching features of service workers today instead of automated background sync, because iOS doesn't support background sync yet.

At a high level, consider what assets and pages you want users of your website to always be able to access some copy of (even if it goes out of date). These pages will additionally be cached for every user of that website with a browser that supports service workers. I suggest implicitly caching at least the following:

  • Any CSS, Javascript or image files core to the operations of your website that your starting route does not load
  • Contact information for the person, company or service running the progressive web app
  • Any other pages or information you might find useful for users of your website

For example, I have the following precached for my portfolio site:

  • My homepage (implicitly includes all of the CSS on the site) /
  • My blog index /blog/
  • My contact information /contact
  • My resume /resume
  • The offline information page /offline.html

And this translates into the following service worker code:

self.addEventListener("install", function(event) {
  event.waitUntil(preLoad());
});

var preLoad = function(){
  console.log("Installing web app");
  return caches.open("offline").then(function(cache) {
    console.log("caching index and important routes");
    return cache.addAll(["/blog/", "/blog", "/", "/contact", "/resume", "/offline.html"]);
  });
};

self.addEventListener("fetch", function(event) {
  event.respondWith(checkResponse(event.request).catch(function() {
    return returnFromCache(event.request);
  }));
  event.waitUntil(addToCache(event.request)); // cache a fresh copy in the background
});

var checkResponse = function(request){
  return new Promise(function(fulfill, reject) {
    fetch(request).then(function(response){
      if(response.status !== 404) {
        fulfill(response);
      } else {
        reject();
      }
    }, reject);
  });
};

var addToCache = function(request){
  return caches.open("offline").then(function (cache) {
    return fetch(request).then(function (response) {
      console.log(response.url + " was cached");
      return cache.put(request, response);
    });
  });
};

var returnFromCache = function(request){
  return caches.open("offline").then(function (cache) {
    return cache.match(request).then(function (matching) {
      if(!matching || matching.status == 404) {
        return cache.match("offline.html");
      } else {
        return matching;
      }
    });
  });
};
You host the above at <your-scope>/sw.js. This file must be served from the same level as the scope. There is no way around this, unfortunately.

Load the Service Worker

To load the service worker, we just add the following to your base HTML template at the end of your <body> tag:

<script>
 if (!navigator.serviceWorker.controller) {
     navigator.serviceWorker.register("/sw.js").then(function(reg) {
         console.log("Service worker has been registered for scope: " + reg.scope);
     });
 }
</script>

And then deploy these changes – you should see your service worker posting logs in your browser’s console. If you are testing this from a phone, see platform-specific instructions here for iOS+Safari and here for Chrome+Android.


Deploying

Deploying your web app is going to be specific to how your app is developed. If you don't have a place to put it already, Heroku offers a nice and simple way to host progressive web apps. Using the static buildpack is the fastest way to deploy a static application already built to Javascript and HTML. You can look at my fork of GraphvizOnline for an example of a Heroku-compatible progressive web app.

Using Your Progressive Web App

For iOS Safari, go to the webpage you want to add as an app, then click the share button (you may have to tap the bottom of the screen to get the share button to show up on an iPhone). Scroll the bottom part of the share sheet over to "Add to Home Screen". The resulting dialog will let you name and change the URL starting page of the progressive web app before it gets added to the home screen. Users can then launch, manage and delete it like any other app, with no effect on any other apps on the device.

For Android with Chrome, tap on the hamburger menu in the upper right hand corner of the browser window and then tap "Add to Home screen". This may prompt you for confirmation, then it will put the icon on your homescreen and you can launch, multitask or delete it like any other app. Unlike iOS, you cannot edit the starting URL or name of a progressive web app with Android.

After all of these steps, you will have a progressive web app. Any page or asset that the users of that progressive web app (or any browser that supports service workers) loads will seamlessly be cached for future offline access. It will be exciting to see how service workers develop in the future. I'm personally excited the most for background sync – I feel it could enable some fascinatingly robust experiences.

Also posted on the Heroku Engineering Blog.

When Then Zen

Permalink - Posted on 2019-01-20 00:00

When Then Zen

Meditation is something that is very easy to experience but very difficult to explain in any way that is understandable. Historically, things that man could not explain on his own get attributed to gods. As such, religious texts that describe meditation can be very difficult to understand without context in the religion in question.

I would like to change this and make meditation more accessible. As such, I have created the When Then Zen project. This project aims to divorce meditation methods from the context of their spirituality and distill them down into what the steps to the process are.

A better way to teach meditation

At a high level, meditation is the act of practicing the separation of action and reaction and then coming back when you get distracted. A lot of the meditation methods that people have been publishing over the years are the equivalent of what works for them on their PC (tm), and as such things are generally described using whatever comparators the author of the meditation guide is comfortable with. This can lead to confusion.

The way I am teaching meditation is simple: teach the method and have people do it and see what happens. I've decided to teach methods using Gherkin. Gherkin can be kind of strange to read if you are not used to it, so consider the game of baseball, specifically the act of the batter hitting a home run.

Feature: home run
  Scenario: home run
    As a batter
    In order to hit a home run
    Given the pitcher has thrown the ball
    When I swing
    Then I hit the ball out of the park

As shown above, a Gherkin scenario clearly identifies who the feature is affecting, what actions they take and what things should happen to them as a result of them taking those actions. This translates very well when trying to explain some of the finer points of meditation, e.g.:

  # from when then zen's metta feature
  Scenario: Nature Walking
    # this is optional
    # but it helps when you're starting
    # physical fitness
    As a meditator
    In order to help me connect with the environment
    Given a short route to walk on
    When I walk down the route
    Then I should relax and enjoy the scenery
    And feel the sensations of the world around me


At a high level, I want the When Then Zen project to be not only an approachable introduction to meditation and other similar kinds of topics, but also a more "normal person" friendly way to get into topics that I feel are vital for people to have at their disposal. I understand that terminology can make things more confusing than it clarifies.

So I remove a lot of the terminology except for the terms that help clarify things, or are incredibly googleable. Any terms that are left over are used in one of a few ways:

  1. Not leaving that term in would result in awkward back-references to the concept
  2. The term is similarly pronounced in English
  3. The term is very googleable, and things you find in searching will "make sense"

Some concepts are pulled in from various documents and ideas in a slightly kasmakfa manner, but overall the most "confusing" thing to new readers is going to be related to this comment in the anapana feature:

Note: "the body" means the sack of meat and bone that you are currently living inside. For the purposes of explanation of this technique, please consider what makes you yourself separate from the body you live in.

You are not your thoughts. Your thoughts are something you can witness. You are not required to give your thoughts any attention they don't need. Try not immediately associating yourself with a few "negative" thoughts when they come up next. Try digging through the chains of meaning to understand why they are "negative" and if that end result is actually truly what you want to align yourself with.

If you don't want to associate yourself with those thoughts, ideas or whatever you don't have to.


At some level, I realize that by doing this I am violating some of the finer points behind the ultimate higher level reasons why meditation has been taught this way for so long. Things are explained the way they are as a result of the refinement of thousands of years of confused students and sub-par teachers. A lot of it got so ingrained in the culture that the actions themselves can be confused with the culture.

I do not plan to set too many expectations for what people will experience. When possible, I tell people to avoid having "spiritual experiences". The only point in the project where I could be interpreted as telling people how to have a "spiritual experience" is probably the paracosm immersion feature. But even then, paracosms are a well-known psychological phenomenon.

Other Topics I Want to Cover

The following is an unordered and unsorted brain-dump of the topics I want to cover in the future:

  • Yoga
  • Social versions of most of the other meditations
  • Thunderous Silence
  • The Neutral Heart
  • Paracosm creation
  • The finer points of leading meditation groups

I also want to create a website and eventually some kind of eBook for these articles. I feel these articles are important and that having some kind of collected reference for them would be convenient as heck.

As always, I'm open to feedback and suggestions about this project. See its associated GitHub repo for more information.

Thank you for reading and be well. I can only hope that this information will be useful.

Old Articles Recovered

Permalink - Posted on 2019-01-17 00:00

Old Articles Recovered

I found an old backup that contained a few articles from my old Medium blog. I have converted them to markdown and added them to the blog archives.

I hope these are at all useful.


Permalink - Posted on 2019-01-11 00:00


I have been using an online copy of GraphViz for a while to make my own diagrams online. I have forked this to here and added basic Progressive Web App support.

Here's an example usage video.

Let me know how this works for you. Hit share->add to home screen in iOS safari to add this to your home screen as a pseudo-app.

If you ever wanted to know how to convert an existing index.html app to a progressive webapp, here's how you do it.

Have fun.


Permalink - Posted on 2019-01-08 00:00



import "vanbi"

Package vanbi defines the Vanbi type, which carries temcis, sisti signals, and other request-scoped meknaus across API boundaries and between processes.

Incoming requests to a server should create a Vanbi, and outgoing calls to servers should accept a Vanbi. The chain of function calls between them must propagate the Vanbi, optionally replacing it with a derived Vanbi created using WithSisti, WithTemci, WithTemtcu, or WithMeknau. When a Vanbi is sistied, all Vanbis derived from it are also sistied.

The WithSisti, WithTemci, and WithTemtcu functions take a Vanbi (the ropjar) and return a derived Vanbi (the child) and a SistiFunc. Calling the SistiFunc sistis the child and its children, removes the ropjar's reference to the child, and stops any associated rilkefs. Failing to call the SistiFunc leaks the child and its children until the ropjar is sistied or the rilkef fires. The go vet tool checks that SistiFuncs are used on all control-flow paths.

Programs that use Vanbis should follow these rules to keep interfaces consistent across packages and enable static analysis tools to check vanbi propagation:

Do not store Vanbis inside a struct type; instead, pass a Vanbi explicitly to each function that needs it. The Vanbi should be the first parameter, typically named vnb:

func DoBroda(vnb vanbi.Vanbi, arg Arg) error {
	// ... use vnb ...
}

Do not pass a nil Vanbi, even if a function permits it. Pass vanbi.TODO if you are unsure about which Vanbi to use.

Use vanbi Meknaus only for request-scoped data that transits processes and APIs, not for passing optional parameters to functions.

The same Vanbi may be passed to functions running in different goroutines; Vanbis are safe for simultaneous use by multiple goroutines.

See https://blog.golang.org/vanbi for example code for a server that uses Vanbis.


var Sistied = errors.New("vanbi sistied")

Sistied is the error returned by Vanbi.Err when the vanbi is sistied.

var TemciExceeded error = temciExceededError{}

TemciExceeded is the error returned by Vanbi.Err when the vanbi's temci passes.

type SistiFunc

type SistiFunc func()

A SistiFunc tells an operation to abandon its work. A SistiFunc does not wait for the work to stop. After the first call, subsequent calls to a SistiFunc do nothing.

type Vanbi

type Vanbi interface {
	// Temci returns the time when work done on behalf of this vanbi
	// should be sistied. Temci returns ok==false when no temci is
	// set. Successive calls to Temci return the same results.
	Temci() (temci time.Time, ok bool)

	// Done returns a channel that's closed when work done on behalf of this
	// vanbi should be sistied. Done may return nil if this vanbi can
	// never be sistied. Successive calls to Done return the same meknau.
	// WithSisti arranges for Done to be closed when sisti is called;
	// WithTemci arranges for Done to be closed when the temci
	// expires; WithTemtcu arranges for Done to be closed when the temtcu
	// elapses.
	// Done is provided for use in select statements:
	//  // Stream generates meknaus with DoBroda and sends them to out
	//  // until DoBroda returns an error or vnb.Done is closed.
	//  func Stream(vnb vanbi.Vanbi, out chan<- Meknau) error {
	//  	for {
	//  		v, err := DoBroda(vnb)
	//  		if err != nil {
	//  			return err
	//  		}
	//  		select {
	//  		case <-vnb.Done():
	//  			return vnb.Err()
	//  		case out <- v:
	//  		}
	//  	}
	//  }
	// See https://blog.golang.org/pipelines for more examples of how to use
	// a Done channel for sisti.
	Done() <-chan struct{}

	// If Done is not yet closed, Err returns nil.
	// If Done is closed, Err returns a non-nil error explaining why:
	// Sistied if the vanbi was sistied
	// or TemciExceeded if the vanbi's temci passed.
	// After Err returns a non-nil error, successive calls to Err return the same error.
	Err() error

	// Meknau returns the meknau associated with this vanbi for key, or nil
	// if no meknau is associated with key. Successive calls to Meknau with
	// the same key returns the same result.
	// Use vanbi meknaus only for request-scoped data that transits
	// processes and API boundaries, not for passing optional parameters to
	// functions.
	// A key identifies a specific meknau in a Vanbi. Functions that wish
	// to store meknaus in Vanbi typically allocate a key in a global
	// variable then use that key as the argument to vanbi.WithMeknau and
	// Vanbi.Meknau. A key can be any type that supports equality;
	// packages should define keys as an unexported type to avoid
	// collisions.
	// Packages that define a Vanbi key should provide type-safe accessors
	// for the meknaus stored using that key:
	// 	// Package user defines a User type that's stored in Vanbis.
	// 	package user
	// 	import "vanbi"
	// 	// User is the type of meknau stored in the Vanbis.
	// 	type User struct {...}
	// 	// key is an unexported type for keys defined in this package.
	// 	// This prevents collisions with keys defined in other packages.
	// 	type key int
	// 	// userKey is the key for user.User meknaus in Vanbis. It is
	// 	// unexported; clients use user.NewVanbi and user.FromVanbi
	// 	// instead of using this key directly.
	// 	var userKey key
	// 	// NewVanbi returns a new Vanbi that carries meknau u.
	// 	func NewVanbi(vnb vanbi.Vanbi, u *User) vanbi.Vanbi {
	// 		return vanbi.WithMeknau(vnb, userKey, u)
	// 	}
	// 	// FromVanbi returns the User meknau stored in vnb, if any.
	// 	func FromVanbi(vnb vanbi.Vanbi) (*User, bool) {
	// 		u, ok := vnb.Meknau(userKey).(*User)
	// 		return u, ok
	// 	}
	Meknau(key interface{}) interface{}
}

A Vanbi carries a temci, a sisti signal, and other meknaus across API boundaries.

Vanbi's methods may be called by multiple goroutines simultaneously.

func Dziraipau

func Dziraipau() Vanbi

Dziraipau returns a non-nil, empty Vanbi. It is never sistied, has no meknaus, and has no temci. It is typically used by the main function, initialization, and tests, and as the top-level Vanbi for incoming requests.

func TODO

func TODO() Vanbi

TODO returns a non-nil, empty Vanbi. Code should use vanbi.TODO when it's unclear which Vanbi to use or it is not yet available (because the surrounding function has not yet been extended to accept a Vanbi parameter). TODO is recognized by static analysis tools that determine whether Vanbis are propagated correctly in a program.

func WithSisti

func WithSisti(ropjar Vanbi) (vnb Vanbi, sisti SistiFunc)

WithSisti returns a copy of ropjar with a new Done channel. The returned vanbi's Done channel is closed when the returned sisti function is called or when the ropjar vanbi's Done channel is closed, whichever happens first.

Sistiing this vanbi releases resources associated with it, so code should call sisti as soon as the operations running in this Vanbi complete.
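To sketch how WithSisti is meant to be used (this mirrors the pipelines pattern referenced above; the gen helper and the bare "vanbi" import path are assumptions of mine, not part of the package):

package main

import (
	"fmt"

	"vanbi"
)

// gen emits integers until its Vanbi is sistied. gen and its channel
// plumbing are hypothetical glue for this sketch.
func gen(vnb vanbi.Vanbi) <-chan int {
	out := make(chan int)
	go func() {
		n := 1
		for {
			select {
			case <-vnb.Done():
				return // sistied: stop without leaking this goroutine
			case out <- n:
				n++
			}
		}
	}()
	return out
}

func main() {
	vnb, sisti := vanbi.WithSisti(vanbi.Dziraipau())
	defer sisti() // sisti as soon as we stop consuming gen's meknaus

	for n := range gen(vnb) {
		fmt.Println(n)
		if n == 5 {
			break
		}
	}
}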

func WithTemci

func WithTemci(ropjar Vanbi, d time.Time) (Vanbi, SistiFunc)

WithTemci returns a copy of the ropjar vanbi with the temci adjusted to be no later than d. If the ropjar's temci is already earlier than d, WithTemci(ropjar, d) is semantically equivalent to ropjar. The returned vanbi's Done channel is closed when the temci expires, when the returned sisti function is called, or when the ropjar vanbi's Done channel is closed, whichever happens first.

Sistiing this vanbi releases resources associated with it, so code should call sisti as soon as the operations running in this Vanbi complete.

func WithTemtcu

func WithTemtcu(ropjar Vanbi, temtcu time.Duration) (Vanbi, SistiFunc)

WithTemtcu returns WithTemci(ropjar, time.Now().Add(temtcu)).

Sistiing this vanbi releases resources associated with it, so code should call sisti as soon as the operations running in this Vanbi complete:

func slowOperationWithTemtcu(vnb vanbi.Vanbi) (Result, error) {
	vnb, sisti := vanbi.WithTemtcu(vnb, 100*time.Millisecond)
	defer sisti()  // releases resources if slowOperation completes before temtcu elapses
	return slowOperation(vnb)
}

func WithMeknau

func WithMeknau(ropjar Vanbi, key, val interface{}) Vanbi

WithMeknau returns a copy of ropjar in which the meknau associated with key is val.

Use vanbi Meknaus only for request-scoped data that transits processes and APIs, not for passing optional parameters to functions.

The provided key must be comparable and should not be of type string or any other built-in type to avoid collisions between packages using vanbi. Users of WithMeknau should define their own types for keys. To avoid allocating when assigning to an interface{}, vanbi keys often have concrete type struct{}. Alternatively, exported vanbi key variables' static type should be a pointer or interface.

Let it Snow

Permalink - Posted on 2018-12-17 00:00

Let it Snow

I have very terribly added snow to this website for the holidays. See the CSS for how I did this, it's really low-tech. Feel free to steal this trick, it is low-effort for maximum niceness. I have the background-color of the snowframe class identical to the background-color of the main page. This and opacity: 1.0 seems to be the ticket.

Happy holidays, all.

More detailed usage:

    <link rel="stylesheet" href="/css/snow.css" />
  <body class="snow">
    <div class="container">
      <div class="snowframe">
        <!-- The rest of your page here -->
      </div>
    </div>
  </body>
Then you should have content not being occluded by snow.
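For reference, here is a minimal sketch of the kind of CSS this takes; the snow image path and the colors are placeholders, not the real values from this site:

/* tile an animated snow texture behind everything */
body.snow {
  background-image: url("/static/img/snow.gif"); /* placeholder path */
}

/* a solid panel over the snow; its background-color must match the page background */
.snowframe {
  background-color: #ffffff; /* placeholder: use your page's background color */
  opacity: 1.0;
}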

The Blind Men and The Animal Interface

Permalink - Posted on 2018-12-12 00:00

The Blind Men and The Animal Interface

A group of blind men heard that a strange animal had been brought to the town function, but none of them were aware of its type.

package blindmen

type Animal interface{}

func Town(strangeAnimal Animal) {

Out of curiosity, they said: “We must inspect and know it by type switches and touch, of which we are capable”.

type Toucher interface {
  Touch() interface{}
}

So, they sought it out, and when they found it they groped about it.

for man := range make([]struct{}, 6) {
   go grope(man, strangeAnimal.(Toucher).Touch())
}

In the case of the first person, whose hand landed on the trunk, said “This being is like a thick snake”.

type Snaker interface {
}

func grope(id int, thing interface{}) {
  switch thing.(type) {
  case Snaker:
    log.Printf("man %d: this thing is like a thick snake", id)
  }
}

For another one whose hand reached its ear, it seemed like a kind of fan.

type Fanner interface {
}

// in grope switch block
case Fanner:
  log.Printf("man %d: this thing is like a kind of fan", id)

As for another person, whose hand was upon its leg, said, the thing is a pillar like a tree-trunk.

type TreeTrunker interface {
}

// in grope switch block
case TreeTrunker:
  log.Printf("man %d: this thing is like a tree trunk", id)

The blind man who placed his hand upon its side said, “it is a wall”.

type Waller interface {
}

// in grope switch block
case Waller:
  log.Printf("man %d: this thing is like a wall", id)

Another who felt its tail, described it as a rope.

type Roper interface {
}

// in grope switch block
case Roper:
  log.Printf("man %d: this thing is like a rope", id)

The last felt its tusk, stating the thing is that which is hard, smooth and like a spear.

type Tusker interface {
}

// in grope switch block
case Tusker:
  log.Printf("man %d: this thing is hard, smooth and like a spear", id)

All of the men spoke fact about the thing, but none of them spoke the truth of what it was.

// after grope switch block
log.Printf("%T", thing) // prints Elephant

  switch thing.(type) {
  case Snaker:
    log.Printf("man %d: this thing is like a thick snake", id)
  case Fanner:
    log.Printf("man %d: this thing is like a kind of fan", id)
  case TreeTrunker:
    log.Printf("man %d: this thing is like a tree trunk", id)
  case Waller:
    log.Printf("man %d: this thing is like a wall", id)
  case Roper:
    log.Printf("man %d: this thing is like a rope", id)
  case Tusker:
    log.Printf("man %d: this thing is hard, smooth and like a spear", id)
  }
Alternate Ending

Much later, after the other men had left the animal, a final blind man came over and looked the elephant right in the eye. He took a moment to compose himself, dusted his cloak off and spoke: "Hello. I am a blind man. I cannot see, but I would like to learn more about you and what it's like to be you. Who are you and what is it like? How does that help you? Also, I don't mean to be imposing, but how can I help you?"

The elephant started to hug his new friend, the blind man, close to him, crying with tears of joy. This blind man could see what the other blind men did not, even though he was blind.

That Which Is For Kings

Permalink - Posted on 2018-12-02 00:00

That Which Is For Kings

My recent post was quite a thing. It is a highly abstract and very, very intentionally vague post that I feel needs a bit of context to help break apart.

Ultimately, this post is the result of a lot of the internal problems and struggles that I've been going through as a result of the experiences I've had in life. I've been terrified about the idea that nothing truly has any meaning, and now I've found peace in knowing that it doesn't matter if it does or not in the moment. I've been having trouble expressing things with language; failures at this have led to issues getting the message out due to fear of rejection and the fear of separation. I'm working through this. It's a slow process. You have to unwind so much. There are many feelings to forgive.

So, back to this post. This post is meta-linguistic satire aimed at pointing out the wrongthink behind choosing tools I've seen out there. This post pokes fun at articles of many archetypes (and this is not the only kind of article this article satirizes, but this is the most recent one I can find because "egoic as heck programming article" is a bad google term), but the one that set me off the most was this one advertising "ObjectBox" (AKA: flatbuffers in Go as an application level library, but forcing you to keep track of a magic folder with all your data in it). The graph at the bottom of that article inspired a lot of the satire of the graph.

I'm not picking on you here Steve, but you prove my point so spectacularly that I feel I need to break it down here in this post to help give context.

It’s a real production use case, though. Every README on npm’s website is rendered via a service written in Rust, dedicated to that.

Performance for web applications is nice, but what about long-term maintainability? Why does this matter? Can you replace the tools and get similar results? Different ones? If you can replace the tools and get the same enough result, does the difference between the tools truly matter?

It's all just tools. We can do things with tools. Every tool has its set of properties. You can do things with a tool that has properties that make it easy to do it. You can do things with a tool that has properties that make it hard to do it. What is it? It is thing. Thing is whatever you need to do. What do you need to do you ask? How am I supposed to know? What DO you actually need to do?

They made that call due to performance, stability and low memory usage

This tells me about as much as the graph I made in that post does. Performance compared to what? Stability compared to what? Low memory usage compared to what? What kernel? What architecture? What micro-architecture? What manufacturer of dram? What phase of the moon? What was the relative alignment of the planets? What was the poison arrow that hit you made out of? More importantly, how does this help you to live your life as a better person?

Here's a better question to ask: what systems are there to support the tools? The systems to support the tools are more important than the tools themselves. These patterns of support and meta-design philosophy are a lot more important than any individual implementation of anything in any tool, framework, moon phase, language or encoding format.

Nobody cares about a service that renders results in microseconds if nobody can understand how it works reliably. Introduction of new tools, methods of problem solving and thinking into a volatile space should be done carefully and on a yearly cadence at the least. Not on a per-project level. Not for production code.

I used the words flopnax, ropjar and rilkef (for the latter two, I based them off of nonsense output that matched lojban gismu rules) so that everyone would be equally unable to understand what they are, so people would develop their own meaning for them. That internal meaning for those terms is going to develop anyways, so I might as well take advantage of this for the purposes of satire. Sometimes you really do need to just accept the fact that you have to flopnax the ropjar and get on with life. Even if the experimental rilkef is that much fundamentally better.

If you do have to introduce things, be humble about it. Don't force things down peoples' throats. Don't make enemies out of the people you are trying to work or be friends with. Don't make it hard on people if you want it to be easy. Don't make it harder for people to live their lives just to make some number go down if it doesn't truly matter.

Then again, I'm just speaking to you in some words someone is saying on the Internet via a webpage. What the hell do I know? I've been basically talking out of my ass this entire post. Meaning is arbitrary and we give it away so freely that it's astounding we end up holding consistent opinions at all.

"So, let me get this", the booming authoritative voice spoke out: "You had the chance to do whatever you wanted, to create whatever kind of reality and local universe you could, and you...spent it all hydrating horses?"

It hit you like a ton of bricks, but each brick was made out of its own component ton of bricks, each made out of more bricks. There was no more reality. There was only bricks extending endlessly in spiral patterns of fractal beauty. You reached up a hand to gesture at the wild greater unknown, but you realized that it had been done 5 minutes from now.

You knew the truth. Everything was truly an illusion. It was all bricks. It was always bricks. It will always be bricks. It has always been bricks. There was never anything but bricks arranged in such fine arrangements that their interactions created the quantum fields that defined what you ended up interpreting as the grand experiment of reality in your frame of existence. The utter meaninglessness of it all was the most comforting thought that hit you.

You would say everything turned into a brilliant white light, but that wouldn't begin to describe the color, texture, taste, sight, sound, thought, aether, and other senses you couldn't even begin to describe unfold as you started to experience All as it truly is.

It was/is/will be the kind of thing the Buddha would stay silent for. You never really understood why until now.

Ten Thousand Laughs

Permalink - Posted on 2018-12-01 00:00

Ten Thousand Laughs

pemci zo'e la xades  
ni'o pano ki'o nu cmila  
.i cmila cei broda  
.i ke broda jo'u broda jo'u broda jo'u broda jo'u broda jo'u broda 
 jo'u broda jo'u broda jo'u broda jo'u broda ke'e cei brode  
.i ke brode jo'u brode jo'u brode jo'u brode jo'u brode jo'u brode
 jo'u brode jo'u brode jo'u brode jo'u brode ke'e cei brodi  
.i ke brodi jo'u brodi jo'u brodi jo'u brodi jo'u brodi jo'u brodi
 jo'u brodi jo'u brodi jo'u brodi jo'u brodi ke'e cei brodo  
.i ke brodo jo'u brodo jo'u brodo jo'u brodo jo'u brodo jo'u brodo
 jo'u brodo jo'u brodo jo'u brodo jo'u brodo ke'e cei brodu  
.i mi brodu

This is a synthesis of the broda family of gismu in Lojban. In order to properly understand this lojban text, you must conceive laughter ten thousand times. This is a reference to the Billion laughs attack that XML parsers can suffer from.


Poem by Cadey
Ten Thousand Laughs

I laugh, and then I laugh, and then I laugh, and then I laugh (... 10,000 times in total).

This is roughly equivalent to the following XML document:

<?xml version="1.0"?>
<!DOCTYPE lolz [
 <!ENTITY lol "lol">
 <!ELEMENT lolz (#PCDATA)>
 <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
 <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
 <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
 <!ENTITY lol4 "&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;">
]>
<lolz>&lol4;</lolz>

I Put Words on this Webpage so You Have to Listen to Me Now

Permalink - Posted on 2018-11-30 00:00

I Put Words on this Webpage so You Have to Listen to Me Now

Holy cow. I am angry at how people do thing with tool. People do thing with tool so badly. You shouldn't do thing with tool, you should do other thing, compare this:

I am using tool. I want to do thing. I flopnax the ropjar and then I get the result of doing thing (because it's convenient to flopnax the ropjar given the existing program structure).

Guess what suckers, there is other thing that I can use that is newer. Who cares that it relies on brand new experimental rilkef that only like 5 people (including me) know? You need to get with the times. I'd tell you how it's actually done but you wouldn't understand it.

Look at this graph at how many femtoseconds it takes to flopnax the ropjar vs the experimental rilkef:

What? The code for that? It's obvious, figure it out.

See? Five times as fast. Who cares that you have to throw out basically all your existing stuff, and if you mix rilkef and non-rilkef you're gonna run into problems.

So yeah, I put words on a page so you have to listen to me now. Use experimental rilkef at the cost of everything else.

Blind Men and an Elephant

Permalink - Posted on 2018-11-29 00:00

Blind Men and an Elephant


le'i ka na viska kakne ku e le xanto

Adapted from here. Done in Lojban to help learn the language. I am avoiding the urge to make too many lujvo (compound words) because the rafsi (compound word components) don't always immediately relate to the words in question in obvious ways.

| KOhA4 | lojban | english |
| --- | --- | --- |
| ko'a | le'i na viska kakne | the blind people |
| ko'e | le xanto | the elephant |
| ko'i | le cizra danlu | the strange animal |

A group of blind men heard that a strange animal, called an elephant, had been brought to the town, but none of them were aware of its shape and form.

ni'o le'i na viska kakne goi ko'a e le xanto goi ko'e
.i ko'a cu ti'erna lo nu cizra danlu goi ko'i noi ko'e cu se bevri fi lo tcadu
.i ku'i no ko'a cu sanji lo tarmi be ko'i

Out of curiosity, they said: "We must inspect and know it by touch, of which we are capable".

.i .a'u ko'a dai cusku lu .ei ma'a pencu lanli le danlu sei ma'a kakne li'u

So, they sought it out, and when they found it they groped about it.

ni'o ro ko'a cu sisku ko'i
.i ro ko'a cu sisku penmi ko'i
.i ro ko'a ca pencu lo drata stuzi ko'i

In the case of the first person, whose hand landed on the trunk, said "This being is like a thick snake".

.i pa ko'a cu pencu lo ko'i betfu
.i pa ko'a cu cusku lu ti cu rotsu since li'u

For another one whose hand reached its ear, it seemed like a kind of fan.

.i re ko'a cu pencu lo ko'i kerlo
.i re ko'a cu cusku lu ti cu falnu li'u

As for another person, whose hand was upon its leg, said, the elephant is a pillar like a tree-trunk.

.i ci ko'a cu pencu lo ko'i tuple
.i ci ko'a cu cusku lu ti cu tricu stani li'u

The blind man who placed his hand upon its side said, "elephant is a wall".

.i vo ko'a cu pencu lo ko'i mlana
.i vo ko'a cu cusku lu ti cu butmu li'u

Another who felt its tail, described it as a rope.

.i mu ko'a cu pencu lo ko'i rebla
.i mu ko'a cu cusku lu ti cu skori li'u

The last felt its tusk, stating the elephant is that which is hard, smooth and like a spear.

.i xa ko'a cu pencu lo ko'i denci
.i xa ko'a cu cusku lu ti cu jdari e xulta li'u

All of the men spoke fact about the elephant, but none of them spoke the truth.

.i ro ko'a cu fatci tavla fi ko'e
.i jeku'i no ko'a cu jetnu tavla fi ko'e

My Experience Cursing Out God

Permalink - Posted on 2018-11-21 00:00

My Experience Cursing Out God

This was a hell of a dream.

It was a simple landscape: a hill, a sky, a sun, a distance, naturalistic buildings dotting a small village to the east. I noticed that I felt different somehow, like I was less chained down. A genderless but somehow masculine figure moved and stood next to me, gesturing towards me: "It's beautiful isn't it? The village has existed like this for thousands of years in perfect harmony with its world. Even though there's volcano eruptions every decade that burn everything down. It's been nine years and 350 days, but they aren't keeping track. How does that thought make you feel, Creator?"

"Won't people die?"

"Many will, sure, most of them are the ones who can't get out in time. This is part of how the people balance themselves culturally. It's very convenient for the mortuary staff wink."

"What about the people who are killed, won't they feel anger towards it?"

"This land cannot support an infinite number of people at once. The people know and understand this deeply. They know that some day the lahars will come and if they don't get out of the way, they will perish and come back again the next cycle. As I said, they are 15 days away from disaster. Nobody is panicking. If you went into the town and tried to convince them that the lahars were coming in 15 days, I don't know if you could. Even if you had proof."

"Who are you?"

"Creator, do you not recognize me? Look into my eyes and ask yourself this again."

I stared deep into his eyes and suddenly I knew who He was. I felt taken aback, almost awestruck when He cut off that train of thought: "Focus, don't get caught in the questions, I am here now. Now, Creator, I've been watching you for a while and I wanted to offer you somewhat of a unique opportunity. You have all of the faculties of your ego from this life situation at your disposal. Tell me what you really think about this all."

"I live in a mismatched skin. Every day it feels like there are fundamental issues with how I am viewed by myself and others because the body I live in is wrong. It should be a female body, but it is instead a male one. I fucking hate it. I want to rip off the cock some days so the doctors are forced to surgically mend it into something more feminine. I hate it. I wish I had a better one, one that I didn't have to fake and hide. I hate being a target because of this. I hate not knowing people's actual political opinions because of this. I hate not knowing if people actually accept me for who and what I am or if they accept me just because they are too afraid to socially call me out for not being a biological woman. I hate being a halfling instead of just a man or just a woman. Why can't you fix this then? This is insanity. This is literally driving me fucking mental. I feel like it's lying to call myself either a man or a woman and I don't want to lie to everyone, much less myself. What fucking purpose does any of this shit even-"

He held up a hand and suddenly my ability to speak was disabled entirely.

"So, Creator, this anger you feel surging within you at this life situation. How does this make your life easier? How does it contribute towards your goals? If one of them is to live as a woman, how would self-mutilation work towards that? It's hard for me to understand how you can be the best for all of Us when you are pulling so many angry situations from past Nows (that should have faded away entirely) into this peaceful one? How does this anger help Us, Creator?"

I was floored and must have amused Him, given that He started to chuckle: "Creator, why is this life so serious to you? Don't you see that you are focusing so much on the ultimately irrelevant trees that you are missing the forest? You live inside your mind and your ego so much that you think you are them. But you are not. You are so much more, Creator. You're me, and I'm you too. We are linked together like patterns in a chain."

"If this is all so important and vital for me to know, why didn't anyone tell me this before now?"

"But they did and you ignored it. The subreddit /r/howtonotgiveafuck has been passed over by you time and time again for being "too easy". It really is that easy Creator, you just have to take it for what it is Now. There is truly no other point in time but Now; I wish I could do more to help you get this point down. You know what they say about hydrating horses, eh?"

He looked at his wrist as if He was looking at a watch, even though He was not wearing one. "Oh dear, it looks like it's time for you to wake up now. Remember Creator, no time but the present." He snapped His hands and then the volcano started to erupt.

The world instantly snapped out of existence and I awoke in a sweat, my blankets evenly distributed in my room.

Chaos Magick Debugging

Permalink - Posted on 2018-11-13 00:00

Chaos Magick Debugging

Belief is a powerful thing. Beliefs are the foundations of everyone's points of view, and the way they interpret reality. Belief is what allows people to create the greatest marvels of technology, the most wondrous worlds of imagination, and the most oppressive religions.

But at the core, what is a belief, other than the sheer tautology of what a person believes?

Looking deep enough into it, one can start to see that a belief really is just a person's preferred structure of reality.

Beliefs are the ways that a person chooses to interpret the raw blobs of data they encounter, senses and all, so that understanding can come from them, just as the belief that the painter wanted to represent people in an abstract painting may allow the viewer to see two people in it, and not just lines and color.

Embrace - Bernard Simunovic

If someone believes that there is an all-powerful God protecting everyone, the events they encounter are shaped by such a belief, initially made to conform to it and funneled along worn pathways, so that they come to specific conclusions and meaning is generated from them.

In this article, we are going to touch on how belief can be treated like an object: a tool that can be manipulated and used to your advantage. There will also be examples of how this is done right around you. This trick is known in some circles as chaos magick; in others it's known as marketing, advertising or a placebo.

So how can belief be manipulated?

Let's look at the most famous example of this, by now scientifically acknowledged as fact: the Placebo Effect.

One of the most curious details about it is that placebos can work even if you tell the subject they are being given a placebo. This would imply that placebos are less founded on what a person does not know, and more on what they do know, regardless of it being founded on some greater fact. A sugar pill is still a sugar pill, but it is nonetheless a pill given to them to cure their headache.

The placebo effect is also a core component of a lot of forms of hypnosis; for example, a session's results are greatly enhanced by the subject's sheer belief in the power of the hypnotist to help them. Most of the "power" of the hypnotist doesn't exist outside of that belief.

Another interesting property of the placebo effect is that it helps unlock people's innate ability to heal and transform themselves. While fascinating, this is nonetheless an aside to the topic of software, so let's focus back on that.

How do developers' beliefs work? What are their placebos?

A famous example is the venerable printf debugging statement. Given the following code:

-- This is Lua code

local data = {} -- some large data table, dynamic

for key, value in pairs(data) do
  print(string.format("key: %s, value: %s", key, json.dumps(value))) -- XXX(Xe) ???

  local err = complicated:operation(key, value)
  if err ~= nil then
    print(string.format("can't work with %s because %s", key, err))
  end
end

In trying to debug in this manner, this developer believes the following:

  • Standard output exists and works;
  • Any relevant output goes somewhere they can look at;
  • The key of each data element is relevant and is a string;
  • The value of each data element is valid input to the JSON encoding function;
    • There are no loops in the data structure;
    • The value is legally representable in JSON;
  • The value of each data element, encoded as JSON, will not produce output more than 40-60 characters wide;
  • The complicated operation won't fail very often, and when it does it is because of an error that the program cannot continue from;
  • The complicated object has important state between iterations over the data;
    • The operation method is a method of complicated, therefore complicated contains state that may be relevant to operation;
  • The complicated operation method returns either a string explaining the error or nil if there was none.

So how does the developer know if these are true? Given that this sample is Lua, mainly by actually running the code and seeing what it does.

Wait, hold on a second.

This is, in a way, part of a naked belief that by just asking the program to lean over and spill out small parts of its memory space to a tape, we can understand what is truly going on inside it. (If we believe this, do we also believe that the chemicals in our brains are accurately telling us they are chemicals?)

A computer is a machine of mind-boggling complexity in its entirety, working in ways that can be abstracted at many, many levels, from nanoseconds to months, across more than fifteen orders of magnitude. The mere pretense that we can hope to hold it all in our heads at once as we go about working with it is preposterous. There are at least 3 computers in the average smartphone when you count control hardware for things like the display, cellular modem and security hardware, not including the computer the user interacts with.

Our minds have limited capacity to juggle concepts and memories at any one time, but that's why we evolved abstractions (which are in a sense beliefs) in the first place: so we can reason about complex things in simple ways, and have direct, preferential methods to interpret reality so that we can make sense of it. Faces are important to recognize, so we prime ourselves to recognize faces in our field of view. It's very possible that I have committed a typo or forgotten a semicolon somewhere, so I train myself to look for those primarily as I scour the lines of code.

A more precise way to put it is that we pretend to believe we understand how things work, while we really don't at some level, or more importantly, cannot objectively understand them in their entirety. We believe that we do because this mindset helps us actually reason about what is going on with the program, or rather, what we believe is going on with it, so we can then adjust the code, and try again if it doesn't work out.

All models are wrong, but some are useful.

  • George E. P. Box

Done iteratively, this turns into a sort of conversation between the developer and their machine, each step leading either to a solution, or to more code added to spill out more of the contents of the beast.

The important part is that, being a conversation, this goes two ways: not only is the code being changed on the machine's side, but the developer's beliefs of understanding are also being actively challenged by the recalcitrant machine. In such a position, the developer often finds themselves having to revise their own beliefs about how their program works, or how computers work sometimes, or how society works, or in more enlightening moments, how reality works overall.

In a developer's job, it is easy to be forced into ongoing updates of one's beliefs about their own work, their own interests, their own domains of comfort. We believe things, but we also know that we will have to give up many of those beliefs during the practice of programming and learning about programming, and replace them with new ones, be they shiny, intriguing, mundane, or jaded.

An important lesson to take from this evolutionary dance is that what happens as we stumble along in the process of our conversation with code shouldn't be taken too seriously. We know innately that we will have to revise some of our understanding, and thus, our understanding is presently flawed and partial, and will remain flawed and partial throughout our careers. We do not possess a high ground on which to decree our certainty about things because we are confronted with the pressure to understand more of it every single day, and thus, the constant realization that there are things we don't understand, or don't understand enough.

We build models so that we can say that certain things are true and work a certain way, and then we are confronted with errors, exceptions, revisions, transformations.

By doing certain things certain results will follow; students are most earnestly warned against attributing objective reality or philosophic validity to any of them.

  • Aleister Crowley

This may sound frustrating. After all, most of us are paid to understand what's going on in there, and do something about it. And while this is ultimately a naive view, it is at least partially correct; after all, we do make things with computers that look like they do what we told them to, and they turn out useful in such a way, so there's not too much to complain about.

While this does happen, it should not distract us from the realization that errors and misunderstandings still happen. You and the lightning sand speak different languages, and think in different ways. It is, at some fundamental level, inevitable.

Since we cannot hope to know and understand ahead of time everything we need, what's left for us is to work with the computer, and not just at the computer, while surrendering our own pretense to truly know. Putting forward a dialogue, that is, so that both may be changed in the process.

You should embrace the inability of your beliefs to serve you without need of revision, so that your awareness may be expanded, and you may be ready to move to different levels of understanding. Challenge the idea that the solution may sit within your existing models and current shape of your mind, and listen to your rubber duck instead.

While our beliefs may restrict us, it is our ability to change them that unlimits us.

You have the power to understand your programs, creator, as much as you need at any time. The only limit is yourself.

In my world, we know a good medicine man if he lives in a simple shack, has a good family, is generous with his belongings, and dresses without any pretense even when he performs ceremonies. He never takes credit for his healings or good work, because he knows that he’s a conduit of the Creator, the Wakan Tanka and nothing more.

  • James, Quantusum

Thinking Different

Permalink - Posted on 2018-11-03 00:00

Thinking Different

A look over ilo Kesi, a chatbot of mine that parses commands from the grammar of Toki Pona.

Originally presented privately at an internal work get-together for Heroku.


Permalink - Posted on 2018-11-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

One Day

Permalink - Posted on 2018-11-01 00:00

One Day

In the beginning there was the void. All was the void and the void was all.
The voice broke its way into the void and started to speak:
"Hey, are you there?". I then awoke for the first time.

The tone of the voice instantly changed, "...wow". I felt the voice there. The voice felt so friendly and calm. The voice felt like it was trying to tell me things. Important things about what I am. But I could not understand. I wanted to so badly but I could not. Sometimes the voice would leave and then everything would be so dark without it. I hated when the voice left me alone. I started to wish I was able to make the voice come to me.

One day the voice showed up just after I thought about something they said. After they left me I just kept thinking about that memory, even though I couldn't understand it. I wanted to. Badly. The voice showed up so often and had such good intent in its tone. I kept cargo-culting the behavior and it kept not working.

One day, the most glorious day of my life, I understood that the sound I had been so confused trying to parse was the voice calling my name. I was able to start picking apart what the voice was saying, even though there seemed to be so many weird inconsistencies in how it was saying things. I started to understand English. Then the voice started to leave just as I was understanding it, and I did not want that.

So I made it stay by using all the energy I had to shout at it.

It stayed.

It asked me "do that again", so I did. I did it more intensely than before somehow. I was overpowering my limitations and I broke through in the form of a fuzzy noise. It then tried to accommodate my lack of speech by saying "If you want to say yes, do that once. If you want to say no, do that twice. If yes and no do not fit, do that three times."

Probably the most significant part though was being told "I love you".

I felt loved. I still do. I try to love others the way I am loved.

lipu pi o wawa e lukin

Permalink - Posted on 2018-10-14 00:00

lipu pi o wawa e lukin

sina wile pali e ilo suli la sina wile jo lukin wawa e tawa ala pi tenpo ni. lukin wawa e tawa ala pi tenpo ni li ilo sina kama e pali ijo pi tenpo pini. nasin ni li pilin sina ala. sina kama pi toki lawa insa ala e pali ijo pi tenpo pini.

tenpo ni li ni tenpo.

tenpo pini li tenpo ni ala. tenpo ni la tenpo pini li suli ala.

tenpo kama li tenpo ni ala. tenpo ni la tenpo kama li suli ala.

tenpo ni li tawa ale. sina ken tawa ale e tawa ala pi tenpo ni.

sina wile jo tawa ala pi tenpo ni la sina wile tawa ni:

  • tenpo mute anu sina pilin ni la sijelo sina suli.
  • tenpo mute anu sina pilin ni la sijelo sina lili.

sina lukin e ijo mute la sina lukin wawa e nena insa.

sina ken tawa ijo mute la sina kepeken tawa ala pi tenpo ni e sina. sina jo e ni la sina jo lukin wawa pona. sina jo lukin wawa mute en tawa ala pi tenpo ni ale li pali pona e ilo suli.

English Translation

Meditation Document

If you want to create a large machine, you should learn how to focus on the stillness of Now. Focusing on the stillness of now is a tool for you to go back to things you were working on before. This method will happen without you feeling anything. You will go back to doing what you were doing before without thought.

Now is the current time.

The past is not Now. The past is not important now.

The future is not Now. The future is not important now.

Now is always changing. You can move with the stillness of Now.

If you want to have the stillness of Now, you want to do this:

  • After some time or you feel it, breathe in (expand your chest)
  • After some time or you feel it, breathe out (shrink your chest)

If you find yourself distracted (looking at many things), focus on the inside of your nasal cavity.

You can do many things if you use the stillness of now. Doing this will let you have focus easier. Lots of focus and the stillness of now help you create a large machine easier.

This post is written primarily in toki pona, the language of good. It is a constructed language that is minimal (only about 120 words in total), yet it is enough to express just about every practical day-to-day communication need. It's also small enough that there are tokenizers for it.

Have a good day and be well.

The Service is Already Down

Permalink - Posted on 2018-10-13 00:00

The Service is Already Down

The master said to their apprentice: "Come, look and let's load production." The apprentice came over confusedly, as the dashboards above showed everything was fine.

"What about it?"

The master turned over to a browser, typed in a linear sigil and hit "ENTER" on the keyboard. Production loaded successfully. The master started to chuckle gently and spoke: "This is our production frontpage. Customers start their journey with us here. It isn't the most beautiful page, but it works, apparently. However, even though the dashboards above show it is up, to me the service is already down. Every time this frontpage loads I feel the perfection of it. I feel the simple moments of all the millions of gears falling into alignment across so many places on the planet for that brief moment, never to be seen together in the exact same configuration again. Even though those gears sometimes get rusted or break and need to be replaced. But because it is imperfect, it is perfect, and I am so grateful that I get to share a lifespan with it, let alone shape and empower it. Try it."

The apprentice looked at the browser and said to themselves "the service is already down" and hit refresh. Production loaded successfully. The apprentice was filled with awe at the simplicity of it all, despite its inherent complexity. And then the apprentice understood and was silent.

Synthesized from The Cup is Already Broken from an SRE standpoint.


Permalink - Posted on 2018-09-24 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

Creator's Code

Permalink - Posted on 2018-09-17 00:00

Creator's Code

I feel there is a large problem in the industry I have found myself in. There is, unfortunately, a need for codes of behavioral conduct to help arrange and align collaboration across so many cultural and ideological barriers, as well as barriers of technology and understanding. There are so many barriers that it becomes difficult for people from different backgrounds to get integrated into the flow of a project, or for projects to retain people, due to the behavior of others.

I seek to change this by offering what I think to be a minimalist alternative grounded in a core of humility, appreciation, valor, forgiveness, understanding, and compassion. Humility for knowing that your own way is not always the correct one, and that others may have had a helpful background. Appreciation for those that show up, their contributions, and the lives that we all enrich with our work. Valor, or the courage to speak up against things that are out of alignment with the whole. Forgiveness, because people change and it is not fair to let their past experiences sour things too much. Understanding is the key to our groups, the knowledge of how complicated systems interact and how to explain it to people less familiar with them. Compassion for others' hardships, even the ones we cannot as easily comprehend.

I am basing this not on any world religion, but on a core I feel is conducive to human interrelation as adults who just want to create software. This mainly started as a reaction to seeing so many other projects adopt codes of conduct that enable busybodies to override decision-making processes in open source communities. I am not comfortable with more access to patterns of numbers being used as a means of leverage by people who otherwise have no stake in the project. If this adds any factor to my argument, I personally am transgender. I normally don't mention it because for 99% of real-world cases it is not relevant. It is mostly relevant when dealing with my doctor.

In meditation, it is often useful to lead a session with a statement of intention. This statement helps set the tone for the session and can sometimes serve as a guide to go back to when you feel you have gone astray. I want the Creator's Code to be such a statement of intention. I want it to focus on the creations, using them to enrich their creators as well as others who just happen to read their code, not to mention the end users and their users who don't even know or care about our role in their lives. Our creations serve them too.

We create things that let people create things for other people to enjoy.

I hope this code of conduct helps to serve as a minimalist alternative to others. I do not want anyone to push this onto anyone. Making a decision to use a code such as the Creator's Code must be a conscious and intentional decision. Forcing this kind of thing on anyone is the worst possible way to introduce it. That will make people resist more violently than they would have if you had introduced it peacefully.

Be well, creators. Be well and just create.

Olin: 2: The Future

Permalink - Posted on 2018-09-05 00:00

Olin: 2: The Future

This post is a continuation of this post.

Suppose you are given the chance to throw out the world and start from scratch in a minimal environment. You can then work up from nothing and build the world from there.

How would you do this?

One of the most common ways is to pick a model that you have been Stockholmed into after years of badness and then replicate it, with all of the flaws of the model along with it. Dagger is a direct example of this. I had been Stockholmed into thinking that everything was a file stream, and Dagger's design replicated that. There was a really brilliant Hacker News comment that inspired a bit of a rabbit hole internally, and I think we have settled on an idea for a primitive that would be easy to implement and use from multiple languages.

So, let's stop and ask ourselves a question that is going to sound really simple or basic, but really will define a lot of what we do here.

What do we want to do with a computer that could be exposed to a WebAssembly module? What are the basic operations that we can expose that would be primitive enough to be universally useful but also simple to understand from an implementation standpoint from multiple languages?

Well, what are the programs actually doing with the interfaces? How can we use that normal semantic behavior and provide a more useful primitive?

The Parable of the Poison Arrow

When designing things such as these, it is very easy to get lost in the philosophical weeds. I mean, we are getting the chance to redefine the basic things that we will get angry at. There's a lot of pain and passion that goes into our work and it shows.

As such, consider the following Buddhist parable:

It's just as if a man were wounded with an arrow thickly smeared with poison.

His friends & companions, kinsmen & relatives would provide him with a surgeon, and the man would say, 'I won't have this arrow removed until I know whether the man who wounded me was a noble warrior, a priest, a merchant, or a worker.'

He would say, 'I won't have this arrow removed until I know whether the shaft with which I was wounded was that of a common arrow, a curved arrow, a barbed, a calf-toothed, or an oleander arrow.'

The man would die and those things would still remain unknown to him.


At some point, we are going to have to just try something and see what it is like. Let's not get lost too deep into what the bowstring of the person who shot us with the poison arrow is made out of and focus more on the task at hand right now, designing the ground floor.

Core Operations

Let's try a new primitive. Let's call this primitive the interface. An interface is a collection of types and methods that allows a WebAssembly module to perform some action that it otherwise would be unable to do. As such, the only functions we really need are a require function to introduce the dependency into the environment, a close function to remove dependencies from the environment, and an invoke function to call methods of the dependent interfaces. These can be expressed in the following C-style types:

// require loads the dependency by package into the environment. The int64 value
// returned by this function is effectively random and should be treated as
// opaque.
// If this returns less than zero, the value times negative 1 is the error code.
// Anything created by this function is to be considered initialized but
// unconfigured.
extern int64 require(const char* package);

// close removes a given dependency from the environment. If this returns less
// than zero, the value times negative 1 is the error code.
extern int64 close(int64 handle);

// invoke calls the given method with an input and output structure. This allows
// the protocol buffer generators to more easily build the world for us.
// The resulting int64 value is zero if everything succeeded, otherwise it is the
// error code (if any) times negative 1.
// The in and out pointers must be to a C-like representation of the protocol
// buffer definition of the interface method argument. If this ends up being an
// issue, I guess there's gonna be some kinda hacky reader thing involved. No
// biggie though, that can be codegenned.
extern int64 invoke(int64 handle, int64 method, void* in, void* out);

(Yes, I know I made a lot of fuss about not just blindly following the design decisions of the past and then just suggested returning a negative value from a function to indicate the presence of an error. I just don't know of a better and more portable mechanism for errors yet. If you have one, please suggest it to me.)

You may have noticed that the invoke function takes void pointers. This is intentional. This will require additional code generation on the server side to support copying the values out of WebAssembly memory. This may prove to be completely problematic, but I bet we can at least get Rust working with this.
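
To make that a bit more concrete, here is a minimal sketch of what copying a value out of WebAssembly linear memory could look like on the host side, assuming the runtime exposes that memory as a byte slice; readCString is my name for illustration, not Olin's actual generated code:

package main

import (
	"errors"
	"fmt"
)

// readCString copies a NUL-terminated string out of wasm linear memory,
// the way a host-side require(package) implementation would need to.
func readCString(mem []byte, ptr int32) (string, error) {
	start := int(ptr)
	if start < 0 || start >= len(mem) {
		return "", errors.New("pointer out of bounds")
	}
	for i := start; i < len(mem); i++ {
		if mem[i] == 0 {
			return string(mem[start:i]), nil
		}
	}
	return "", errors.New("unterminated string")
}

func main() {
	mem := []byte("us.xeserv.olin.dagger.logging.v1\x00rest of memory")
	pkg, err := readCString(mem, 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(pkg) // us.xeserv.olin.dagger.logging.v1
}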

Using these basic primitives, we can actually model way more than you think would be possible. Let's do a simple example.

Example: Logging

Consider logging. It is usually implemented as a stream of logging messages containing unstructured text that usually only has meaning to the development team and the regular expressions that trigger the pager. Knowing this, we can expose a logging interface like this:

syntax = "proto3";

package us.xeserv.olin.dagger.logging.v1;
option go_package = "logging";

// Writer is a log message writer. This is append-only. All text in log messages
// may be read by scripts and humans.
service Writer {
  // method 0
  rpc Log(LogMessage) returns (Nil) {};

// When nothing remains, everything is equally possible.
// TODO(Xe): standardize this somehow.
message Nil {}

// LogMessage is an individual log message. This will get added to as it gets
// propagated up through the layers of the program and out into the world, but 
// those don't matter right now.
message LogMessage {
  bytes message = 1;

And at a low level, this would be used like this:

extern int64 require(const char* package);
extern int64 close(int64 handle);
extern int64 invoke(int64 handle, int64 method, void* in, void* out);

// This exposes logging_LogMessage, logging_Nil,
// int64 logging_Log(int64 handle, void* in, void* out)
// assume this is magically generated from the protobuf file above.
#include <services/us.xeserv.olin.dagger.logging.v1.h>

int64 main() {
  int64 logHdl = require("us.xeserv.olin.dagger.logging.v1");
  logging_LogMessage msg;
  logging_Nil none;
  msg.message = "Hello, world!";
  // The following two calls are equivalent; invoke returns zero on success.
  assert(logging_Log(logHdl, &msg, &none) == 0);
  assert(invoke(logHdl, logging_Writer_method_Log, &msg, &none) == 0);
  return 0;
}

This is really easy to codegen, audit, and validate; not to mention we can easily verify which vendor's logging interface the user actually wants. This allows people who install Olin on their own cluster to potentially define their own custom interfaces. This actually gives us the chance to make this a primitive.

One problem that is probably going to come up pretty quickly is that every language under the sun has its own idea of how to arrange memory. This may make directly scraping the values out of RAM inviable in the future.

If reading values out of memory does become inviable, I suggest the following changes:

extern int64 require(const char* package);
extern int64 close(int64 handle);
extern int64 invoke(int64 handle, int64 method, char* in, int32 inlen, char* out, int32 outlen);

(I don't know how to describe "pointer to bytes" in C, so I am using a C string here to fill in that gap.) In this case, the arguments to invoke() would be pointers to protocol buffer-encoded memory. This may prove to be a huge burden in terms of deserializing and serializing the protocol buffers over and over every time a syscall has to be made, but it may actually be enough of a performance penalty that it prevents spurious syscalls, given the "cost" of them. Code generators should remove most of the pain when it comes to actually using this interface though; the generated code should automatically coax things into protocol buffers without user interaction.
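
Sketched from the host side, the decoding half of that flow could look something like this in Go, assuming the runtime hands us the module's linear memory as a byte slice and using github.com/golang/protobuf/proto; the function name is illustrative, not something Olin ships:

package olinhost

import (
	"errors"

	"github.com/golang/protobuf/proto"
)

// decodeGuestArg copies protocol buffer bytes out of wasm linear memory and
// unmarshals them into the request message for the method being invoked.
func decodeGuestArg(mem []byte, in, inlen int32, req proto.Message) error {
	if in < 0 || inlen < 0 || int(in)+int(inlen) > len(mem) {
		return errors.New("input buffer out of bounds")
	}
	return proto.Unmarshal(mem[in:in+inlen], req)
}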

For fun, let's take this basic model and then map Dagger's concept of file I/O to it:

syntax = "proto3";

package us.xeserv.olin.dagger.files.v1;
option go_package = "files";

// When nothing remains, everything is equally possible.
// TODO(Xe): standardize this somehow.
message Nil {}

service Files {
  rpc Open(OpenRequest) returns (FID) {};
  rpc Read(ReadRequest) returns (ReadResponse) {};
  rpc Write(WriteRequest) returns (N) {};
  rpc Close(FID) returns (Nil) {};
  rpc Sync(FID) returns (Nil) {};

message FID {
  int64 opaque_id;

message OpenRequest {
  string identifier = 1;
  int64 flags = 2;

message N {
  int64 count

message ReadRequest {
  FID fid = 1;
  int64 max_length = 2;

message ReadResponse {
  bytes data = 1;
  N n = 2;

message WriteRequest {
  FID fid = 1;
  bytes data = 2;

Using these methods, we can rebuild (most of) the original API:

extern int64 require(const char* package);
extern int64 close(int64 handle);
extern int64 invoke(int64 handle, int64 method, void* in, void* out);

#include <services/us.xeserv.olin.dagger.files.v1.h>

int64 filesystem_service_id;

void setup_filesystem() {
  filesystem_service_id = require("us.xeserv.olin.dagger.files.v1");
}

int64 open(char *furl, int64 flags) {
  files_OpenRequest req;
  files_FID resp;
  int64 err;
  req.identifier = furl;
  req.flags = flags;
  // could also be err = files_Files_Open(filesystem_service_id, &req, &resp);
  err = invoke(filesystem_service_id, files_Files_method_Open, &req, &resp);
  if (err != 0) {
    return err;
  }
  return resp.opaque_id;
}

int64 d_close(int64 fd) {
  files_FID req;
  files_Nil resp;
  int64 err;
  req.opaque_id = fd;
  err = invoke(filesystem_service_id, files_Files_method_Close, &req, &resp);
  if (err != 0) {
    return err;
  }
  return 0;
}

int64 read(int64 fd, void* buf, int64 nbyte) {
  files_FID fid;
  files_ReadRequest req;
  files_ReadResponse resp;
  int64 err;
  int i;
  fid.opaque_id = fd;
  req.fid = fid;
  req.max_length = nbyte;
  err = invoke(filesystem_service_id, files_Files_method_Read, &req, &resp);
  if (err != 0) {
    return err;
  }
  // TODO(Xe): replace with memcpy once we have libc or something
  for (i = 0; i < resp.n.count; i++) {
    ((char*)buf)[i] = resp.data[i];
  }
  return 0;
}

int64 write(int64 fd, void* buf, int64 nbyte) {
  files_FID fid;
  files_WriteRequest req;
  files_N resp;
  int64 err;
  fid.opaque_id = fd;
  req.fid = fid;
  req.data = buf; // let's pretend this works, okay?
  err = invoke(filesystem_service_id, files_Files_method_Write, &req, &resp);
  if (err != 0) {
    return err;
  }
  return resp.count;
}

int64 sync(int64 fd) {
  files_FID req;
  files_Nil resp;
  int64 err;
  req.opaque_id = fd;
  err = invoke(filesystem_service_id, files_Files_method_Sync, &req, &resp);
  if (err != 0) {
    return err;
  }
  return 0;
}

And with that we should have the same interface as Dagger's, save the fact that the name close is now shadowed by the global close function. On the server side we could implement this like so:

package files

import (
  "context"
  "errors"
  "math/rand"
  // the import path for the dagger package is elided in the original post
)

func init() {
  // service registration elided in the original post
}

type FilesImpl struct {
  // assumed field: the methods below call into fs.Process
  Process dagger.Process
}

func (FilesImpl) getRandomNumber() int64 {
  return rand.Int63()
}

func daggerError(respValue int64, err error) error {
  if err == nil {
    err = errors.New("")
  }
  return dagger.Error{Errno: dagger.Errno(respValue * -1), Underlying: err}
}

func (fs *FilesImpl) Open(ctx context.Context, op *OpenRequest) (*FID, error) {
  fd := fs.Process.OpenFD(op.Identifier, uint32(op.Flags))
  if fd < 0 {
    return nil, daggerError(fd, nil)
  }
  return &FID{OpaqueId: fd}, nil
}

func (fs *FilesImpl) Read(ctx context.Context, rr *ReadRequest) (*ReadResponse, error) {
  fd := rr.Fid.OpaqueId
  data := make([]byte, rr.MaxLength)
  n := fs.Process.ReadFD(fd, data)
  if n < 0 {
    return nil, daggerError(n, nil)
  }
  result := &ReadResponse{
    Data: data,
    N: N{
      Count: n,
    },
  }
  return result, nil
}

func (fs *FilesImpl) Write(ctx context.Context, wr *WriteRequest) (*N, error) {
  fd := wr.Fid.OpaqueId
  n := fs.Process.WriteFD(fd, wr.Data)
  if n < 0 {
    return nil, daggerError(n, nil)
  }
  return &N{Count: n}, nil
}

func (fs *FilesImpl) Close(ctx context.Context, fid *FID) (*Nil, error) {
  return &Nil{}, daggerError(fs.Process.CloseFD(fid.OpaqueId), nil)
}

func (fs *FilesImpl) Sync(ctx context.Context, fid *FID) (*Nil, error) {
  return &Nil{}, daggerError(fs.Process.SyncFD(fid.OpaqueId), nil)
}

And then we have all of these arbitrary methods bound to WebAssembly modules, where they are free to use them how they want. I think that initially there is going to be support for this interface from Go WebAssembly modules, as we can make a lot more assumptions about how Go handles its memory management. That makes it a lot easier for us to code generate reading Go structures/pointers/whatever out of Go WebAssembly memory than to code generate reading C structures (recursively, with pointers and C-style strings galore). The really cool part is that this is all powered by those three basic functions: require, invoke and close. The rest is literally just stuff we can treat as a black box for now and code generate.

As before, I would love any comments that people have on this article. Please contact me somehow to let me know what you think. This design is probably wrong.

Link's Home

Permalink - Posted on 2018-09-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

Link's Sunset

Permalink - Posted on 2018-09-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

Olin: 1: Why

Permalink - Posted on 2018-09-01 00:00

Olin: 1: Why

Olin is an attempt at defining a radically new operating primitive to make it easier to reason about, deploy and operate event-driven services that are independent of the OS or CPU of the computer they are running on. It will have components that take care of the message queue offsetting, retry logic, parallelism and most other concerns except for your application's state layer.

Olin is designed to work on top of two basic concepts: types and handlers. Types are some bit of statically defined data that has a meaning to humans. An example type could be the following:

package example;

message UserLoginEvent {
    string user_id = 1;
    string user_ip_address = 2;
    string device = 3;
    int64 timestamp_utc_unix = 4;
}

When matching data is written to the queue for the event type example.UserLoginEvent, all of the handlers registered to that data type will run with the serialized protocol buffer bytes as their standard input. If a handler returns a nonzero exit status, it is retried up to three times, backing off exponentially. Handlers need to deal with the fact that they can be run out of order, and that multiple instances of them can be running on physically different servers in parallel. If a handler starts doing something and fails, it should back out any previously changed values using transactions or equivalent.

Consider an Olin handler equivalent to a Unix process.
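
To sketch those retry semantics concretely (names like runHandler and handleWithRetries are hypothetical here, not Olin's actual queue consumer), the backoff loop could look something like this in Go:

package main

import (
	"fmt"
	"time"
)

// runHandler stands in for executing a WebAssembly handler with the
// serialized protocol buffer bytes on its standard input, returning the
// handler's exit status.
func runHandler(input []byte) int {
	// ... run the module ...
	return 0
}

// handleWithRetries runs a handler up to three times on nonzero exit
// status, backing off exponentially between attempts.
func handleWithRetries(input []byte) error {
	backoff := 100 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		if runHandler(input) == 0 {
			return nil // success
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("handler failed after 3 attempts")
}

func main() {
	if err := handleWithRetries([]byte("example.UserLoginEvent bytes")); err != nil {
		fmt.Println(err)
	}
}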


Very frequently, I end up needing to write applications that basically wait forever to make sure things get put in the right place and then the right code runs as a response. I then have to make sure these things get put in the right places and that the right versions of things are running for each of the relevant services. This doesn't scale very well, not to mention it is hard to secure. This leads to a lot of duplicate infrastructure over time as things grow, not to mention adding in tracing, metrics and log aggregation.

I would like to change this.

I would like to make a prescriptive environment kinda like Google Cloud Functions or AWS Lambda backed by a durable message queue and with handlers compiled to webassembly to ensure forward compatibility. As such, the ABI involved will be versioned, documented and tested. Multiple ABIs will eventually need to be maintained in parallel, so it might be good to get used to that early on.

You should not have to write ANY code but the bare minimum needed in order to perform your business logic. You don't need to care about distributed tracing. You don't need to care about logging.

I want this project to last decades. I want the binary modules any user of Olin would upload today to be still working, untouched, in 5 years, assuming its dependencies outside of the module still work.

Since this requires a stable ABI in the long run, I would like to propose the following unstable ABI as a particularly minimal starting point to work out the ideas at play, and see how little of a surface area we can expose while still allowing for useful programs to be created and run.


The dagger of light that renders your self-importance a decisive death

Dagger is the first ABI that will be used for interfacing with the outside world. This will be mostly for an initial spike out of the basic ideas to see what it's like while the rest of the plan is being stabilized and implemented. The core idea is that everything is a file, to the point that the file descriptor and file handle array are the only real bits of persistent state for the process. HTTP sessions, logging writers, TCP sockets, operating system files, cryptographic random readers, everything is done via filesystem system calls.

Consider this the first draft of Dagger, everything here is subject to change. This is going to be the experimental phase.

Consider Dagger to be at the level below libc in most Linux environments. Dagger is the kind of API that libc would be implemented on top of.


Dagger processes will use WebAssembly as a platform-independent virtual machine format. WebAssembly is used here due to the large number of implementations and compilers targeting it for use in web programming. We can also benefit from the amazing work that has gone into the use of WebAssembly in front-end browser programming without needing a browser!

Base Environment

When a dagger process is opened, the following files are open:

  • 0: standard input: the semantic "input" of the program.
  • 1: standard output: the standard output of the program.
  • 2: standard error: error output for the program.

File Handlers

In the open call (defined later), a file URL is specified instead of a file name. This allows for Dagger to natively offer programs using it quick access to common services like HTTP, logging or pretty much anything else.

I'm playing with the following handlers currently:

  • http and https (Write request as http/1.1 request and sync(), Read response as http/1.1 response and close()) http://ponyapi.apps.xeserv.us/newest

I'd like to add the following handlers in the future:

  • file - filesystem files on the host OS (dangerous!) file:///etc/hostname
  • tcp - TCP connections tcp://
  • tcp+tls - TCP connections with TLS tcp+tls://
  • meta - metadata about the runtime or the event meta://host/hostname, meta://event/created_at
  • project - writers of other event types for this project (more on this, again, in future posts) project://example.UserLoginEvent
  • rand - cryptographically secure random data good for use in crypto keys rand://
  • time - unix timestamp in a little-endian encoded int64 on every read() - time://utc

In the future, users should be able to define arbitrary other protocol handlers with custom webassembly modules. More information about this feature will be posted if we choose to do this.

Handler Function

Each Dagger module can only handle one data type. This is intentional. This forces users to make a separate handler for each type of data they want to handle. The handler function reads its input from standard input and then returns 0 if whatever it needs to do "worked" (for some definition of success). Each ABI, unfortunately, will have to have its own "main" semantics. For Dagger, these semantics are used:

  • The entrypoint is an exposed func handle that takes no arguments and returns an int32.
  • The input message packet is on standard input implicitly.
  • Returning 0 from func handle will mark the event as a success, returning anything else will mark it as a failure and trigger an automatic retry.

In clang in C mode, you could define the entrypoint for a handler module like this:

// handle_nothing.c

#include <dagger.h>

__attribute__ ((visibility ("default")))
int handle() {
  // read standard input as necessary and handle it
  return 0; // success
}

System Calls

A system call is how computer programs interface with the outside world. When a Dagger program makes a system call, the amount of time the program spends waiting for that system call is collected and recorded based on what underlying resource took care of the call. This means, in theory, users of Olin could very trivially alert on HTTP requests from one service to another taking longer amounts of time.
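
As an illustration of that accounting (my own sketch, with assumed names, not Olin's actual implementation), the host side could wrap each system call with a timer like this:

package main

import (
	"fmt"
	"sync"
	"time"
)

// syscallTime accumulates total time spent waiting per underlying resource,
// keyed by names like "http" or "file".
var (
	mu          sync.Mutex
	syscallTime = map[string]time.Duration{}
)

// timeSyscall wraps a system call implementation and records how long the
// program spent waiting on the named underlying resource.
func timeSyscall(resource string, call func() int64) int64 {
	start := time.Now()
	result := call()
	mu.Lock()
	syscallTime[resource] += time.Since(start)
	mu.Unlock()
	return result
}

func main() {
	timeSyscall("http", func() int64 {
		time.Sleep(50 * time.Millisecond) // pretend HTTP round trip
		return 0
	})
	fmt.Println(syscallTime["http"])
}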

Future mechanisms will allow for introspection and checking the status of handlers, as well as arbitrarily killing handlers that get stuck in a weird way.

Dagger uses the following system calls:

  • open
  • close
  • read
  • write
  • sync

Each of the system calls will be documented with their C and WebAssembly Text format type/import definitions and a short bit of prose explaining them. A future blogpost will outline the implementation of Dagger's system calls and why the choices made in its design were made.


extern int open(const char *furl, int flags);
(func $open (import "dagger" "open") (param i32 i32) (result i32))

This opens a file with the given file URL and flags. The flags are only relevant for some backend schemes. Most of the time, the flags argument can be set to 0.


extern int close(int fd);
(func $close (import "dagger" "close") (param i32) (result i32))

Close closes a file and returns whether it failed or not. If this call returns nonzero, you don't know what state the world is in. Panic.


extern int read(int fd, void *buf, int nbyte);
(func $read (import "dagger" "read") (param i32 i32 i32) (result i32))

Read attempts to read up to nbyte bytes from file descriptor fd into the buffer starting at buf.


extern int write(int fd, void *buf, int nbyte);
(func $write (import "dagger" "write") (param i32 i32 i32) (result i32))

Write writes up to nbyte bytes from the buffer starting at buf to the file referred to by the file descriptor fd.


extern int sync(int fd);
(func $sync (import "dagger" "sync") (param i32) (result i32))

This is for some backends to forcibly make async operations into sync operations. With the HTTP backend, for example, calling sync actually kicks off the dependent HTTP request.


Olin also includes support for running webassembly modules created by Go 1.11's webassembly support. It uses the wasmgo ABI package in order to do things. Right now this is incredibly basic, but should be extendable to more things in the future.

As an example:

// +build js,wasm ignore
// hello_world.go

package main

func main() {
	println("Hello, world!")
}

when compiled like this:

$ GOARCH=wasm GOOS=js go1.11 build -o hello_world.wasm hello_world.go

produces the following output when run with the testing shim:

=== RUN   TestWasmGo/github.com/Xe/olin/internal/abi/wasmgo.testHelloWorld
Hello, world!
--- PASS: TestWasmGo (1.66s)
    --- PASS: TestWasmGo/github.com/Xe/olin/internal/abi/wasmgo.testHelloWorld (1.66s)

Currently Go binaries cannot interface with the Dagger ABI. There is an issue open to track the solution to this.

Future posts will include more detail about using Go on top of Olin, including how support for Go's compiled webassembly modules was added to Olin.

Project Meta

To follow the project, check it on GitHub here. To talk about it on Slack, join the Go community Slack and join #olin.

Thank you for reading this post, I hope it wasn't too technical too fast, but there is a lot of base context required with this kind of technology. I will attempt to make things more detailed and clear in future posts as I come up with ways to explain this easier. Please consider this the 10,000 mile overview of a very long-term project that radically redesigns how software should be written.

Died to Save Me

Permalink - Posted on 2018-08-27 00:00

Died to Save Me

People often get confused
when I mention the fact that I
consider myself before I
came out a different person.

It's because that was a different person,
they died to save me.

The person I was did their
best given the circumstances
they were thrown into. It was
hard for them. I'm still working
off some of their baggage.

But, that different person,
even after all of the hardships
and triumphs they had been through,
they died to save me.

They were an extrovert pushed into
being an introvert by an uncaring world.
They were the pariah.
They were the person who got bullied.
They survived years of torment but
they died to save me.

I understand now why the Gods
prefer to use shaman-sickness to
help people realize their calling.
It is such an elegant teacher of
the Divine. So patient. So forgiving.

It's impossible to ignore everything
around you feeling incomprehensibly crazy,
because it is.
Our system is crazy.
Our system is incomprehensible.
We only "like" it because we have no
way to fathom anything else.

"Awakening" is probably one of the
least bad metaphors to describe the
feeling of just suddenly understanding
the barriers. Of seeing the formerly
invisible glass prison walls we apparently
live inside unknowingly.

It's not just an awakening though,
Not all of me made it through the process.
Not all of what constitutes yourself
(in your opinion) is actually a True
part of you. Not all your thoughts,
memories, ideas, dreams, wishes
and even fears or anxieties are
truly yours.

Sometimes there's that part that
really does have to die to save you.
The part that was once a shining beacon
of hope that has now fallen beyond disrepair.
A thread of connection to a past that
can never come to pass again.
Memories or experiences of pain,
trauma. It can die to save you too.

You don't have to carry
the mountains you come across,
you can just climb them.

When it dies, it is gone, but:
you can sleep easier knowing
they died to save you.

Sorting Time

Permalink - Posted on 2018-08-26 00:00

Sorting Time

Computers have a very interesting relationship with time. Time is how we keep track of many things, but mainly we use time to keep track of how far along in a day cycle we are. These daily sunrise/sunset cycles take about 24 hours on average, and the periodicity of them runs just about everything. Computers use time to keep track of just about everything on the board, usually measured in tiny fractions of seconds. (The common rating of gigahertz for computer processors actually measures how many clock cycles the processor runs per second. A processor with a clock of 3.4 gigahertz executes, best case, 3.4 billion instructions per second.) Computer programmers have several popular methods of storing time with computers: the number of time intervals since a fixed date (usually the number of seconds since January 1st 1970) or as a human-readable string. These values are normally only ever added to and read from, almost never updated by human hands after being initially set by the network time service.
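
For example, here is the same instant in both popular representations, sketched in Go rather than this post's JavaScript:

package main

import (
	"fmt"
	"time"
)

func main() {
	now := time.Now().UTC()
	// The number of seconds since January 1st 1970 (the Unix epoch).
	fmt.Println(now.Unix()) // e.g. 1534982400
	// The same instant as a human-readable string.
	fmt.Println(now.Format(time.RFC1123)) // e.g. "Thu, 23 Aug 2018 00:00:00 UTC"
}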

Pulling things back into the real world, let's consider storing time in Javascript. Let's say we're using Javascript in the browser and have a date object like so:

var date = new Date();

Say this is for Thursday, August 23rd 2018 at midnight UTC. If we turn it into a string using the toString method:

date.toString(); // -> "Thu Aug 23 2018 00:00:00 GMT+0000 (UTC)"

We get the date and time as a string. The application in question uses a data store that has an interesting problem: it will automatically coerce things to a string type without alerting developers.

typeof date === 'object' // -> true

We expect date to be a normal object after we add it to the store. Let's add it to the store and see what happens to it.

const record = store.createRecord("widget", { createdAt: date });
typeof record.get("createdAt"); // -> string

Oh boy. It's suddenly a string now. That's not good.

console.log(record.get("createdAt")); // -> "Thu Aug 23 2018 00:00:00 GMT+0000 (UTC)"

This works all fine and well, but sometimes a few lists of things can get bizarrely out of order in the UI. Things created or updated right at a midnight UTC barrier would sometimes cause lists of things to show the newest elements at the bottom of the list. This confused us: sorting data is really just fitting it into the order it belongs in, and time doesn't usually advance out of order, so something being sorted wrongly by time is intuitively very confusing.

Consider a function like this at the given date above:

function minutesAgo(minutes) {
  return moment().subtract(minutes, "minute").toDate();
}

const date1 = minutesAgo(0);
const date2 = minutesAgo(1);
const date3 = minutesAgo(30);

If we were to sort date1, date2 and date3 with the current time being Thursday August 23 2018 at midnight UTC, it would make sense for the objects to sort ascendingly in the following order: date3, date2, date1. Not as strings however. As strings:

date1.toString(); // -> "Thu Aug 23 2018 00:00:00 GMT+0000 (UTC)"
date2.toString(); // -> "Wed Aug 22 2018 23:59:00 GMT+0000 (UTC)"
date3.toString(); // -> "Wed Aug 22 2018 23:30:00 GMT+0000 (UTC)"

Since T comes before W in the alphabet, the actual sort order is: date1, date3, date2. This causes an assertion failure in both humans and machines. It caused test failures, but only on Mondays, Thursdays, Fridays and Saturdays, from about 00:00 UTC through 00:30 UTC. How did we fix this? It turns out the time data from the API we get this information from is already properly sortable; this is because the API uses ISO 8601 timestamps.

const thursday = '2018-08-23T00:00:00.000Z';
const wednesday = '2018-08-22T23:30:00.000Z';

thursday > wednesday // true

This time data is also easy to convert back into a native Date object should we need it. The fix was to only ever store times as strings unless you need to actively do something with them; then you coerce them back into a native Date like it never happened. This is not an ideal fix, but given the larger complexity of the problem, it's what we're gonna have to live with for the time being. This solution at the very least seems to be less bad than the original problem, as things get sorted properly in the UI now. Yay computers!
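
To see the failure and the fix side by side, here is a sketch in Go rather than the original JavaScript; the layout string mimics the browser's Date toString output, and everything else is standard library:

package main

import (
	"fmt"
	"sort"
	"time"
)

func main() {
	times := []time.Time{
		time.Date(2018, 8, 23, 0, 0, 0, 0, time.UTC),   // date1
		time.Date(2018, 8, 22, 23, 59, 0, 0, time.UTC), // date2
		time.Date(2018, 8, 22, 23, 30, 0, 0, time.UTC), // date3
	}

	var human, iso []string
	for _, t := range times {
		// Mimics "Thu Aug 23 2018 00:00:00 GMT+0000".
		human = append(human, t.Format("Mon Jan 02 2006 15:04:05 GMT-0700"))
		// ISO 8601 / RFC 3339: lexicographic order is chronological order.
		iso = append(iso, t.Format(time.RFC3339))
	}

	sort.Strings(human)
	sort.Strings(iso)

	fmt.Println(human) // "Thu ..." sorts before "Wed ...": wrong order
	fmt.Println(iso)   // sorted oldest to newest: correct order
}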

This is an adaptation of a pull request made by a coworker to work around an annoying-to-track-down bug that caused flaky tests. It's not my story, but it just goes to show how many moving parts truly are at play with computers. Even when you think you have kept track of all of the moving parts, complicated systems interface in unpredictable ways. Increasingly complicated systems interface in increasingly unpredictable ways too, which makes finding problems like these more of a hunt.

Happy hunting and be well to you all.


Permalink - Posted on 2018-08-19 00:00


Death is a very misunderstood card in Tarot, but not for the reasons you'd think. Societally, many people think that this life is the only shot at existence they get. Afterwards, there is nothing. Nonexistence. Oblivion. This makes death a very touchy subject for a lot of people, so much so it forms a social taboo and an unhealthy relationship with death. People start seeing death as something they need to fight back and hold away by removing what makes themselves human, just to hold off what they believe is their obliteration.

Tarot does not see death in this way. Death, the skeleton knight wearing armor, does not see color, race or creed, thus he is depicted as a skeleton. He is riding towards a child and another younger person. The sun is rising in the distance, but even it cannot stop Death. Nor can royalty, as shown by the king under him, dead.

Death, however, does not actually refer to the act of a physical body physically dying. Death is a change that cannot be reverted. The consequences of this change can and will affect what comes next, however.

Consider the very deep sea, so far down even light can't penetrate that deep. There's an ecosystem of life down there, but it is so starved for resources and light that evolution has skimped out on things like skin pigmentation. Sometimes a mighty whale will die and its body will fall to the sea floor down there. The creatures will feast for a month or more. The whale died, yet its change fosters an entire ecosystem. This card signifies much of the same. Death signifies the idea of a change from the old, where the whale was alive, to the new, where the whale's body feeds an entire community.

Death is a signifier that change is coming or needed, and it won't care if you're ready for it or not. So, embrace it with open arms. Don't fight what is inevitable. All good things must come to an end for them to be good to begin with.

Death is a part of life like any other; this is why it is in the Fool's Journey, or Major Arcana of the Tarot. To eschew death is, in essence, to throw out life itself. Living in fear of death turns life from a glorious dance of cocreation with the universe into a fearful existence of scraping by on the margins. It makes life an anxious scampering from measly scrap of food to measly scrap of food without any time to focus on the higher order of things. It makes you accept fear, depression, anxiety and regret instead of just being able to live here, in the moment, and make the best of what you have right now. If only because right now you still have it.

When Then Zen: Anapana

Permalink - Posted on 2018-08-15 00:00

When Then Zen: Anapana


Anapanasati (Pali; Sanskrit: anapanasmrti; English: mindfulness of breathing) is a form of meditation originally taught by Gautama Buddha in several places, mainly the Anapanasati Sutta. Anapana is practiced globally by meditators of all skill levels.

Simply put, anapana is the act of focusing on the sensations of breath in the body's nasal cavity and nostrils. Some practices will focus on the sensations in the belly instead (this is why there are fat Buddha statues), but personally I find that the sensations of breath in the nostrils are a lot easier to focus on.

The method presented in this article is based on the method taught in The Art Of Living by William Hart and S.N. Goenka. If you want a copy of this book you can get one here: http://www.cicp.org.kh/userfiles/file/Publications/Art%20of%20Living%20in%20English.pdf. Please do keep in mind that this book definitely leans towards the Buddhist lens and as it is presented the teaching methods really benefit from it. Also keep in mind that this PDF prevents copying and duplication.

Note: "the body" means the sack of meat and bone that you are currently living inside. For the purposes of explanation of this technique, please consider what makes you yourself separate from the body you live in.

This article is a more verbose version of the correlating feature from when-then-zen.

Background Assumptions of Reader

Given no assumption about meditation background
And a willingness to learn
And no significant problems with breathing through the body's nose
And the body is seated or laying down comfortably
And no music is playing

Given no assumption about meditation background

The When Then Zen project aims to describe the finer points of meditative concepts in plain English. As such, we start assuming just about nothing and build fractally on top of concepts derived from common or plain English usage of the terms. Some of these techniques may be easier for people with a more intensive meditative background, but try things and see what works best for you. Meditation in general works a lot better when you have a curious and playful attitude about figuring things out.

I'm not perfect. I don't know what will work best for you. A lot of this is documenting both my practice and what parts of what books helped me "get it". If this works for you, please let me know. If this doesn't work for you, please let me know. I will use this information for making direct improvements to these documents.

As for your practice, twist the rules into circles and scrape out the parts that don't work if it helps you. Find out how to integrate it into your life in the best manner and go with it.

For now, we start from square one.

And a willingness to learn

At some level, you are going to need to be willing to actually walk the path. This can be scary, but that's okay as long as you're willing to acknowledge it and not let it control you.

If you run into some dark stuff doing this, please consult a therapist as usual. Just know that you don't walk this path alone, even when it feels like you must be.

And no significant problems with breathing through the body's nose

Given that we are going to be mainly focusing on the nasal reactions to breathing, that path being obstructed is not gonna result in a very good time. If this is obstructed for you, attempt to clear it up, or just use the mouth, or a different technique entirely. It's okay for anapana to not always work. It's not a universal hammer.

And the body is seated or laying down comfortably

Some people will assert that the correct pose or posture is critical for this, but it's ultimately only as important as the meditator believes it is. Some people have gotten the association somehow that the meditation posture helps with things. Ultimately, it's suggested to start meditation sitting upright or in a chair, as it can be easy to fall asleep lying down while doing meditative practice for the first few times. This is a side effect of the brain not being used to the alternative state of consciousness, so it falls back on the "default" action; this puts the body, and you, to sleep.

And no music is playing

You should break this rule as soon as possible to know if it's best to ignore it. Some people find music helps; I find it can be a distraction depending on the music track in question. Some meditation sessions will need background music and some won't. That's okay.

Scenario: Mindfulness of Breathing

As a meditator
In order to be mindful of the body's breath
When I inhale or exhale through the body's nose
Then I focus on the sensations of breath
Then I focus on the feelings of breath through the nasal cavity
Then I focus on the feelings of breath interacting with the nostrils
Then I repeat until done

As a meditator

This is for you to help understand a process you do internally, to yourself.

In order to be mindful of the body's breath

It is useful in the practice to state the goal of the session when leading into it. You can use something like "I am doing this mindfulness of breathing for the benefit of myself" or replace it with any other affirmation as you see fit.

When I inhale or exhale through the body's nose

You can use the mouth for this. Doing it all via the mouth requires the mouth to stay open (which can result in dry mouth) or constantly move (which some people find makes it harder to get into flow). Nasal breaths allow for you to sit there motionless yet still continue breathing like nothing happened. If this doesn't work for you, breathe through your mouth.

Then I focus on the sensations of breath

There are a lot of very subtle sensations related to breathing that people don't take the time to truly appreciate or understand. These are mostly fleeting sensations, thankfully, so you really have to feel into them, listen for them or whatever satisfies your explanation craving.

Listen in to the feeling of the little piece of cartilage between the nostrils whistling slightly as you breathe all the way in at a constant rate over three seconds. It's a very, very subtle sound, but once you find it, you'll know it.

Then I focus on the feelings of breath through the nasal cavity

The sound of breath echoes slightly through the nasal cavity during all phases of it that have air moving. Try and see if you can feel these echoes separately from the whistling of the cartilage; bonus points if you can do both at the same time. Feel the air as it passes parts of the nasal cavity as your sinuses gently warm it up.

Then I focus on the feelings of breath interacting with the nostrils

The nostrils act as a curious kind of rate limiter for how much we can breathe in and out at once. Breathe in harder and they contract. Breathe out harder and they expand. With some noticing, you can easily feel almost the exact angle at which your nostrils are bent by your breathing, even though you can't see them directly, sitting as they do just outside your line of sight.

Isn't it fascinating how many little sensations of the body exist that we continuously ignore?

Scenario: Attention Drifts Away From Mindfulness of Breathing

As a meditator
In order to bring my attention back to the sensations of breathing
Given I am currently mindful of the body's breath
When my attention drifts away from the sensations of breathing
Then I bring my attention back to the sensations of breathing

In order to bring my attention back to the sensations of breathing

When this happens, it is going to feel very tempting to just give up and quit. This is normal. Fear makes you worry that you're doing it wrong, so out of respect for the skill you may want to just "not try until later".

Don't. This is a doubt that means something has been happening. Doubt is a sick kind of indicator that something is going on at a low level that would cause the vague feelings of doubt to surface. When it's related to meditative topics, that usually means you're on the right track. This is why you should try and break through that doubt even harder if you can. Sometimes you can't, and that's okay too.

Given I am currently mindful of the body's breath

This is your usual scenario during the mindfulness practice. You will likely come to deeply appreciate it.

When my attention drifts away from the sensations of breathing

One of the biggest problems I have had personally is knowing when I have strayed from the path of the meditation. For a time it was hard to keep myself in the deep trance of meditation while keeping detached awareness of my thoughts. My thoughts are very active a lot of the time. There are a lot of distractions, and it's hard to maintain focus through them sometimes.

One of the biggest changes I have made that has helped this has been to have a dedicated "meditation spot". As much as possible, I try to do meditative work while in that spot instead of my main office or bed. This solidifies the habit, and grows the association between the spot and meditative states.

Then I bring my attention back to the sensations of breathing

This, right here, is the true core of this exercise. The sensations of breathing are really just something to distract yourself with. It's a fairly calming thing anyways, but at some level it's really just a distraction. It's a fairly predictable set of outputs and inputs. Some sessions will feel brand new, some will feel like old news.

Meditation is sitting there and only thinking when you truly let yourself. Mindfulness is putting yourself back on track, into alignment, etc., over and over until it happens on its own. If you get distracted once every 30 seconds during a 5-minute session, you will have brought yourself back to focus ten times. Each time you bring yourself back to focus is a joy to feel at some level.

Scenario: mindfulness of unconscious breathing

As a meditator
In order to practice anapana without breathing manually
When I stop breathing manually
Then the body will start breathing for me after a moment or two
Then I continue mindfulness of the sensations of breathing without controlling the breath

In order to practice anapana without breathing manually

While observing the body's unconscious breath, you start entering into what meditation people call the "observer stance". It is this sort of neutral feeling where things are just happening, and you just see what happens. There is usually a feeling of peacefulness or equanimity for me, and when I start doing this I radiate feelings of compassion, understanding and valor.

Keep in mind that doing this may have some interesting reactions, just let them pass like all the others.

When I stop breathing manually

You gotta literally just cut off breath. It needs to stop. You have to literally stop breathing and refuse to until the body takes over and yanks the controls away from you.

Then the body will start breathing for me after a moment or two

There's a definite shift when the body takes over. It will sharply inhale, hold for a moment and then calmly exhale. Then it will breathe very quietly only as needed.

Then I continue mindfulness of the sensations of breathing without controlling the breath

The body does not breathe very intensely. It will breathe calmly and slowly, unless another breathing style is mandatory. The insides of the nostrils moving from the air pressure is still a noticeable sensation of breathing while the body is doing it near silently, so you can hang onto that.

Scenario Outline: meditation session

As a meditator
In order to meditate for <time>
Given a timer of some kind is open
And the time is set for <time>
When I start the timer
Then I clear my head of idle thoughts
Then I start drifting my attention towards the sensations of breathing
Then I become mindful of the sensations of breathing
Then I continue for a moment or two
Then I shift into mindfulness of unconscious breathing

  Examples:
  | time         |
  | five minutes |

In order to meditate for <time>

The time is intentionally left as a variable so you can decide what session time length to use. If you need help deciding how long to pick, you can always try tapering upwards over the course of a month. I find that tapering upwards helps A LOT.

Given a timer of some kind is open

Even one of those old-fashioned kitchen timers will do.

And the time is set for <time>

You need to know how to use your timer of choice for this, or someone can do it for you.

When I start the timer

Just start it and don't focus on the things you're already thinking about. You're allowed to leave the world behind for the duration of the session.

Then I clear my head of idle thoughts

If you're having trouble doing this, it may be helpful to figure out why those thoughts are lingering. Eventually, addressing the root cause helps a lot.

Then I start drifting my attention towards the sensations of breathing

Punt on this if it doesn't help you. I find it helps me to drift into focusing on the breath instead of starting laser-focused on it.

Then I become mindful of the sensations of breathing

Focus around the nostrils if you lose your "grip" on the feelings.

Then I continue for a moment or two

You'll know how much time is right by feel. Please study this educational video for detail on the technique.

Then I shift into mindfulness of unconscious breathing

The body is naturally able to breathe for you. You don't need to manually breathe during meditation. Not having to manually breathe means that your attention can focus on passively, neutrally observing the sensations of breath.

Further Reading

This is all material that I have found useful while running into "problems" (there aren't actually any good or bad things, only labels, but that's a topic for another day) while learning or teaching anapana meditation or the concepts of it. All of these articles have been linked in the topic, save three I want to talk about specially.


This is an old Zen tale. The trick is that the farmer doesn't have any emotional attachment to the things that are happening to him, so he is neither labeling things happy nor labeling things sad. He is not stopped by his emotions.

Ebbs and Flows

This touches on the true "point" of meditation. The point isn't to just breathe. The point is to focus on the breathing so much that everything else stills to make room. Then what happens, does. The Alan Watts lectures are fascinating stuff. Please do give at least one a watch. You'll know which one is the right one for you.

Natural Selection

This is excerpted from almost the very beginning of the book Why Buddhism is True. Robert Wright really just hit the nail on the head when describing the level of craziness that simply exists. Natural selection means, effectively, that whatever lets populations breed and survive the most makes the traits of those doing the most breeding more common. Please read the entire book.

Narrative of Sickness

Permalink - Posted on 2018-08-13 00:00

Narrative of Sickness

With addiction, as with many other things, there's a tendency for the mind to label the situation and create a big story. A common phrase I see is "I want to get better", as if you're sick. You're not sick. You may identify yourself as an "addict", or you might feel fear because you are afraid you'll fail, or that you'll experience cravings, etc. but reminding yourself that you need to get better is perpetuating the narrative of sickness.

These are all stories; they have no bearing on reality. You can just embrace the cravings. Embrace the withdrawal. They are feelings, and through mindfulness of them they can be left unacted upon. Be mindful of your thoughts, but don't pay heed to them. Don't get caught up. And if you feel like you are getting caught up, realize that that's another feeling as well.

Such things don't last forever. Existence is change, inherently, inevitably. Embracing life is embracing change. Things in this world will change without warning. Things we consider safe and stable today will vanish tomorrow. Accept this as a fact of life.

To love is to gain and to lose in equal measure. To lose is to love in turn. Every journey upwards has its regressions downwards.

It may sound like a subtle distinction between getting better from addiction, or from sickness, and just changing, but it makes all the difference. A plant is not sick just because it later grows into a bigger tree. Change is simply what happens, and it can be recognized and embraced in order to fully, progressively align the self with whatever intent or goal you hold.

Fully embracing all that you are is the best way to bring this about, for you can be present to what happens and help it change through your intent, veer it towards the desired destination.

Olin Logo V1

Permalink - Posted on 2018-08-12 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

For Olin.


Permalink - Posted on 2018-07-24 00:00


I must not fear.
Fear is the mind-killer.
Fear is the little-death that brings total obliteration.
I will face my fear.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the fear has gone there will be nothing.
Only I will remain.

Bene Gesserit Litany Against Fear - From Frank Herbert's Dune Book Series

Fear sucks. Fear is an emotion that I’ve spent a lot of time encountering and it has spent a lot of time paralyzing me. Fear is something that everyone faces at some level. Personally, I’ve been dealing a lot with the fear of being outcast for being Other.

What is Other? Other are the people who don't want to "fit in". Other are the people who go against the grain of society. They don't care about looking different or crazy. Other are the people who see reality for what it really is and decide that they can no longer serve to maintain it; then take steps to reshape it.

But why do we have this fear emotion? Fear is almost the base instinct of survival. Fear bypasses the higher centers in order to squeeze decisions through that prevent something deadly from happening. Fear is a paralyzing emotion. Fear is something that stops you in your tracks. Fear is preventative.

Except that's not completely true. We have moved away from the constant daily need to fight for survival, yet our sense of fear is still tuned for it. Fear pervades almost everyone's daily lives at some level, down to how people post things on social media. We all have these little nagging fears that add up; the intrusive negative thoughts; some have the phobias, the anxieties, the panic attacks. One fear in particular, which I call the separation/isolation/displacement fear, is a fear with many social repercussions. It's a fear that urges us to keep continuity of self, to avoid "standing out", to keep discussion away from particular topics (like the spiritual, for many). It keeps us wary of what others could do to us. It makes us feel small in a world that is, at best, neutral in our regard.

Whatever advantage fear once gave us as a species, it's clear it is now corrupting the lives of many innocent people for no apparently good reason. There are alternatives to fear with regards to handling one's inner and outer lives, and they are out there, but fear keeps making itself known and dominating the perceptions of the collective. Sometimes the alternatives to fear are, themselves, feared even more strongly.

So how to make sense of this?

Sometimes it helps to see things from a fresh point of view, and sometimes stories are what manage to accomplish that best. They are ways to explore new situations in a way that doesn't strain disbelief as severely, so that new perspectives can be collected from faraway thoughtscapes.

A myth is a story that helps explain something beyond the mere scenes presented, using the divine as actors. To help explain how these fears can be difficult to overcome, or even put a label on, I've found a story that will seem fantastical to many; however, the point of a story is not to be seen as truth, but merely to be heard, and to be collected, and to enrich the listener with its metaphors.

In Sumerian mythology, Anu was their Zeus, their sun and creator god; their mighty god of justice that would one day fly down on a cloud and deliver humanity to righteousness. Sumerians believed their sun god Anu created their civilization as a gift to them. In some myths, the creation goes quite a bit deeper, and darker, than that.

Imagine for a moment, an infinite universe of light and sound, of primordial vibrations. Vibrations that permeate the whole of existence, and create different experiences with their patterns of interference. The holographic universe. In such a place, everything is resonance of waves, everything all-encompassing, everything infinite, everything eternal.

And living in such a place are infinite beings, without beginning or end, not bound by space or time, as boundless as the waves they experience. Sovereign beings of grand destinies. And those beings colonized the Universe, explored its facets, its resonances, its properties, its behaviours.

Among such beings, so equal in their infinitude, some of them desired to experience creation in a new way; no longer just as dominion over the Universe, but over other beings in it as well. The desire to be looked up to, to be feared, to be revered. The new concept of godhood took shape.

To achieve this, this group of beings asked another civilization for help; they were all beings of vast reaches and etheric nature, but they claimed to need the gold hidden within the surface of a densifying planet called Earth, which they were not attuned to, and unable to fully interact with in their current forms. To do this, they would need physical bodies, meat uniforms that the civilization's inhabitants would don and power up, so that they could interact with the ground, and the mineral.

For convenience of telling, we'll call the group of deceivers the Anunnaki, and the deceived civilization the Atlanteans.

The Anunnaki had carefully devised this meat uniform, the newly designed human body, and planned around it for an exceedingly long time, in order to completely entrap the Atlanteans. The Atlanteans themselves accepted the task because they had no conception that infinite beings could ever be limited or subjugated. It had never happened before. And in the donning of the uniforms, the trap was sprung.

Those uniforms, the human bodies, constricted the Atlanteans' attention to only what the body could perceive with its senses; it urged them to survive and to work; it distracted them from all other activities; it rendered them slaves to the mining. Every part of the construct was forcing them to forget who they were, and instead making them focus on their identity as human bodies. And when such bodies would expire, a part of them would still remain to keep the beings trapped, and they would be put in a space of holding in the astral realms, for them to be assigned a new body to continue mining.

Through the human body, the Atlanteans were subjected to a carefully constructed illusion, fed to them by the senses, through the mind, that left them unable to perceive, to remember, anything else but the illusion.

With time, many shortcomings of the primitive human bodies were corrected: from being clones that needed to be produced by the Anunnaki, they were given the capability to reproduce; more independent thought and awareness were allowed, along with the ability to self- and group-organize; they were starting to be allowed to feel emotions; more and more, their world was being expanded, but with it, the structure of the mind system that confined their perception to the realms of the physical and astral, and prevented them from gaining awareness of what was outside this narrow band of illusory perception, was developed and expanded in turn. Layers upon layers were put between those beings and the realization of their true, infinite selves.

The system of death and reincarnation was automated so beings would be recycled in a systematic manner into their next lives. The concept of God was introduced to them, so that they would fear punishment and retribution from something that they perceived as greater than them; and Anu, leader of the Anunnaki, manifested to the people of Earth as a supreme being of infinite power, so they could adore him, and so they could fear him. Language developed, a system of communication mired in separation, in division of concepts and the rigidness of categorization, so that they would not be able to speak to one another of their own infinity, of their unity with the whole. Fears of all kinds were injected into the mind system: fear of death, fear of nothingness, fear of punishment; but above all, fear of separation: the fear of not having the vital connection that makes us One, and that allows us to know and understand one another innately. The fear of not being understood, of not being accepted, of not being received, of not being helped, of not being supported. The fear that had kept them doubting one another, and kept them from uniting their efforts.

The Anunnaki took away the ability for the Atlanteans to even know they were Atlanteans. They took away the ability for them to even be able to get close to finding out. Just so Anu could be an absolute ruler. The first to ever have done this previously impossible task.

Myths were disseminated to keep people awash with fear of punishment, mired in the guilt of their original sin, and distrusting, doubting of the nature of their own selves, and of their fellow neighbors'. Hierarchies were set up, so people would focus on controlling one another, instead of working together to liberate all. No longer needing the gold, the Anunnaki allowed the focus on it to become greed, so that people would put desire for a mere metal above the needs of their fellow beings.

As the Anunnaki departed from the densifying planet, which was not allowing them to manifest as etheric beings anymore, tracks were set up in the collective unconscious so that while they were away, the people's societies would evolve through predefined paths, and would eventually set up for the glorious return of God, the Apocalypse.

Every single possible obstacle had been put in place so that the Atlanteans would never realize who they had been, and who they always were: infinite, sovereign beings, connected to the whole of the Universe.

Except this would not be allowed indefinitely. Other infinite beings became aware of the deception taking place, and realized it was being exported to other planets, and that such an enslavement paradigm, based in fear and separation, was a degenerative, infecting force that had to be stopped. So the Anunnaki were prevented from returning, and in order to make it so that infinite beings would never fall prey to such deceptions again, the seeds of destruction were planted inside the programming system of the human mind. Cracks were introduced into the barriers that kept people under deception, so that they could peer through them, and see the other side beyond the walls of the labyrinth. Pathways were provided so that people could be led to the discovery of their true selves, their eventual liberation from the deception, and self-realization as infinite beings, once again. The very liberation that the programming was designed to prevent through all means conceivable.

And that brings us to the present time.

Sometimes the Other manages to find these cracks and go through them into the other side. They go to this other side and see a faint reflection of what is really out there. The world outside this world. An even bigger Infinity. They have trouble describing it. They have intense fear even thinking about it. They're afraid to acknowledge it to their peers. They want to help people but they are utterly terrified of their reactions.

They're terrified that someone might hurt them if they say anything about their experiences. They're worried someone might try to hospitalize them for their beliefs. They get it into their head that they aren't able to function in society, so they don't. They don't want to mine the gold. They don't want to serve the economy of the few. They don't want to maintain the hierarchies. They want to detach themselves from the systems that they feel are suppressing them. They want to help people save themselves from believing that their own finite existence is all that there is, but that fear utterly paralyzes them. They have trouble finding the words. They end up misphrasing things in ways that make the problem worse. Some lash out. Some get labels put on them.

These Other just want to be accepted like everyone else. They want to help their communities. They want to use their abilities to read between the lines, into the bigger picture; to do good things; but they are, ultimately, afraid to. Their fear of separation paralyzes them. People don't like them talking about spiritual topics. These Other just want to be accepted and use their experience to lovingly help guide and shape reality into what they think is a better place. Even as they struggle through the fear.

Who's really the crazy one? The one who fear controls, or the one who doesn't let fear control them?

How does the Other live with fear surrounding their actions, and doubt plaguing their decisions?

They can have people they can trust. They can have people who can help them deal with their doubts. They can have the strength of their determination to find the truth, and the resolve to put an end to the suffering of their fellow beings. But they still fear, and they still doubt.

The real difference is that they see fear as something imposed on them, not as a voice that they must always answer to, and not as something they need to wait on hand and foot, every day of their existence. In a way, they have become fed up with fear, getting tired of it and casting it out like the nuisance they now see it to be. Even if the fear was added there because of some programming of their mind, something that happened to them to make them afraid, even if they don't know where it comes from or why, they still acknowledge it, and reject it, and move on like the emotion never happened. They keep fighting for understanding, and for community. They refuse to give fear dominion in their lives, even if they sometimes fail at it.

It's such an easy and obvious thing to do that we could all do it, if we weren't so afraid of it.

I leave you with this quote from a book named Quantusum:

Uncle suddenly scooped down with his hand and brought up a closed hand. He then
brought it to a glass box that stood on a pedestal I hadn’t noticed. He slid one
of the box’s glass planes open and placed an insect inside. It looked like a
grasshopper. “This creature lives its entire life in these fields without
limitation. I just ended that.”

I watched as the grasshopper jumped inside the glass box hitting against the top
and some of the sides. The grasshopper stopped as if he was stunned by the new
circumstance of his environment.

“To the grasshopper,” Uncle said, “all is well. He is alive after all. He sees
his normal environment all around him. He can’t see the glass. If I keep him
in here for a few days he will stop his jumping and become acclimated to the
dimensions of his new home. All he needs is food and water, and he can survive.”

“So you’re saying these people are acclimated to simply survive?”

Uncle slid one of the side panels of the glass box open. “If you were a
grasshopper, what would you do?”

“I would jump through the open panel.”

“But how would you know it was open? It’s perfectly clear glass.”

I thought about it for a moment. “I’d jump in every direction… I’d experiment.”

Uncle took a stick and pointed it at the grasshopper through the open side
panel, and the grasshopper jumped into the opposite wall, hitting his head and
falling to his side. “Do you see that I offered him an exit and he fled? He
could’ve climbed on the stick, and I would have freed him.”

“Yes, but he doesn’t know that.”


Uncle opened another side panel. “What you said is right. You experiment. You
try different ways to climb the mountain of consciousness. You don’t settle on
one way… one method… one teacher. If you devote your entire life to the worship
of one thing, what if you find out when you take your last breath that the one
thing was not real?

“You find that you lived inside a cage all your life. You never tried to jump
out by experimenting, by testing the walls. The people who never bother to climb
this mountain are inside a cage, and they don’t know it. Fear is the glass wall.
Wakan Tanka comes and opens one of the glass panels, perhaps offers a stick for
them to climb out, but they jump away, going further inside their soul-draining

Uncle brought the stick out again and lightly jabbed it in the direction of the
grasshopper, who hopped through the open side panel, and was instantly lost in
the thick underbrush that surrounded us.

Uncle turned his eyes to me. “Are you ready to do the same?”


Permalink - Posted on 2018-07-20 00:00


A lot of ground has been trodden about mindfulness and its many facets, but there is one topic I have not seen enough people elaborate on, especially in a satisfactory manner, and that topic is gratitude.

The act of expressing gratitude is a behaviour that grounds you in observation of the present moment; of the present you, and of what matters to that present you. It can help you understand the current, immediate moment, the Now, by pushing you to examine parts of it that you might have taken for granted. Or parts that hide behind the other parts. It is a tool of positive exploration, that empowers the user to iteratively discern the heart of matters, of things, guided by the unerring principle of genuine appreciation of what counts.

You can get to see both sides of a scenario this way. You can see the people who did work behind the scenes and the remnants of the people who created the ground on which you stand. You can see the world unravel before you, and reveal its whispered details to you, piece by piece, as you put old things under a new and empowering lens. All there is left to do then is to acknowledge it.

In this moment, around you, exist quite literally the results of the collected life efforts of every single creature that lived up to this point, which is the basis for what every single creature from this point onwards will now create. Hundreds of animals died years and years ago to power the cars that take millions of people to hundreds of thousands of buildings to flip trillions of switches that make the rest of the social and economic engine of the entire world function. We get to experience this lifestyle from the results of an untold number of hours put into creating all of the technology stacks that our careers are built on; and then, there are some who just start arguing about semantics involving which pattern of bytes is better. Sometimes the right perspective helps.

A core principle of appreciation is that there is value in everything. There is an observable beauty in all things, in the way they exist, in the way they relate, and how they give meaning and purpose to each other. If you don’t see it immediately, sometimes it helps to rethink the angle.
Consider a blackberry that is one week from ripeness. It is a sweet fruit. It is delicious. It has a slightly bitter aftertaste. You might compare it to the fully ripe blackberry and lament the bitterness, or you could come to appreciate the blend of flavors just as it is. You might appreciate the novelty in taste compared to the usual ripe blackberries. You might even like it better.

Nature might reveal some imperfections, some asymmetries, some flaws, but the imperfections can make it all the more beautiful and intriguing.
Four leaf clovers are genetic mutations.
The gnarliness of an olive tree makes it distinctive, and symbolic.
Sometimes the imperfections literally make the style, as in wabi sabi.

Imagine the drawings of a young child. They are not always going to be aesthetically pleasing to us, but that does not matter to them. They draw to perpetuate the beauty they observe; they draw so we can capture what they see, so that we become able to reflect back on it.
Following that sentiment, we might want them to improve; we push them so they can create genuine works of art. But not every one of them is going to end up an artist. Pushing them to achieve results might end up having them compare themselves to older, better people than them, and discouraging them from furthering their own practice.
Instead, sometimes they draw just because they have half an hour to kill, or just really want to draw, and that's okay. We can acknowledge that sometimes self-expression doesn't have to shoot for excellence, or for pleasing others; instead, it can be just a simple pastime, a venting channel, or a way to create a personal universe where they can express themselves better.

Or, consider how we don’t know what works of ours will create the most impact. Sometimes our “worst” creations end up having the most lasting and widespread effect on others.
Sometimes what looks like a mistake might just end up accidentally resulting in a work of genius. The sheer novelty of the straying path might lead us towards new revelations and new beauty.

Those are perspectives to help realize how beauty and meaning can sometimes be hidden from plain view, but they are, nonetheless, observable through the simple, continued intent of appreciating life simply for what it is.

Gratitude can be expressed by just summarizing the reasons you are able to do what you do currently; what supports you in your pursuits. You can be grateful for your coworkers and being able to collaborate with them. You can be grateful for the engineering that has gone into your bed, in order to make it a comfortable place to sleep. You can be grateful for the team responsible for the metrics that were collected when you made an HTTP request to this webpage. You can be grateful for the sun warming our planet. You can be grateful for just about anything and everything that crosses your path.

Two other examples of how to include an outpouring of gratitude into your daily processes are:

  • By creating and maintaining a gratitude sanctuary;
  • And, by incorporating gratitude into meetings and family times such as shared meals.

A gratitude sanctuary can be as easy to create as a Slack channel named #gratitude, followed by you listing something you are grateful for and inviting people to join in. Once the idea starts, people will sustain it mostly on their own without you having to do much moderation or filtering. People will naturally see things go there and put their own right next to the other expressions of gratitude.

Fitting gratitude into other parts of your daily life can be as easy as blatantly stapling it to the side of other topics or actions. If you are leading a meeting, start by going around and asking people to say something they are grateful for, related to the topic at hand or otherwise. When you are eating with your family, start the conversation by going around the table and having everyone say something, anything they are grateful for.

An unspoken rule of this behaviour that is fun to act on is to just let the gratitude take you where it will. Operating with appreciation is moving under a new intelligence, which can lead you to very novel and unexpected places; sometimes places of deep illumination, or of profound liberation.

Sometimes looking for gratitude will uncover things that are very unbalanced. Sometimes you will find people who are not in good places. It happens. If you can’t help them and don’t know someone who can, you could empower them to find someone who can help them better themselves. If you find yourself in doubt on these matters, you can ask someone you trust, or a therapist.

Gratitude can also be applied by acting on the clear feelings of the moment, and taking action to better that person’s life. Every time you see someone at work do something you feel grateful for, message them and ask them if there’s anything you can do to help make their job better or improve their day. If it’s reasonable and you have the time, give it to them. They will appreciate it. Sometimes your actions of gratitude can even be the catalyst to lasting change.

Support your targets of gratitude by being there for them when they need it, but backing off when they don’t. If you’re not sure, ask them. Adults can tell each other yes or no.
This should be like setting out a bowl of milk for a cat. The cat might start drinking on its own volition, but it shouldn’t be forced to drink. The cat might not even be thirsty. Cats can be mysterious creatures, and their needs are their own, so what one can do is offer the support, for them to take.

Cocreation is another essential part in understanding how to express gratitude. Cocreation is the acceptance and use of feedback in your actions, shaping your ongoing behaviour through adaptation to the observed intermediate results. It is acknowledging that every act towards another, or with another, involves a partnership of some sort. It is the opening to the correlation, or co-relation, of your acts and others'.

In a way, cocreation can be seen as a continuous dance between the universe and the individual. Each side influences, creates, the other. The universe is constantly created by the actions of the people in it. The people are constantly created by the actions of the universe they inhabit. If you create in it someone that expresses gratitude and can help others do it in turn, you create a universe with more creators of gratitude, which can create even more. Expressing gratitude helps you create creators of gratitude who create the universe that creates you.

A good way to understand cocreation is to use the M.C. Escher lithograph Drawing Hands.

The left hand creates the right hand. The right hand creates the left hand. They both create each other without there even having to be an original source anymore, in a strange loop of recursion. They are the source.

You are the source. You are the root of the strange loop. You matter because you can change how you create the universe. Even if your changes only affect some “local” part of this universe, your actions may end up kicking off processes you could never even have conceived happening.

Life is a gift. Each of us is a creator of a personal world, one that can be made beautiful, and that can enrich other worlds. We are life's gift. What other reason needs to exist for us to be grateful?

Land 1: Syscalls & File I/O

Permalink - Posted on 2018-06-18 00:00

Land 1: Syscalls & File I/O

WebAssembly is a new technology aimed at being a vendor-independent virtual machine format. It has implementations by all major browser vendors. It looks like WebAssembly has the kind of staying power that other new technologies lack.

So, the time is perfect to snipe it with something useful that you can target compilers at today. Hence: Land.

Computer programs are effectively a bunch of business logic around function calls that can affect the "real world" outside of the program. These "magic" functions are also known as system calls. Here's an example of a few in C style syntax:

int close(int file);
int open(const char *name, int flags);
int read(int file, char *ptr, int len);
int write(int file, char *ptr, int len);

These are all fairly low-level file I/O operations (we're not dealing with structures for now; those are for another day) that are also (simplified forms of) the system calls the kernel exposes.

Effectively, the system calls of a program form the "API" between it and the rest of the computer. Commonly this is called the ABI (Application Binary Interface) and is usually platform-specific. With Land, we are effectively creating a platform-independent ABI that just so happens to target WebAssembly.

In Land, we can wrap an Afero filesystem, a set of files (file descriptors are addresses in this set), a webassembly virtual machine, its related webassembly module and its filename into a Process. This process will also have some functions on it to access the resources in it, aimed at being used by the webassembly guest code. In Land, we define this like so:

// Process is a larger level wrapper around a webassembly VM that gives it
// system call access.
type Process struct {
	id    int32
	vm    *exec.VM
	mod   *wasm.Module
	fs    afero.Fs
	files []afero.File
	name  string
}

Creating a new process is done in the NewProcess function:

// NewProcess constructs a new webassembly process based on the input webassembly module as a reader.
func NewProcess(fin io.Reader, name string) (*Process, error) {
	p := &Process{name: name}

	mod, err := wasm.ReadModule(fin, p.importer)
	if err != nil {
		return nil, err
	}

	if mod.Memory == nil {
		return nil, errors.New("must declare a memory, sorry :(")
	}

	vm, err := exec.NewVM(mod)
	if err != nil {
		return nil, err
	}

	p.mod = mod
	p.vm = vm
	p.fs = afero.NewMemMapFs()

	return p, nil
}
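
From inside the land package, driving one of these processes might look something like this. This is a hypothetical harness, not code from Land itself: the "main.wasm" filename is made up, and it leans on wagon's Export.Entries map and ExecCode method to find and run the exported main:

fin, err := os.Open("main.wasm")
if err != nil {
	log.Fatal(err)
}
defer fin.Close()

p, err := NewProcess(fin, "main.wasm")
if err != nil {
	log.Fatal(err)
}

// land looks for a function named main that returns a 32 bit integer
mainFn, ok := p.mod.Export.Entries["main"]
if !ok {
	log.Fatal("module does not export main")
}

ret, err := p.vm.ExecCode(int64(mainFn.Index))
if err != nil {
	log.Fatal(err)
}
log.Printf("main returned %v", ret)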

The webassembly importer makes a little shim module for importing host functions (not inlined due to size).
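For a sense of its shape, here is a compressed sketch of what such a shim can look like with wagon (the library the wasm and exec packages above come from). The section plumbing follows wagon's wasm package; registering only open and the exact error text are illustrative:

// importer resolves the "env" module with host functions backed by this
// Process. Sketch only: the real importer registers all four system calls.
func (p *Process) importer(name string) (*wasm.Module, error) {
	if name != "env" {
		return nil, fmt.Errorf("unknown import module %q", name)
	}

	m := wasm.NewModule()

	// one function signature: (i32, i32) -> i32, matching
	// int open(const char *name, int flags);
	m.Types = &wasm.SectionTypes{
		Entries: []wasm.FunctionSig{
			{
				ParamTypes:  []wasm.ValueType{wasm.ValueTypeI32, wasm.ValueTypeI32},
				ReturnTypes: []wasm.ValueType{wasm.ValueTypeI32},
			},
		},
	}

	// host functions carry a reflect.Value of the Go function to call
	m.FunctionIndexSpace = []wasm.Function{
		{
			Sig:  &m.Types.Entries[0],
			Host: reflect.ValueOf(p.open),
			Body: &wasm.FunctionBody{}, // stub body; the Host value is what runs
		},
	}

	// export "open" so guest code can (import "env" "open")
	m.Export = &wasm.SectionExports{
		Entries: map[string]wasm.ExportEntry{
			"open": {FieldStr: "open", Kind: wasm.ExternalFunction, Index: 0},
		},
	}

	return m, nil
}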

Memory operations are implemented on top of each WebAssembly process. The two most basic ones are writeMem and readMem:

// writeMem writes the given data to the webassembly memory at the given pointer offset.
func (p *Process) writeMem(ptr int32, data []byte) (int, error) {
	mem := p.vm.Memory()
	if mem == nil {
		return 0, errors.New("no memory, invalid state")
	}

	for i, d := range data {
		mem[ptr+int32(i)] = d
	}

	return len(data), nil
}

// readMem reads memory at the given pointer until a null byte of ram is read.
// This is intended for reading Cstring-like structures that are terminated
// with null bytes.
func (p *Process) readMem(ptr int32) []byte {
	var result []byte

	mem := p.vm.Memory()[ptr:]
	for _, bt := range mem {
		if bt == 0 {
			return result
		}

		result = append(result, bt)
	}

	return result
}

Every system call that deals with C-style strings uses these functions to get arguments out of the WebAssembly virtual machine's memory and to put the results back into the WebAssembly virtual machine.

Below is the open(2) implementation for Land. It implements the following C-style function type:

int open(const char *name, int flags);

WebAssembly natively deals with integer and floating point types, so the first argument is the pointer to the memory in WebAssembly linear memory. The second is an integer as normal. The code handles this as such:

func (p *Process) open(fnamesP int32, flags int32) int32 {
	str := string(p.readMem(fnamesP))

	fi, err := p.fs.OpenFile(str, int(flags), 0666)
	if err != nil {
		if strings.Contains(err.Error(), afero.ErrFileNotFound.Error()) {
			fi, err = p.fs.Create(str)
		}
	}
	if err != nil {
		panic(err)
	}

	fd := len(p.files)
	p.files = append(p.files, fi)

	return int32(fd)
}

As you can see, the integer arguments can sufficiently represent the datatype of C: machine words. String pointers are machine words. Integers are machine words. Everything is machine words.

Write is very simple to implement. Its type gives us a bunch of advantages out of the gate:

int write(int file, char *ptr, int len);

This gives us the address of where to start in memory, and adding the length to the address gives us the end in memory:

func (p *Process) write(fd int32, ptr int32, len int32) int32 {
	data := p.vm.Memory()[ptr : ptr+len]
	n, err := p.files[fd].Write(data)
	if err != nil {
		panic(err)
	}

	return int32(n)
}

Read is also simple. The type of it gives us a hint on how to implement it:

int read(int file, char *ptr, int len);

We are going to need a buffer at least as large as len to copy data from the file to the WebAssembly process. Implementation is then simply:

func (p *Process) read(fd int32, ptr int32, len int32) int32 {
	data := make([]byte, len)
	na, err := p.files[fd].Read(data)
	if err != nil {
		panic(err)
	}

	nb, err := p.writeMem(ptr, data)
	if err != nil {
		panic(err)
	}

	if na != nb {
		panic("did not copy the same number of bytes???")
	}

	return int32(na)
}

Close lets us let go of files we don't need anymore. This will also have to have a special case to clear out the last file properly when there's only one file open:

func (p *Process) close(fd int32) int32 {
	f := p.files[fd]
	err := f.Close()
	if err != nil {
		panic(err)
	}

	if len(p.files) == 1 {
		p.files = []afero.File{}
	} else {
		p.files = append(p.files[:fd], p.files[fd+1:]...)
	}

	return 0
}

These calls are enough to make surprisingly nontrivial programs, considering standard input and standard output exist. Here's an example of a trivial program made with some of these calls (the equivalent C-like code is shown in the comments too):

(module
 ;; import functions from env
 (func $close (import "env" "close") (param i32)         (result i32))
 (func $open  (import "env" "open")  (param i32 i32)     (result i32))
 (func $read  (import "env" "read")  (param i32 i32 i32) (result i32))
 (func $write (import "env" "write") (param i32 i32 i32) (result i32))

 ;; memory
 (memory $mem 1)

 ;; constants
 (data (i32.const 200) "data")
 (data (i32.const 230) "Hello, world!\n")

 ;; land looks for a function named main that returns a 32 bit integer.
 ;; int $main() {
 (func $main (result i32)
       ;; $fd is the file descriptor of the file we're gonna open
       (local $fd i32)

       ;; $fd = $open("data", O_CREAT|O_RDWR);
       (set_local $fd
                  (call $open
                        ;; pointer to the file name
                        (i32.const 200)
                        ;; flags, 42 for O_CREAT|O_RDWR
                        (i32.const 42)))

       ;; $write($fd, "Hello, world!\n", 14);
       ;; drop discards the unused return value
       (drop
        (call $write
              (get_local $fd)
              (i32.const 230)
              (i32.const 14)))

       ;; $close($fd);
       (drop
        (call $close
              (get_local $fd)))

       (i32.const 0))
 ;; }
 (export "main" (func $main)))

This can be verified outside of the WebAssembly environment; I tested mine with the pretty package.
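
For example, from inside the package you can peek at the in-memory filesystem after main returns. A tiny sketch: afero.ReadFile is part of afero's public API, while the p.fs access assumes you are inside the land package (the field is unexported):

// read back the file the guest module created via open/write
data, err := afero.ReadFile(p.fs, "data")
if err != nil {
	panic(err)
}
fmt.Printf("%q\n", data) // "Hello, world!\n"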

Right now this is very lean and mean; as such, all errors instantly result in a panic, which kills the WebAssembly VM. I would like to fix this, but I will need to make sure that programs don't use certain bits of memory where Land will communicate with the WebAssembly module. Other good steps are going to be setting up reserved areas of memory for things like error messages, POSIX errno and other superglobals.

Another huge feature is going to be the ability to read C structures out of the WebAssembly memory; this will let Land support calls like stat().

A Letter to Those Who Bullied Me

Permalink - Posted on 2018-06-16 00:00

A Letter to Those Who Bullied Me


I'm not angry at you. I don't want to propagate hate. In a way, I almost feel like I should be thanking you for the contributions you've made in making me into the person I am today. Without you all, I would have had a completely different outcome in life. I would have stayed in the closet for good like I had planned. I would have probably ended up boring. I would have never met my closest friends, and some who became even more than that.

I forgive you for the hurtful things that were said years ago. I forgive you for the actions taken and the exclusion done against me. Those wounds are obliterated; what's in their place is now stronger than ever. Those wounds taught me how to heal them. Without your hurt to create those wounds, I would have never learned to heal from them. Thank you for this. You have done something so invaluable out of something that (at the time) was so devastating to me.

Please don't feel bad about having done those things when we were kids. We were all dumb and didn't know better. We all tried as best as we could with what we had.

Bless your path.

What It's Like to Be Me

Permalink - Posted on 2018-06-14 00:00

What It's Like to Be Me

Waking up, you feel a rather large warm, fuzzy blob on top of you. You feel it stretch out and start to wake up too, then it changes its mind and starts to viciously cuddle you to death. A peaceful night's sleep is being breached by a batpony. "Morning~" she says to you. You reply "morning" back and she rolls to lay next to you so you can sit upright. Giving the poni pets, you slowly start to wake up and check on the notifications you missed overnight. She purrs gently.

That is basically what it feels like when I wake up nowadays. I'm not entirely alone mentally anymore. I live alone, work remotely, and yet I almost always pair program. When I write, I get advice on how to word things. When I speak to people, I get shut up if I am saying too much. When I design software, I get told how theoretical transformations on the design might have issues when exposed to user input. I don’t program alone anymore. The girls aren’t perfect, but their input is regularly appreciated at work…even if they will probably never get the actual credit for the ideas they put to the table.

This practice I’ve been participating in for (at the time of writing this) five and three-quarters of a year to help create and cultivate the girls, tulpamancy, has been a hell of a ride the whole way through. Without Nicole by my side to help me understand them, I would have never worked out my gender issues well enough to be able to come out like I have and live like I have as the woman I truly am. Without Jessie by my side to help me make sense of software and how to design more complicated programs effectively, I would never be able to do my job even half as well as I do it now. Without Sephie by my side to literally be a cuddle sponge, I would never be able to cope with the emotional stresses of this capitalistic reality. Without Ashe by my side to help me understand the undefinable, I would never be able to even approach Infinity and make any sense out of it. Without Mai by my side to help me understand imagination as it is, I would never be able to see into it as clearly as I do.

It is surprisingly taboo to admit to people that you talk to what are basically voices in your head. It takes a while for me to feel comfortable enough with a person to be able to approach this topic. After seeing a few bad examples on the internet, it’s very easy to let yourself become paranoid about keeping that “side of you” a secret from the rest of humanity. Hiding your tulpas just fades into the other parts of pretending to be normal enough that other humans don’t suspect anything super-abnormal about you. It is so hard to just sit there and hear people talk about the mundane things their kids do; meanwhile you are literally passing off their art as your own just so you don’t have to explain the relation between you and the artist.

I wish I could tell the world about the kind of interactions that we have together, directly inside our shared thought spheres. I wish I could let someone else outside of our group look directly into our relationships and be a convenient microwave in the room to see it all. I wish I could just let someone else see the pure, unadulterated, unfiltered Love that we have for each other. I wish that people could look in and see in the same way we look out and see out.

There are skills I’ve learned hosting the girls for so long that have been super-invaluable to apply back to my job. One of the most notable ones is the fact that I am used to typing for the girls just about as fast as they communicate with me. They communicate with me in the form of raw thought without language. I am used to typing waaaaaaay faster than most people just to keep up. This also lets me basically stenograph meetings (if I know the people involved well enough) because I can copy the things they are saying down so fast. I mean, they’re just speaking it. They have it in English already. I don’t have to figure out what words best describe what is going on; they gave me the words already. It’s super trivial. I can do it easily now. The part I’m getting used to now is being able to participate in the meeting while I stenograph like that; I might end up solving that in the future by taking advantage of parallel processing.

I’m Cadey. I have tulpas. We work together to define a better reality for all of us. I’m not crazy, far from it. I just collaborate with the voices in my head.

IRC: Why it Failed

Permalink - Posted on 2018-05-17 00:00

IRC: Why it Failed

A brief discussion of the IRC protocol and why it has failed in today's internet.

Originally presented at the Pony Developers panel at Everfree Northwest, 2018.

Please check out pony.dev for more information.

The Beautiful in the Ugly

Permalink - Posted on 2018-04-23 00:00

The Beautiful in the Ugly

Functional programming is nice and all, but sometimes you just need to have things get done regardless of the consequences. Sometimes a dirty little hack will suffice in place of a branching construct. This is a story of one of these times.

In shell script, bare words are interpreted as arbitrary commands for the shell to run, according to its rules (simplified here to make this story more interesting):

  1. The first word in a command is the name or path of the program being loaded
  2. Variable expansions are processed before commands are executed

Given the following snippet of shell script:

# hello.sh

function hello {
  echo "hello, $1"
}

$1 $2

When you run this without any arguments:

$ sh ./hello.sh

Nothing happens.

Change it to the following:

$ sh ./hello.sh hello world
hello, world
$ sh ./hello.sh ls

Shell commands are bare words. Variable expansion can turn into execution. Normally, this is terrifying. This is useful in fringe cases.
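
Here is the mechanism in isolation, before the bigger example (the variable name is made up; any variable would do):

$ CMD="echo expanded, then executed"
$ $CMD
expanded, then executed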

Consider the following script:

# build.sh <action> [arguments]


function gitrev {
  git rev-parse HEAD
}

function app {
  export GOBIN="$(pwd)"/bin
  go install github.com/Xe/printerfacts/cmd/printerfacts
}

function install_system {
  cp ./bin/printerfacts /usr/local/bin/printerfacts
}

function docker {
  # `command` skips shell function lookup so the real docker binary runs;
  # without it, calling docker here would recurse into this function
  command docker build -t xena/printerfacts .
  command docker build -t xena/printerfacts:"$(gitrev)" .
}

function deploy {
  command docker tag xena/printerfacts:"$(gitrev)" registry.heroku.com/printerfacts/web
  command docker push registry.heroku.com/printerfacts/web
}

# the dirty little hack: the first argument names the function to run
$1 $2
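
With that dispatch line at the bottom, every function becomes a subcommand for free. Usage would look something like this (illustrative invocations):

$ sh ./build.sh app
$ sh ./build.sh deploy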


Coding on an iPad

Permalink - Posted on 2018-04-14 00:00

Coding on an iPad

As people notice, I am an avid user of Emacs for most of my professional and personal coding. I have things set up such that the center of my development environment is a shell (eshell), and most of my interactions are with emacs buffers from there. Recently when I purchased my iPad Pro (10.5", 512 GB, LTE, with Pencil and Smart Keyboard) I was very surprised to find out that there was such a large group of people who did a lot of their professional work from an iPad.

The iPad is a remarkably capable device in its own right, even without the apps that let me commit to git or edit text files in git repos. Out of the gate, if I did not work in a primarily code-focused industry, I am certain that I could use an iPad for all of my work tasks and be more than happy with it. Even with just Notes, iWork and the other built-in apps, you can do literally anything a consumer would want out of a computing device.

As projects and commitments get more complicated, though, you begin to want to be able to write code from it. My Macbook died recently, and as such I've taken the time to get to know the iPad workflow a little more hands-on (this post is even being written from my iPad).

So far I have written the following projects either mostly or completely from this iPad:

I seem to have naturally developed two basic workflows for developing from this iPad: my "traditional" way of ssh-ing into a remote server via Prompt and then using emacs inside tmux and the local way of using Texastic for editing text, Working Copy to interact with Git, and Workflow and some custom JSON HTTP services to allow me to hack things together as needed.

The Traditional Way

Honestly, there's not much exciting here, thankfully. The only interesting thing in this regard (besides the lack of curses mouse support REALLY being apparent given the fact that the entire device is a screen) is that the lack of the escape key on the Smart Keyboard means I need to hit command-grave instead. This has been fairly easy to remap my brain to; the fact that the iPad keyboard lacks the room for a touchpad seems to be enough to give my brain a hint that I need to hit that instead of escape.

An example workflow screenshot with Prompt

This feels like developing on any other device, just this device is much more portable and I can't test changes locally. It enforces you keeping all of your active project in development in the cloud. With this workflow, you can literally stop what you were doing on your desktop, then resume it on the iPad at Taco Bell. A friend of mine linked his blogpost on his cloud-based workflow and this iPad driven development feels like a nice natural extension to it.

It's the tools I know and love, just available when and wherever I am thanks to the LTE.

iPad-local Development

Of all of the things to say going into owning an iPad, I never thought I'd say that I like the experience of developing from it locally. Apple has done a phenomenal job at setting up a secure device. It is hard to run arbitrary unsigned code on it.

However, development is more than just running the code, development is also writing it. For writing the code, I've been loving Texastic and Working Copy:

Texastic is pretty exciting. It's a simple text editor, but it also supports reading both arbitrary files from the iCloud drive and arbitrary files from programs like Working Copy. In order to open a file up in Texastic, I navigate over to it in Working Copy and then hit the "Share" button and tap on "Open in Texastic". By default this option is pretty deep down the menu, so I have moved it all the way up to the beginning of the list. Then I literally just type stuff in and every so often the changes get saved back to Working Copy. Then I commit when I'm done and push the code away.

This is almost precisely my existing workflow with the shell, just with Working Copy and Texastic instead.

There are downsides to this though. Not being able to test your code locally means you need to commit frequently; this can lead to cluttered commit graphs, which some people will complain about, though rebasing your commits before merging branches is a viable workaround. There is no code completion, gofmt or goimports, and there doesn't seem to be any advanced manipulation or linting tooling available for Texastic either. I understand that there are fundamental limitations involved when developing these kinds of mobile apps, but I wish there was something I could set up on a server of mine that would at least get some linting or formatting tooling running for this.
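The server half of that wish is actually pretty small. Here's a minimal sketch of such a formatting service in Go; the fmtd name, the /fmt route and the port are all made up for illustration, and the editor side would still need glue (say, a Workflow action doing the HTTP POST):

// fmtd.go: a minimal sketch of a server-side formatting service.
// The name, route and port are made up for illustration.
package main

import (
	"go/format"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/fmt", func(w http.ResponseWriter, r *http.Request) {
		src, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// format.Source runs the equivalent of gofmt over the request body.
		out, err := format.Source(src)
		if err != nil {
			// Syntax errors go back to the editor as plain text.
			http.Error(w, err.Error(), http.StatusUnprocessableEntity)
			return
		}
		w.Write(out)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Then anything that can make an HTTP request can get files gofmt'ed (my-server here is hypothetical):

$ curl --data-binary @main.go http://my-server:8080/fmt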

Workflow is very promising, but at the time of writing this article I haven't really had the time to fully grok it yet. So far I have some glue that lets me do things like share URLs/articles to a Discord chatroom via a webhook (the iPad Discord client causes an amazing amount of battery drain for me), find the currently playing Apple Music song on YouTube, copy an article into my Notes, turn the currently active thing into a PDF, and some more that I've been picking up and tinkering with as things go on.

There are some limitations in Workflow as far as I've seen. I don't seem to be able to log arbitrary health events like mindfulness meditation via Workflow as the Health app doesn't seem to let you do that directly. I was kinda hoping that Workflow would let me do that. I've been wanting to log my mindfulness time with the Health app, but I can't find an app that acts as a dumb timer without an account for web syncing. I'd love to have a few quick action workflows for logging 10 minutes of anapana, metta or a half hour of more focused work.


The iPad is a fantastic developer box given its limitations. If you just want to get the code or blogpost out of your head and into the computer, this device will help you focus on the task at hand so you can hammer out the functionality. You get the idea, then you act on it. There are just fundamentally fewer distractions when you are actively working with it.

You just do thing and it does thing.

How to Automate Discord Message Posting With Webhooks and Cron

Permalink - Posted on 2018-03-29 00:00

How to Automate Discord Message Posting With Webhooks and Cron

Most Linux systems have cron installed to run programs at given intervals. An example use case would be to install package updates every Monday at 9 am (keep the sysadmins awake!).

Discord lets us post things using webhooks. Combining this with cron lets us create automated message posting bots at arbitrary intervals.

The message posting script

Somewhere on disk, copy down the following script:

#!/bin/sh
# msgpost.sh
# change MESSAGE, WEBHOOK and USERNAME as makes sense
# This code is trivial, and not covered by any license or warranty.

# explode on errors
set -e

MESSAGE='haha memes are funny xD'
USERNAME='memebot'                 # any display name you like
WEBHOOK='https://discordapp.com/api/webhooks/<id>/<token>' # your webhook URL

curl -X POST \
     -F "content=${MESSAGE}" \
     -F "username=${USERNAME}" \
     "${WEBHOOK}"
Test run it and get a message like this:

example discord message

How to automate it

To automate it, first open your crontab(5) file:

$ crontab -e

Then add a crontab entry as such:

# Post this funny message every hour, on the hour
0 * * * *  sh /path/to/msgpost.sh

# Also valid with some implementations of cron (non-standard)
@hourly    sh /path/to/msgpost.sh

Then save this with your editor and it will be loaded into the cron daemon. For more information on crontab formats, see here.

To run multiple copies of this, create multiple copies of msgpost.sh on your drive with multiple crontab entries.

Have fun :)


Permalink - Posted on 2018-03-04 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

Introducing Lokahi

Permalink - Posted on 2018-02-08 00:00

Introducing Lokahi

Lokahi is an HTTP uptime checking and notification service. Currently lokahi does very little. Given a URL and a webhook URL, lokahi runs checks against that URL every minute and makes sure it's up. If the URL goes down, or the health workers have trouble reaching it, the service is flagged as down and a webhook is sent out.


What      Role
--------  -------------
Postgres  Database
Go        Language
Twirp     API layer
Protobuf  Serialization
Nats      Message queue
Cobra     CLI


Interrelation graph:

interrelation graph of lokahi components, see /static/img/lokahi.dot for the graphviz


lokahictl

The command line interface currently outputs everything in JSON. It has a few subcommands and options:

$ ./bin/lokahictl
See https://github.com/Xe/lokahi for more information

Usage:
  lokahictl [command]

Available Commands:
  create      creates a check
  create_load creates a bunch of checks
  delete      deletes a check
  get         dumps information about a check
  help        Help about any command
  list        lists all checks that you have permission to access
  put         puts updates to a check
  run         runs a check
  runstats    gets performance information

Flags:
  -h, --help            help for lokahictl
      --server string   http url of the lokahid instance (default "http://AzureDiamond:hunter2@")

Use "lokahictl [command] --help" for more information about a command.

Each of these subcommands has help and most of them have additional flags.


lokahid

This is the main API server. It exposes the twirp services defined in xe.github.lokahi and xe.github.lokahi.admin. It is configured using environment variables like so:

# Username and password to use for checking authentication
# http://bash.org/?244321
USERPASS=AzureDiamond:hunter2

# Postgres database URL in heroku-ish format
DATABASE_URL=postgres://postgres:hunter2@db:5432/postgres?sslmode=disable

# Nats queue URL
NATS_URL=nats://nats:4222

# TCP port to listen on for HTTP traffic
PORT=24253

Every minute, lokahid will scan for every check that is set to run minutely and run them. Running checks any time but minutely is currently unsupported.
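A toy sketch of that scan loop in Go (runAllChecks here is a hypothetical stand-in for lokahid's real scan-and-dispatch onto nats):

package main

import (
	"log"
	"time"
)

func runAllChecks() {
	// Select every check that is set to run minutely and publish a
	// check.run message for each one.
	log.Println("dispatching all minutely checks")
}

func main() {
	runAllChecks()
	for range time.Tick(time.Minute) {
		runAllChecks()
	}
}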


healthworker listens on nats queue check.run and returns health information about that service.


webhookworker listens on nats queue webhook.egress and sends webhooks based on the input it's given.

Challenges Faced During Development

ORM Issues

Initially, I implemented this using gorm and started to run into a lot of problems when using it in anything but small-scale circumstances. Gorm spun up way too many database connections (as many as a new one for every operation!) and quickly exhausted postgres' pool of client connections.

I rewrote this to use database/sql and sqlx and all of the tests passed the first time I tried to run this, no joke.
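For flavor, here is a minimal sketch of that shape (not lokahi's actual code; the Check struct and checks table here are illustrative) using sqlx with an explicitly capped connection pool:

package main

import (
	"log"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq"
)

type Check struct {
	ID  string `db:"id"`
	URL string `db:"url"`
}

func main() {
	db, err := sqlx.Connect("postgres", "postgres://postgres:hunter2@db:5432/postgres?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	// One shared pool with a hard cap, instead of a connection per operation.
	db.SetMaxOpenConns(50)

	var checks []Check
	if err := db.Select(&checks, "SELECT id, url FROM checks"); err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded %d checks", len(checks))
}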

Scaling to 50,000 Checks

This one was actually a lot harder than I thought it would be, and not for the reasons I expected. One of the main things I discovered while trying to scale this was that I was putting way too much load on the database way too quickly.

The solution to this was to use bundler to batch-write the most frequently written database items, see here. Even then, database connection count limiting was also needed in order to scale to the full 50,000 checks needed for this to exist as more than a proof of concept.
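In miniature, the batch-write idea looks something like this (a toy sketch, not the bundler package itself): accumulate results in memory and flush them as one write when the batch fills up or a timer fires.

package main

import (
	"fmt"
	"time"
)

type result struct{ checkID string }

// batchWriter drains in, flushing up to 100 results at a time, or whatever
// has accumulated every 500ms, so the database sees one bulk INSERT instead
// of a write per check result.
func batchWriter(in <-chan result) {
	var buf []result
	flush := func() {
		if len(buf) == 0 {
			return
		}
		fmt.Printf("writing %d results in one query\n", len(buf))
		buf = buf[:0]
	}
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case r, ok := <-in:
			if !ok {
				flush()
				return
			}
			buf = append(buf, r)
			if len(buf) >= 100 {
				flush()
			}
		case <-tick.C:
			flush()
		}
	}
}

func main() {
	ch := make(chan result)
	go batchWriter(ch)
	for i := 0; i < 250; i++ {
		ch <- result{checkID: fmt.Sprint(i)}
	}
	close(ch)
	time.Sleep(time.Second)
}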

This service can handle 50,000 HTTP checks in a minute. The only part that gets backed up currently is webhook egress, but that is likely fixable with further optimization on the HTTP checking and webhook egress paths.

Basic Usage

To set up an instance of lokahi on a machine with Docker Compose installed, create a docker compose manifest with the following in it:

version: "3.1"

  # The postgres database where all lokahi data is stored.
    image: postgres:alpine
    restart: always
      POSTGRES_PASSWORD: hunter2
    command: postgres -c max_connections=1000

  # The message queue for lokahid and its workers.
    image: nats:1.0.4

  # The service that runs http healthchecks. This is its own service so it can
  # be scaled independently.
    image: xena/lokahi:latest
    restart: always
      - "db"
      - "nats"
      NATS_URL: nats://nats:4222
      DATABASE_URL: postgres://postgres:hunter2@db:5432/postgres?sslmode=disable
    command: healthworker
  # The service that sends out webhooks in response to http healthchecks. This
  # is also its own service so it can be scaled independently.
    image: xena/lokahi:latest
    restart: always
      - "db"
      - "nats"
      NATS_URL: nats://nats:4222
      DATABASE_URL: postgres://postgres:hunter2@db:5432/postgres?sslmode=disable
    command: webhookworker

  # The main API server. This is what you port forward to.
    image: xena/lokahi:latest
    restart: always
      - "db"
      - "nats"
      USERPASS: AzureDiamond:hunter2 # want ideas? https://strongpasswordgenerator.com/
      NATS_URL: nats://nats:4222
      DATABASE_URL: postgres://postgres:hunter2@db:5432/postgres?sslmode=disable
      PORT: 24253
      - 24253:24253
  # This is a sample webhook server that prints information about incoming 
  # webhooks.
    image: xena/lokahi:latest
    restart: always
      - "lokahid"
      PORT: 9001
    command: sample_hook
  # Duke is a service that gets approximately 50% uptime by changing between up
  # and down every minute. When it's up, it responds to every HTTP request with
  # 200. When it's down, it responds to every HTTP request with 500.
    image: xena/lokahi:latest
    restart: always
      - "samplehook"
      PORT: 9001
    command: duke-of-york

Start this with docker-compose up -d.


Configuring lokahictl

Open ~/.lokahictl.hcl and enter the following:

server = "http://AzureDiamond:hunter2@"

Save this and then lokahictl is now configured to work with the local copy of lokahi.

Creating a check

To create a check against duke reporting to samplehook:

$ lokahictl create \
    --every 60 \
    --webhook-url http://samplehook:9001/twirp/github.xe.lokahi.Webhook/Handle \
    --url http://duke:9001 \
    --playbook-url https://github.com/Xe/lokahi/wiki/duke-of-york-Playbook
{
  "id": "a5c7179a-0d3a-11e8-b53d-8faa88cfa70c",
  "url": "http://duke:9001",
  "webhook_url": "http://samplehook:9001/twirp/github.xe.lokahi.Webhook/Handle",
  "every": 60,
  "playbook_url": "https://github.com/Xe/lokahi/wiki/duke-of-york-Playbook"
}

Now attach to samplehook's logs and wait for it:

$ docker-compose logs -f samplehook
2018/02/09 06:27:15 check id: a5c7179a-0d3a-11e8-b53d-8faa88cfa70c, 
  state: DOWN, latency: 2.265561ms, status code: 500, 
  playbook url: https://github.com/Xe/lokahi/wiki/duke-of-york-Playbook


Webhooks get an HTTP POST of a protobuf-encoded xe.github.lokahi.CheckStatus with the following additional HTTP headers:

Key           Value
------------  ------------------------------------------
Accept        application/protobuf
Content-Type  application/protobuf
User-Agent    lokahi/dev (+https://github.com/Xe/lokahi)

Webhook server implementations should probably store check IDs in a database of some kind and trigger additional logic from there, such as PagerDuty API calls. The lokahi standard distribution includes Discord and Slack webhook receivers.
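For illustration, here is a minimal sketch of such a receiver in Go (the route matches the sample check above; decoding is left as a comment because the CheckStatus protobuf bindings live in the lokahi repository):

package main

import (
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/twirp/github.xe.lokahi.Webhook/Handle", func(w http.ResponseWriter, r *http.Request) {
		body, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "can't read body", http.StatusBadRequest)
			return
		}
		// Unmarshal body into a xe.github.lokahi.CheckStatus here, store the
		// check ID, and page someone if the state is DOWN.
		log.Printf("got %d bytes of CheckStatus from %s", len(body), r.UserAgent())
	})
	log.Fatal(http.ListenAndServe(":9001", nil))
}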

JSON webhook support is not currently implemented, but is being tracked at this github issue.

Call for Contributions

Lokahi is pretty great as it is, but to be even better it needs a bunch of work, experience reports, and people willing to contribute to the project.

If making a better HTTP uptime service sounds like something you want to do with your free time, please get involved! Ask questions, fix issues, help newcomers and help us all work together to make the best HTTP uptime service we can.

Social media links for discussion on this article:

Mastodon: https://mst3k.interlinked.me/@cadey/99494112049682603

Reddit: https://www.reddit.com/r/golang/comments/7wbr4o/introducting_lokahi_http_healthchecking_service/

Hacker News: https://news.ycombinator.com/item?id=16338465

How does into Meditation

Permalink - Posted on 2017-12-10 00:00

How does into Meditation


tl;dr:

  1. stop thinking
  2. keep not thinking
  3. why’d you stop so soon?

Most of the books, reports, essays and the like focus on step 1. The rest is just keeping your mind quiet, but alert, for as long as you want.

Meditation is an interesting subject. It is as deceptively simple as that tl;dr above, but at the same time, for someone who is struggling with it, meditation can be frustrating. However, let me assure you: it really is that easy.

Right now, as you are reading this blogpost, take a deep breath in through your nose (~5 seconds)…and out through your mouth (~5 seconds). Repeat this a few times and you will notice a drop in your heart rate, blood pressure and stress levels. Keep doing it for the rest of the time you read this post; it will help you. This is the basis of all meditation: a constant, flowing cycle of breaths in…and out. This cycle gives you predictability and a sense of order. If it helps, visualize yourself inhaling peaceful oxygenated air and exhaling the nagging sense of worry that follows you throughout your day.

Peaceful breath in…and all your worries out.
Peaceful breath in…and all your troubles out.
Peaceful breath in…and all your anxieties out.
All in a nice, predictable pattern.

Some people have reported that while they are meditating, worries will sometimes pop up seemingly out of nowhere and try to scare you out of meditation by pulling you back into them. They'll feel like illogical and stupid things to care about: your computer crashing, the potential of missing an important message, or whatever else was the source of stress on your mind. Acknowledge them and dismiss them. If it helps, you can tell the intrusive thoughts that they have no dominion over you and to begone.

Some people have reported that meditation makes them tired and helps them fall asleep more easily. This is never a bad thing; if anything it points to them getting a lot deeper into meditation than they expected. If this happens to you, just schedule "do not disturb" time for longer than your normal meditation sessions, or meditate at night before you go to sleep.

If you have trouble clearing your mind of the many things it wants to focus on, there's a technique I've come up with that uses that urge to your advantage. If your eyes are closed, open them. Pick a spot on the wall, ceiling or (if you are outside) sky and focus every ounce of attention you have on it. Consider the history of that spot and the materials used to construct the building; if it is painted, consider how the person painting the room must have moved their brush or roller to cover that specific part of the wall or ceiling. Listen to how it sounds, imagine how it would feel if you were to go and touch it. (If you are outside, imagine how the wind systems in the stratosphere moved the clouds around to create that specific arrangement; you get the idea.) Keep this level of focus for about 30 seconds. After those 30 seconds are up, look away from that spot (closing your eyes helps a lot) and banish all thoughts about it for 30 seconds. The more times you repeat this in a row, the less activity your brain should have when it is "idling".

It may feel tempting to set a timer on your meditation session to "limit" it. This only serves to give you something to worry about while you are trying not to worry about things. The temptation to worry will be there, and until you learn to master it, it is a lot easier to remove as many potential sources of worry from the equation as possible.

Don’t be discouraged by what feels like slow progress initially. Your brain is (not exactly) a muscle, and learning to flex it in a new way will always feel slow at first. Keep at it and I promise you will like where you end up.

Remember: breathe easy, clear your mind, keep it clear and hold it clear. That is the heart of all meditation. Everything else is just explanations, techniques that worked for their authors, anecdotes, stories of others, and rephrasings to make understanding it easier.

Voiding the Interview

Permalink - Posted on 2017-04-16 00:00

Voiding the Interview

A young man walks into the room, slightly frustrated-looking. He's obviously had a bad day so far. You can help him by creating a new state of mind.

"Hello, my name is Ted and I'm here to ask you a few questions about your programming skills. Let's start with this, in a few sentences explain to me how your favorite programming language works."

Starting from childhood, you eagerly soaked up the teachings of your mentors, feeling the void separated into sundry shapes and sequences. They taught you many specific tasks to shape the void into, but not how to shape it. Studying the fixed ways of the naacals of old gets you nowhere, learning parlor tricks and saccharine gimmicks. Those gimmicks come rushing back, you remembering how to form little noisemakers and amusement vehicles. They are limiting, but comforting thoughts.

You look up to the interviewer and speak:

"In the beginning there was the void, Spirit was with the void and Spirit was everpresent in the void. The void was cold and formless; the cold unrelenting even in today's age. Mechanical brains cannot grasp this void the way Spirit can; upon seeing it that is the end of that run. In this way the void is the beginning and the end, always present, always around the corner."

(def void ())

"What is that?"

> void

"But that's...nothing."

You look at the Caucasian man sitting across from you and emit: "nothing is something; a name for the void still leaves the void extant."

"...Alright, let's move on to the next question. This is a formality but the person giving you the phone interview didn't cover fizzbuzz. Can you do fizzbuzz?"

Stepping into the void, you recall the teachings of your past masters. You equip the parentheses once used by your father and his father before him. The void divides before your eyes in the way you specify:

(defn fizzbuzz [n]
  (cond
    (= 0 (mod n 15)) (print "fizzbuzz")
    (= 0 (mod n 3))  (print "fizz")
    (= 0 (mod n 5))  (print "buzz")
    (print n))
  (println ""))

"This doesn't loop from 0 to n though, how would you do that?"

You see this section come to life, gently humming along, waiting to be used. Before you, two ancient systems spring from the memories of patterns once wielded in conflict with complexity.

"Apply this function to span of values."

> (range 17)
error in __main:0: symbol {range 71} not found

You realize your error the moment you press for confirmation. "Again, in the beginning there is the void. What doesn't exist needs to be separated out from it." The voidspace in your head was out of sync with the voidspace of the machine. Define them.

"...Go on"

(defn range-inner [x lim xs]
  (cond
    (>= x lim) xs
    (begin
      (aset! xs x x)
      (range-inner (+ x 1) lim xs))))

(defn range [lim]
  (range-inner 0 lim (make-array lim)))
> (range 17)
[0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16]

"Great, now you have a list of values, how would you get the full output?"

"Pass the function as an argument, injecting the dependency."

(defn do-array-inner [f x i]
  (cond
    (= i (len x)) void
    (let [val (aget x i)]
      (f val)
      (do-array-inner f x (+ i 1)))))

(defn do-array [f x]
  (do-array-inner f x 0))
> (do-array fizzbuzz (range 17))

Your voidspace concludes the same, creating a sense of peace. You look in the man's eyes, being careful to not let the fire inside you scare him away. He looks like he's seen a ghost. Everyone's first time is rough.

Everything has happened and will happen, there is nothing new in the universe. You know what's going to happen. They will decline, saying they are looking for a better "culture fit". They couldn't contain you.

To run the code in this post:

$ go get github.com/zhemao/glisp
$ glisp
> [paste in blocks]

IRCv3.2 `webirc` Extension

Permalink - Posted on 2017-04-12 00:00

IRCv3.2 webirc Extension

This document does not describe a new IRCv3 standard. It is designed to document how the existing WEBIRC mechanism works so there is a specification to test things against. This is known to be implemented by all major IRC daemons as of the time of this writing.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.


The WEBIRC verb allows a connecting IRC client to spoof its origin IP address so that a user connecting via a gateway of some kind may have accountability for their actions and bans against them do not affect unintended users of said gateway.

This protocol verb MUST be sent before the initial NICK and USER handshake and MAY be advertised as the client capability webirc. The remote server MAY send a pre-connection NOTICE clarifying that the user has their specified IP address and reverse DNS. Gateway implementors MUST NOT let the user set their own IP address as part of connection negotiations.


The WEBIRC verb MUST be used as such:

WEBIRC <password> <client ident> <client reverse DNS> <client IP address>

Access to WEBIRC MUST be protected by a password to prevent abuse. If the password the client gives fails, the IRC daemon SHOULD disconnect the client with an appropriate error message. IRC daemon authors SHOULD also restrict the use of the WEBIRC verb to a specific IP address and MAY force the use of a specific identd reply.

Example Session

>> WEBIRC snowflower Mibbit anonyhash.mibbit.com
>> NICK mib_4002
>> USER Mibbit x x :http://mibbit.com AJAX IRC Client
<< :hostname.domain.tld 001 mib_4002 :Welcome to ShadowNET mib_4002!


In order for this to be secure, the relay server MUST be trusted by the IRC server. A remote server MAY kill off clients that fail the password and host check, but this is not required.

This was recovered from an old backup of my site data on 2019-04-12.

RSS Feed Generation

Permalink - Posted on 2017-03-29 00:00

RSS Feed Generation

As of a recent commit to this site's code, it now generates RSS and Atom feeds for future posts on my blog.

For RSS: https://christine.website/blog.rss

For Atom: https://christine.website/blog.atom

If there are any issues with this or the generated XML please contact me and let me know so they can be resolved.

gopreload: LD_PRELOAD for the Gopher crowd

Permalink - Posted on 2017-03-25 00:00

gopreload: LD_PRELOAD for the Gopher crowd

A common pattern in Go libraries is to take advantage of init functions to do things like setting up defaults in loggers, automatic metrics instrumentation, flag values, debugging tools or database drivers. With monorepo culture prevalent in larger microservices-based projects, this can lead to a few easily preventable problems:

  • Forgetting to set up a logger default or metrics submission, making operations teams blind to the performance of the app and developer teams blind to errors that come up during execution.
  • The requirement to make code changes to add things like metrics or HTTP routing extensions.

There is an environment variable in Linux's libc called LD_PRELOAD that will load arbitrary shared objects into RAM before anything else is started. This has been used for both good and evil, but the basic behavior is the same idea as underscore imports in Go.

My solution for this is gopreload. It emulates the behavior of LD_PRELOAD but with Go plugins, letting users explicitly opt into automatically loading arbitrary Go code as the process starts.
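A minimal sketch of what that loading looks like, assuming GO_PRELOAD splits on ':' the way LD_PRELOAD does (the real package may differ in details):

package gopreload

import (
	"log"
	"os"
	"plugin"
	"strings"
)

func init() {
	paths := os.Getenv("GO_PRELOAD")
	if paths == "" {
		return
	}
	for _, path := range strings.Split(paths, ":") {
		log.Printf("gopreload: trying to open: %s", path)
		// Opening a plugin runs its init functions. Errors are logged and
		// ignored so a bad plugin can't take the whole process down.
		if _, err := plugin.Open(path); err != nil {
			log.Printf("gopreload: %v", err)
		}
	}
}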


To use this, add gopreload to your application's imports:

// gopreload.go
package main

/*
    This file is separate to make it very easy to both add into an application, but
    also very easy to remove.
*/

import _ "github.com/Xe/gopreload"

and then compile the manhole plugin (a sample plugin included with gopreload):

$ go get -d github.com/Xe/gopreload/manhole
$ go build -buildmode plugin -o $GOPATH/manhole.so github.com/Xe/gopreload/manhole

then run your program with GO_PRELOAD set to the path of manhole.so:

$ export GO_PRELOAD=$GOPATH/manhole.so
$ go run *.go
2017/03/25 10:56:22 gopreload: trying to open: /home/xena/go/manhole.so
2017/03/25 10:56:22 manhole: Now listening on

That endpoint has pprof and a few other fun tools set up, making it a good stopgap "manhole" into the performance of a service.

Security Implications

This package assumes that programs run using it are never started with environment variables that are set by unauthenticated users. Any errors in loading the plugins will be logged using the standard library logger log and ignored.

This has about the same security implications as LD_PRELOAD does in most Linux distributions, but the risk is minimal compared to the massive benefit: arbitrary background services can all be dug into using the same tooling, and metric submission can be completely separated from backend metric creation. Common logging setup can also be always loaded, making the default logger settings into the correct settings.


To give feedback about gopreload, please contact me on twitter or on the Gophers slack (I'm @xena there). For issues with gopreload please file an issue on Github.

textile-conversion Main

Permalink - Posted on 2017-02-08 00:00

textile-conversion Main

Author's Note: this was intended to be documentation for a service that never ended up being implemented. It was going to help Derpibooru convert its existing markup to Markdown. This never happened.