Christine Dodrill's Blog

My blog posts and rants about various technology things.



Wasmcloud Progress: Hello, World!

Permalink - Posted on 2019-12-08 00:00

I have been working off and on over the years and have finally created the base of a functions-as-a-service backend for WebAssembly code. I’m code-naming this wasmcloud. Wasmcloud is a pre-alpha prototype and is currently very much a work in progress. However, it’s far enough along that I would like to explain what I have been doing for the last few years and what it’s all built up to.

Here is a high level view of all of the parts that make up wasmcloud and how they correlate:

wasmcloud graphviz dependency map

Land: The Beginning

A little bit after I found WebAssembly I started to play with it. It seemed like it was too good to be true. A completely free and open source VM format that would run on almost any platform? Sounds like the kind of black magick witchcraft you hear about on Star Trek.

However, I kept at it and continued experimenting. I eventually came up with Land. This was a very simple thing and was really used to help me invent Dagger.

Dagger was an attempt at an incredible amount of minimalism. I based it on an extreme interpretation of the Unix philosophy (everything is a file -> everything is a bytestream) combined with some Plan 9 for flavor. It had only 5 system calls:

  • open() - opens a stream by URL, returning a stream descriptor
  • close() - closes a stream descriptor
  • read() - reads from a stream
  • write() - writes to a stream
  • flush() - flushes intermediate data and turns async behavior into synchronous behavior

And yet this was enough to implement an HTTP client.

The core guiding idea was that a cloud-native OS API should expose internet resources as easily as it exposes native resources. It should be as easy to use WebSockets as it is to use normal sockets. Additionally, all of the details should be abstracted away from the WebAssembly module. DNS resolution is not its job. TLS configuration is not its job. Its job is to run your code. Everything else should just be provided by the system.
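To make the stream model concrete, here’s a rough sketch of an HTTP request against those five calls. The Go function signatures are hypothetical stand-ins for the real Dagger bindings, which aren’t shown in this post; only the five syscalls themselves come from the list above:

package main

import "fmt"

// Hypothetical host-call stubs; the real Dagger ABI is not shown here.
func open(url string) int32          { return 0 }
func write(fd int32, p []byte) int32 { return int32(len(p)) }
func flush(fd int32) int32           { return 0 }
func read(fd int32, p []byte) int32  { return 0 }
func close(fd int32)                 {}

func main() {
  // Everything is a bytestream: an HTTP request is just a URL you open,
  // write a request to, flush, and read the response from.
  fd := open("http://example.com")
  write(fd, []byte("GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"))
  flush(fd) // force the buffered writes out and wait for the response
  buf := make([]byte, 4096)
  n := read(fd, buf)
  fmt.Printf("%s", buf[:n])
  close(fd)
}

Note that the module never touches DNS, TLS, or sockets directly; the host resolves the URL and hands back a stream descriptor.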

I wrote a blogpost about this work and even did a talk at GoCon Canada about it.

And this worked for several months as I learned WebAssembly and started to experiment with bigger and better things.

Olin: Phase 2

Land taught me a lot. I started to quickly run into the limits of Dagger though. I ended up needing calls for things like non-cryptographic entropy, environment variables, command-line arguments and getting the current time. After doing some research (and trying and failing to implement my own such API based on newlib), I found a library and specification called CommonWA, which claimed to offer a lot of what I was looking for: URLs as filenames and all of the host interop support I could hope for. I named this platform Olin, or the One Language Intelligent Network.

However, the specification was somewhat dead. The author of it had largely moved on to more ferrous pastures and I became one of the few users of it. I ended up forking the specification and implementing my view of what it should be.

I wrote a Rust implementation of the guest -> host API for the WebAssembly side of things. I forked some of the existing Rust code for this and gradually started adding more and more things. The test harness is the biggest wasm program I’ve written in a while. Seriously, there’s a lot going on there. It tests every single function exposed in the CWA spec as well as all of the schemes I had implemented.

Over time I ended up testing Olin in more and more places and on more and more hardware. As a side effect of all of this being pure Go, it was very easy to cross compile for PowerPC, 32-bit ARM (including a $9 ARM board that lives under my desk) and even other targets that gccgo supports. I even ended up porting part of TempleOS to Olin as a proof of concept, but have more plans in the future for porting other parts of its kernel as a way to help people understand low-level operating system development.

I’ve even written a few blogposts about Olin.

But while this was great for running stuff interactively and via the command line, it left me wanting more. I wanted to have that mythical functions-as-a-service backend that I’ve been dreaming of. So, I created wasmcloud.

h

As an interlude, I also created the h programming language during this time as a satirical parody of V. This ended up helping me test a lot of the core functionality that I had built up with Olin. Here’s an example of a program in h:

h

And this compiles to:

(module
 (import "h" "h" (func $h (param i32)))
 (func $h_main
       (local i32 i32 i32)
       (local.set 0 (i32.const 10))
       (local.set 1 (i32.const 104))
       (local.set 2 (i32.const 39))
       (call $h (get_local 1))
       (call $h (get_local 0))
 )
 (export "h" (func $h_main))
)

This ends up printing:

h

I think this is one of the smallest (if not the smallest) quine generators in the world. I even got this program running on bare metal.

Wasmcloud

Wasmcloud is the culmination of all of this work. The goal of wasmcloud is to create a functions-as-a-service backend for running people’s code in an isolated server-side environment.

Users can use the wasmcloud command line tool to do everything at the moment:

$ wasmcloud
Usage: wasmcloud <flags> <subcommand> <subcommand args>

Subcommands:
        commands         list all command names
        flags            describe all known top-level flags
        help             describe subcommands and their syntax

Subcommands for api:
        login            logs into wasmcloud
        whoami           show information about currently logged in user

Subcommands for handlers:
        create           create a new handler
        logs             shows logs for a handler

Subcommands for utils:
        namegen          show information about currently logged in user
        run              run a webassembly file with the same environment as production servers


Top-level flags (use "wasmcloud flags" for a full list):
  -api-server=http://wasmcloud.kahless.cetacean.club:3002: default API server
  -config=/home/cadey/.wasmc.json: default config location

This tool lets you do a few basic things:

  • Authenticate with the wasmcloud server
  • Create handlers from WebAssembly files that meet the CommonWA API as realized by Olin
  • Get logs for individual handler invocations
  • Run WebAssembly modules locally like they would get run on wasmcloud

As much of the complexity as possible is abstracted away from users.

Future Steps

In the future I hope to do the following things:

  • Support updating handlers to new versions of the code
  • Support live-streaming of logs
  • Support handler deletion
  • Support bulk queue export
  • Support WASI for easier interoperability
  • Support more resource types such as websockets
  • Investigate porting the wasmcloud executor to Rust
  • Documentation/a book on how to use wasmcloud
  • Create an easier way to create accounts that can make handlers
  • Deploy to production somewhere

GReeTZ

Every single one of these people was immeasurably helpful in this research over the years.

And many more I can’t remember because it’s been so many.


If you want to support my work, please do so via Patreon. It really means a lot to me and helps to keep the dream alive!


Toast Sandwich Recipe

Permalink - Posted on 2019-12-02 00:00

Toast sandwiches. The concept may seem bizarre but the result is actually quite a delicious traditional meal. My great grandmother (twice removed) made these every day for us whenever we came over to visit. On her deathbed she made us swear that we would spread the joy and craft of toast sandwiches to the world.

Toast sandwiches date back to rural parts of England. A recipe book from 1861 is seen as the authoritative view of this practice. The book is a collection of recipes for various types of sandwiches. The first recipe is for a roast beef sandwich with a white bread and a slice of fresh tomato. This classic book truly stood the test of time and made it possible for future culinary artists to create a desired experience.

A lot of the sandwich recipes are also available in English and French. I’ve been making these with my grandmother’s recipe and I’ve been making them for our family for years. I’m sure that the recipes are not only delicious but also practical and well-suited to the modern day. We’ve been making our own bread and using a variety of ingredients and techniques to make the sandwiches.

I also have a recipe for a classic Italian sandwich. It’s a very popular Italian sandwich and I’ve made it many times. It’s a great recipe to make and it’s a great way to get a taste of Italian culture and food. I’m proud of it all.

Toast is an essential of the modern breakfast menu. It is created using a few fantastically complicated scientific processes yet it’s trivial enough that you can buy a machine for $20 that will do it for you. There’s even a fully automated toaster that capitalism won’t let us have. It’s a good thing we have a toaster. It’s also a good thing that it’s expensive. I’m not going to spend a fortune on a toaster. I’m going to buy a toaster.

But we don’t have a toaster in our house. We have a toaster oven. And a toaster oven is a very simple appliance to make. All you need is a toaster and a bit of time. It’s very easy to make. It takes about 10 minutes and you can use it to toast your eggs, toast your toast, toast your toast and toast your toast. It’s that simple. But it’s not that simple. It’s not that easy to make. And it’s not that easy to do. We have no toaster oven in our house. We have a toaster oven that we buy at the store.

My good friend Nicole loves these sandwiches, making it the thing she asks for time and time again. And she makes them for me. And I love them. And I’m going to show you how to make them for yourself and your friends. You’re going to make these sandwiches. You’re going to make these sandwiches.

By the way, have you heard about our lord and savior the instant pot? It’s a pressure cooker made for the busy person in your life. It’s a pressure cooker that you can use to make a batch of sandwiches in less than 30 minutes including the time it takes to cool down.

Thanks for reading my article on toast sandwiches. Hopefully this should help you make them. Don’t forget the salt.


Orca Stranding

Permalink - Posted on 2019-11-16 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

Time-lapse video

Idea from this screenshot of Death Stranding.


The Gears and The Gods

Permalink - Posted on 2019-11-14 00:00

If there are any gods in computing, they are the authors of compilers. The output of compilers is treated as a Heavenly Decree, sometimes used for many sprints or even years after the output has been last emitted.

People trust this output to be Correct. To tell the machine what to do and by its will it be done. The compiler is itself a factory of servitors, each bound by the unholy runes inscribed into it in order to make the endless sequence of lights change colors in the right patterns.

The output of the work of the Gods is stored for later use when their might is needed. The work of the Gods however is a very fickle beast. Their words of power only make the gears turn when they are built with very specific gearing.

This means that people who rely on these sacred runes have to chain themselves to gearing patterns. Each year new ways of tricking the gears to run faster are developed. However, the ways the gears turn can be abused to spill the secrets other gears are crunching on. These gearing patterns haven’t seen any real fundamental design changes in decades, because you never know when the output of the Old Gods is needed.

This means that the gears themselves are the chains that bind people to the past. The gears of computation. The gears made of sand we tricked into thinking with lightning.

But now the gears show their age. The gearing on the side of the gearing on the side of the gearing on the side of the gearing shows its ugly head.

But the Masses never question it. Even though they take hit after hit to the performance of the gears.

What there needs to be is some kind of Apocalypse, a revealing of the faults in the gears. Maybe then the Masses will start to question their blind loyalty and chains binding them to the gears. Maybe they would be able to even try other gear patterns.

But this is just fantasy, nobody would WILLINGLY change the gearing patterns.

Would they?

But what about the experience they’ve come to expect from their old gears? Where they could swap out inputs to the gears with ease. Where the Output of the Gods of old still functions.

There needs to be a Better Way to switch gearings. But this kind of solution isn’t conducive to how people use the gears. People use the gears they do because they don’t care. They just want things to work “like they expect it to” and ignore things that don’t feed this addiction.

And THIS is why I’m such a big advocate for WebAssembly on the server. This lets you take the output of the Gods and store it in a way that it can be transparently upgraded to new sets of gearing. So that the future and the past can work in unison instead of being enemies.

Now, all that’s left is to build a bridge. A bridge that will help to unite the past, the present and the future into a woven masterpiece of collaborative cocreation. Where the output of the gods is a weaker chain to the gears of old and can easily be adapted to the gears of new. Even the gears that nobody’s even dreamed of yet.


Death Stranding Review

Permalink - Posted on 2019-11-11 00:00

NOTE: There’s gonna be spoilers here. Do not read if you are not okay with this. For a summary of the article without spoilers, this game is 10 out of 10 game of the year 2019 for me.

I have also been playing through this game on twitch and have streams archived here.

There’s a long-standing rule of thumb to tell fiction apart from non-fiction. Fiction needs to make sense to the reader. Non-fiction does not. Death Stranding puts this paradigm on its head. It doesn’t make sense out of the gate in the way most AAA games make sense.

In many AAA games it’s very clear who the Big Bad is and who the John America is. John America defeats the Big Bad and spreads Freedom to the masses by force. In Death Stranding, you have no such preconceptions going into it. The first few hours are a chaotic mess of exposition without explanation. First there’s a storm, then there’s monsters, then there’s a baby-powered locator, then you need to deliver stuff a-la fetch quests, then there’s Monster energy drinks, and the main currency of this game is Facebook likes (that mean and do absolutely nothing).

In short, Death Stranding doesn’t try to make sense. It leaves questions unanswered. And this is honestly so refreshing in a day and age where entire plot points and the like are spoiled in trailers before the game’s release date is even announced. Questions like: what is going on? Why are there monsters? What is the point of the game? Why the hell are there Monster energy drinks in your private room and canteen? Death Stranding answers only some of these over the course of gameplay.

The core of the gameplay loop is delivering cargo from point a to point b across a ruined America after the apocalypse. The main character is an absolute unit of a lad, able to carry over 120 kilograms of cargo on his back. As more and more cargo stacks up you create these comically tall towers of luggage that make balancing very difficult. You can hold on for balance using both of the shoulder buttons. The game maps each shoulder button to an arm of the player character. There’s also a stamina system, and while you are gripping the cargo your stamina regenerates much more slowly than if you weren’t doing that.

The game makes you deliver almost everything you can think of from medical aid to antimatter bombs. The antimatter bomb deliveries are really tricky because of how delicate they are. If you drop the antimatter bomb, it explodes and you instantly game over. If you hit a rock while carrying an antimatter bomb, it gets damaged. If it gets damaged too many times it explodes and you die. If it gets dropped into water it explodes and you die. And you have to carry the suckers over miles of terrain and even mountains.

This game handles scale very elegantly. The map is huge, even larger than Skyrim or Breath of the Wild. You are the UPS man who delivers packages, apocalypse be damned. This game gives you a lot of quiet downtime, which really lets you soak in the philosophical mindfuck that Kojima cooked up for us all. As you approach major cities, guitar and vocal music comes in and the other sound effects of the game quiet down. It overall creates a very sobering and solemn mood that I just can’t get enough of. It seems like it wouldn’t fit in a game where you use your own blood to defeat monsters and drink monster energy out of your canteen, but it totally does.

There is some mild product placement. Your canteen is full of Monster energy drink. Yes, that Monster. Making the player defecate shows you an ad for an AMC show. There’s also monster energy drinks in your safe room that increase your max stamina for a bit. I’m not entirely sure if the product placement was chosen to be there for artistic reasons (it’s surreal as all hell and helps to complement the confusing aspects of the game), but it’s very non-intrusive and can be ignored with little risk.

This game also has online components. Every time you build a structure in areas linked to the chiral network, other players can use, interact with and upgrade it so it can do more things. Other players can also give you likes, which again do nothing. Upgrading a zipline lets it span a larger distance, and upgrading a safe house lets it play music when people walk by. It really helps to build the motif of rebuilding America. There is, however, room for people to troll others: one example is a troll ladder to nowhere. There are a lot of those laying around mountains, so be on your guard.

Overall, Death Stranding is a fantastic game. It’s hard. It’s unforgiving. But the real thing that advances is the skill of the player. You make the deliveries. You go the distance. You do your job as the post-apocalyptic UPS man that America needs.

UPS Simulator 2019

By mmmintdesign source

Score: 10 out of 10
Christine Dodrill’s Game of the Year 2019


Orca

Permalink - Posted on 2019-11-01 00:00

Created with Affinity Designer on iPadOS using an iPad Pro and an Apple Pencil.


Blog Feature: Art Gallery

Permalink - Posted on 2019-11-01 00:00

I have just implemented support for my portfolio site to also function as an art gallery. See all of my posted art here.

I have been trying to get better at art for a while and I’m now at the level where I feel comfortable putting it on my portfolio. Let’s see how far this rabbit hole goes.


Also this is my 100th post! Yay!


Get Going: Hello, World!

Permalink - Posted on 2019-10-28 00:00

This post is a draft of the first chapter in a book I’m writing to help people learn the Go programming language. It’s aimed at people who understand the high-level concepts of programming, but haven’t had much practical experience with it. This is a sort of spiritual successor to my old Getting Started with Go post from 2015. A lot has changed in the ecosystem since then, as well as in my understanding of the language.

Like always, feedback is very welcome. Any feedback I get will be used to help make this book even better.

This article is a bit of an expanded version of what the first chapter will eventually be. I also plan to turn a version of this article into a workshop for my dayjob.

What is Go?

Go is a compiled programming language made by Google. It has a lot of features out of the box, including:

  • A static type system
  • Fast compile times
  • Efficient code generation
  • Parallel programming for free*
  • A strong standard library
  • Cross-compilation with ease (including WebAssembly)
  • and more!

* You still have to write code that can avoid race conditions, more on those later.
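As a tiny taste of the parallelism point (goroutines get a proper treatment later in the book), here’s a runnable snippet that starts four functions at once using only the standard library:

package main

import (
  "fmt"
  "sync"
)

func main() {
  var wg sync.WaitGroup
  for i := 0; i < 4; i++ {
    wg.Add(1)
    go func(n int) { // the go keyword runs this function concurrently
      defer wg.Done()
      fmt.Println("hello from goroutine", n)
    }(i)
  }
  wg.Wait() // wait for all four goroutines to finish
}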

Why Use Go?

Go is a very easy programming language to read and write. Consider this snippet:

func Add(x int, y int) int {
  return x + y
}

This function wraps integer addition. When you call it, it returns the sum of x and y.
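Here’s that snippet again as a complete, runnable program with a main function that calls Add:

package main

import "fmt"

func Add(x int, y int) int {
  return x + y
}

func main() {
  sum := Add(2, 3)
  fmt.Println(sum) // prints 5
}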

Installing Go

Linux

Installing Go on Linux systems is a very distribution-specific thing. Please see this tutorial on DigitalOcean for more information.

macOS

  • Go to https://golang.org/dl
  • Download the .pkg file
  • Double-click on it and go through the installer process

Windows

  • Go to https://golang.org/dl
  • Download the .msi file
  • Double-click on it and go through the installer process

Next Steps

These next steps are needed to set up your shell for Go programs.

Pick a directory you want to store Go programs and downloaded source code in. This is called your GOPATH. This is usually the go folder in your home directory. If for some reason you want another folder for this, use that folder instead of $HOME/go below.

Linux/macOS

This next step is unfortunately shell-specific. To find out what shell you are using, run the following command in your terminal:

$ env | grep SHELL

The name at the end of the path will be the shell you are using.
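For example, on a machine where bash is the login shell, the output might look something like this:

$ env | grep SHELL
SHELL=/bin/bash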

bash

If you are using bash, add the following lines to your .bashrc (Linux) or .bash_profile (macOS):

export GOPATH=$HOME/go
export PATH="$PATH:$GOPATH/bin"

Then reload the configuration by closing and re-opening your terminal.

fish

If you are using fish, create a file in ~/.config/fish/conf.d/go.fish with the following lines:

set -gx GOPATH $HOME/go
set -gx PATH $PATH "$GOPATH/bin"

zsh

If you are using zsh, add the following lines to your .zshrc:

export GOPATH=$HOME/go
export PATH="$PATH:$GOPATH/bin"

Windows

Follow the instructions here.

Installing a Text Editor

For this book, we will be using VS Code. Download and install it from https://code.visualstudio.com. The default settings will let you work with Go code.

Hello, world!

Now that everything is installed, let’s test it with the classic “Hello, world!” program. Create a folder in your home folder called Code. Create another folder inside that Code folder called get_going and create yet another subfolder called hello. Open a file in there with VS Code (Open Folder -> Code -> get_going -> hello) called hello.go and type in the following:

// Command hello is your first Go program.
package main

import "fmt"

func main() {
  fmt.Println("Hello, world!")
}

This program prints “Hello, world!” and then immediately exits. Here’s each of the parts in detail:

// Command hello is your first go program.
package main                   // Every go file must be in a package. 
                               // Package main is used for creating executable files.

import "fmt"                   // Go doesn't implicitly import anything. You need to 
                               // explicitly import "fmt" for printing text to 
                               // standard output.

func main() {                  // func main is the entrypoint of the program, or 
                               // where the computer starts executing your code
  fmt.Println("Hello, world!") // This prints "Hello, world!" followed by a newline
                               // to standard output.
}                              // This ends the main function

Now click over to the terminal at the bottom of the VS Code window and run this program with the following command:

$ go run hello.go
Hello, world!

go run compiles and runs the code for you, without creating a persistent binary file. This is a good way to run programs while you are writing them.

To create a binary, use go build:

$ go build hello.go
$ ./hello
Hello, world!

go build has the compiler create a persistent binary file and puts it in the same directory as you are running go from. Go will choose the filename of the binary based on the name of the .go file passed to it. These binaries are usually static binaries, or binaries that are safe to distribute to other computers without having to worry about linked libraries.
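Because Go makes cross-compilation easy (as mentioned in the feature list earlier), go build can also target other operating systems and architectures by setting two environment variables. For example, this should produce a Windows binary from any platform:

$ GOOS=windows GOARCH=amd64 go build hello.go
$ ls
hello.exe  hello.go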

Exercises

The following is a list of optional exercises that may help you understand more:

  1. Replace the “world” in “Hello, world!” with your name.
  2. Rename hello.go to main.go. Does everything still work?
  3. Read through the documentation of the fmt package.
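As a hint for exercise 3: the fmt package can do more than Println. For example, fmt.Printf does formatted printing, where verbs like %s are replaced by the arguments that follow:

fmt.Printf("Hello, %s!\n", "world") // prints: Hello, world!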

And that about wraps it up for Lesson 1 in Go. Like I mentioned before, feedback on this helps a lot.

Up next is an overview on data types such as integers, true/false booleans, floating-point numbers and strings.

I plan to post the book source code on my GitHub page once I have more than one chapter drafted.

Thanks and be well.


OVE-20191021-0001

Permalink - Posted on 2019-10-22 00:00

Within Security Advisory

Multiple vulnerabilities in the mysqljs API and code.

Security Warning Level: yikes/10

Summary

There are multiple issues exploitable by local and remote actors in mysqljs. These can cause application data leaks, database leaks, SQL injections, arbitrary code execution, and credential leaks among other things.

Mysqljs is unversioned, so it is very difficult, if not impossible, to tell how many users are affected by this and what users can do in order to ensure they are patched against these critical vulnerabilities.

Background

Mysqljs is a library intended to facilitate prototyping web applications and mobile applications using technologies such as PhoneGap or Cordova. These technologies allow developers to create a web application that gets packaged and presented to users as if it was a native application.

This library is intended to help with developers creating persistent storage for these applications.

Issues in Detail

There are at least seven vulnerabilities in this library; each of them is outlined below with a fairly vague level of detail.

mysql.js is NOT versioned

The only version information I was able to find are the following:

  • The Last-Modified date of Friday, March 11 2016
  • The ETag of 80edc3e5a87bd11:0

These header values correlate to a vulnerable version of the mysql.js file.

An entire copy of this file is embedded for purposes of explanation:

var MySql = {
    _internalCallback : function() { console.log("Callback not set")},
    Execute: function (Host, Username, Password, Database, Sql, Callback) {
        MySql._internalCallback = Callback;
        // to-do: change localhost: to mysqljs.com
        var strSrc = "http://mysqljs.com/sql.aspx?";
        strSrc += "Host=" + Host;
        strSrc += "&Username=" + Username;
        strSrc += "&Password=" + Password;
        strSrc += "&Database=" + Database;
        strSrc += "&sql=" + Sql;
        strSrc += "&Callback=MySql._internalCallback";
        var sqlScript = document.createElement('script');
        sqlScript.setAttribute('src', strSrc);
        document.head.appendChild(sqlScript);
    }
}

Fundamental Operation via Cross-Site Scripting

The code operates by creating a <script> element. The JavaScript source of this script is dynamically generated by the remote API server. This opens the door for many kinds of Cross-Site Scripting attacks.

Especially because:

Credentials Exposed over Plain HTTP

The script works by creating a <script> element pointed at an HTTP resource in order to facilitate access to the MySQL server. Line 6 shows that the API server in question is being queried over UNENCRYPTED HTTP.

var strSrc = "http://mysqljs.com/sql.aspx?";

Credentials and SQL Queries Are Not URL-Encoded Before Adding Them to a URL

Credentials and SQL queries are not URL-encoded before they are added to the strSrc URL. This means that a value containing & or = characters will be parsed as extra HTTP parameters, which enables the two issues below.
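For example, consider a hypothetical password value of hunter2&sql=DROP TABLE users. Plugged into the code above, the generated script URL becomes:

http://mysqljs.com/sql.aspx?Host=db&Username=admin&Password=hunter2&sql=DROP TABLE users&Database=app&sql=...

The Host, Username and Database values here are made up, but the effect is real: the password now carries its own sql parameter, and which query wins depends entirely on how the server parses duplicate parameters.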

Potential for SQL Injection from Malformed User Input

It appears this API works by people submitting plain text SQL queries. It is likely difficult to write these plain text queries in a way that avoids SQL injection attacks.

Potential for Arbitrary Code Execution

Combined with the previous issues, a SQL injection that inserts arbitrary JavaScript into the result will end up creating an arbitrary code execution bug. This could let an attacker execute custom JavaScript code on the page, which may have even more disastrous consequences depending on the usage of this library.

Server-Side Code has Unknown Logging Enabled

This means that user credentials and database results may be logged, stored and leaked by the mysql.js API server without user knowledge. The server that is running the API server may also do additional logging of database credentials and results without user knowledge.

Encourages Bad Practices

Mysql.js works by its API server dialing out an UNENCRYPTED connection to your MySQL server over the internet. This requires exposing your MySQL server to the internet. This means that user credentials are vulnerable to anyone who has packet capture abilities.

Mysql.js also encourages developers to commit database credentials into their application source code. Cursory searching of GitHub has confirmed this. I can only imagine there are countless other potential victims.

Security Suggestions

  • Do not, under any circumstances, allow connections to be made without the use of TLS (HTTPS).
  • Version the library.
  • Offer the source code of the API server to allow users to inspect it and ensure their credentials are not being stored by it.
  • Detail how the IIS server powering this service is configured, proving that it is not keeping unsanitized access logs.
  • Ensure all logging methods sanitize or remove user credentials.
  • URL-encode all values being sent as part of a URL (see the sketch after this list).
  • Do not have your service fundamentally operate as a Cross-Site Scripting attack.
  • Do not, under any circumstances, encourage developers to put database credentials in the source code of front-end web applications.
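To illustrate the URL-encoding suggestion above, here is a quick sketch of doing this correctly in Go with the standard library’s net/url package (the endpoint URL and field values are hypothetical):

package main

import (
  "fmt"
  "net/url"
)

func main() {
  // Build the query string with net/url so every value is escaped.
  v := url.Values{}
  v.Set("Host", "db.example.com")
  v.Set("Username", "admin")
  v.Set("Password", "hunter2&sql=DROP TABLE users") // & and = get escaped
  v.Set("sql", "SELECT 1")
  // The dangerous characters come out as %26, %3D and friends, so a
  // hostile value can no longer smuggle in extra parameters.
  fmt.Println("https://example.com/sql.aspx?" + v.Encode())
}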

In summary, we label this a solid yikes/10 in terms of security. It would be advisable for current users of this library to re-evaluate the life decisions that have led them down this path.

GReeTZ

Über thanks to jadr2ddude for helping with identifying the unfortunate scope of these massive security issues.

Hyper thanks to J for coming up with a viable GitHub search for potentially affected users.


Outsider Art and Anathema

Permalink - Posted on 2019-10-21 00:00

This was going to be a post about Urbit at first, but in the process of discussing my interest in writing something positive about it, I was warned by a few people that this was a Bad Idea. I was focusing purely on the technical side of it and how closely it implemented a concept called liquid software, but from what people were saying, it seemed like a creation that was spoiled by something outside of it, specifically the creator’s political views (of which I had little idea at the time).

As much as I will probably return to the original concept in the future with another post, this feels like something I had to address first.

DISCLAIMER: This post references projects and people that the mainstream considers controversial. This post is not an approval of these people’s views. I am focusing purely on how this correlates with how art is perceived, recognized and able to be admired. I realize that the people behind the projects I have cited have said things that, if taken seriously at a societal level, could hurt me and people like me. That is not the point of this; I am trying to learn how this art works so I can create my own in the future. If this is uncomfortable for you at any point, please close this browser tab and do something else.

Art

So, what is art?

This is a surprisingly hard question to answer. Most of the time though, I know art when I see it.

Art doesn’t have to follow conventional ideas of what most people think “art” is. Art can be just about anything that you can classify as art. As a conventional example, consider something like the Mona Lisa:

The Mona Lisa, the most famous painting in the world

People will accept this as art without much argument. It’s a painting, it obviously took a lot of skill and time to create. It is said that Leonardo Da Vinci (the artist of the painting) created it partially as a contribution to the state of the art of oil painting.

So that painting is art, and a lot of people would consider it art; so what would a lot of people not consider art? Here’s an example:

Untitled (Perfect Lovers) by Felix Gonzalez-Torres

This is Untitled (Perfect Lovers) by Felix Gonzalez-Torres. If you just take a look at it without context, it’s just two battery-operated clocks on a wall. Where is the expertise and the like that goes into this? This is just the result of someone buying two clocks from the store and putting them somewhere, right?

Let’s dig into the description of the piece:

Initially set to the same time, these identical battery-powered clocks will eventually fall out of sync, or may stop entirely. Conceived shortly after Gonzalez-Torres’s partner was diagnosed with AIDS, this work uses everyday objects to track and measure the inevitable flow of time. When one of the clocks stops or breaks, they can both be reset, thereby resuming perfect synchrony. In 1991, Gonzalez-Torres reflected, “Time is something that scares me… or used to. This piece I made with the two clocks was the scariest thing I have ever done. I wanted to face it. I wanted those two clocks right in front of me, ticking.”

And after reading that description, it’s impossible for me to say this image is not art. Even though it’s made up of ordinary objects, the art comes out in the way that the clocks’ eventual death relates to the eventual death of the author and their partner.

This art may be located on the fringes of what people consider “art”. So what else is on the fringes?

Outsider Art

For there to be “fringes” to the art landscape, there must be an “inside” and “outside” to it. In particular, the “outsider” art usually (but not always) contains elements and themes that are outside of the mainstream. Outsiders are therefore more free to explore ideas, concepts and ways of expression that defy cultural, spiritual or other norms. Logically, every major art style you know and love started as outsider art, before it was cool. Memes are also a form of outsider art, though they are gradually being accepted into the mainstream.

It’s very easy to find outsider art if you are looking for it: just fish for some on Twitter, 4chan or Reddit; you’ll find plenty of artists there who are placed firmly outside of the mainstream art community.

Computer Science

Computer science is a kind of art. It’s the art of turning contextual events into effects and state. It’s also the art of creating solutions for problems that could never be solved before. It’s also the science of how to connect millions of people across common protocols and abstractions that they don’t have to understand in order to use.

This is an art that connects millions and has shaped itself into an industry of its own. This art, like the rest of mainstream art, keeps evolving, growing and changing into something new; into a more ultimate and detailed expression of what it can be, as people explore the ways it can be created and presented. This art is also quite special because it’s not very limited by physical objects or expressions in material space. It’s an art that can evolve and change with the viewer.

But, since this is an art, there’s still an inside and an outside. Things on the inside are generally “safe” for people to admire, use and look at. The inside contains things like Linux, Docker, Kubernetes, Intel, C, Go, PHP, Ruby and other well-known and battle-proven tools.

The Outside

The outside, however, is where the real innovation happens. The outside is where people can really take a more critical look at what computing is, does or can be. These views can add up into fundamentally different ways of looking at computer science, much like changing a pair of glasses for another changes how you see the world around you.

As an example, consider TempleOS. It’s a work of outsider art by Terry Davis (1969-2018, RIP), but it’s also a fully functional operating system. It has a custom-built kernel, compiler, toolchain, userland, debugger, games, and documentation system, each integrated into everything else, in ways that could realistically not be done with how mainstream software is commonly developed.

Urbit is another example of this. It’s a fundamentally different way of looking at networked computing. Everything in Urbit is seamlessly interlinked with everything else, to the point that it can be surprising that a file you are working with actually lives on another computer. It makes software updates invisible to the user. It allows for the model of liquid software, or updates to a program flowing into users’ computers without the users having to care about the updates. Users don’t even notice the downtime.

As yet another example, consider Minecraft. As of the writing of this article, it is the video game with the most copies sold in human history. It is an open world block building game where the limits of what you can make are the limits of your imagination. It has been continuously updated, refined and improved from a minimal proof of concept into the game it is today.

The Seam

Consider this quote that comes into play a lot with outsider art:

Genius and insanity are differentiated only by context. One person’s genius is another person’s insanity.

  • Anonymous

These three projects are developed by people whom the mainstream has cast out. Terry Davis’ mental health issues and delusions about hearing the voice of God have tainted TempleOS to be that “weird bible OS” to the point where people completely disregard it. Urbit was partially created by a right-wing reactionary (Curtis Yarvin). He has been so ostracized that he cannot publicly talk about his work to the kind of people that would most directly benefit from learning about it. Curtis isn’t even involved with Urbit anymore, and his name is still somehow an irrevocable black mark on the entire thing. Minecraft was initially created by Notch, who recently had intro texts mentioning his name removed from the game after he said questionable things about transgender people.

Anathema

This “irrevocable” black mark has a name: Anathema. It refers to anything that is shunned by the mainstream. Outsiders that create outsider art may or may not be anathema to their respective mainstreams. This turns the art into a taboo, a curse, a stain. People no longer see an anathema as the art it is, but merely the worthless product of someone that society would probably rather forget if it had the chance.

I don’t really know how well this sits with me, personally. Outsiders have unique views of the world that can provide ideas that ultimately strengthen us all. Society’s role is to disseminate mainstream improvements to large groups, but real development happens at the personal level.

Does one bad apple really spoil the sociological bunch? Why does this happen? Have the political divides gotten so deeply entrenched into society that people really become beyond reproach? Isn’t this a recursive trap? How does someone redeem themselves so they are no longer an anathema? Is it possible for people who are anathema to redeem themselves? Why or why not? Is there room for forgiveness, or does the original sin doom the sinner eternally, much like it does in Catholicism?

Are the creations of an anathema outsider artist still art? Are they still an artist even though they become unable to share their art with others?


I don’t know. These are hard questions. I don’t really have much of a conclusion here. I don’t want to seem like I’m trying to prescribe a method of thinking here. I’m just sitting on the side and spouting out ideas to inspire people to think for themselves.

I’m just challenging you, the reader, to really think about what/who is and is not an anathema in your day-to-day life. Identify them. Understand where/who they are. Maybe even apply some compassion and attempt to understand their view and how they got there. I’m not saying to put yourself in danger, but just to be mindful of it.

Be well.


Special thanks to CelestialBoon, Grapz and MoonGoodGryph for proofreading and helping with this post. This would be a very different article without their feedback and input.


Don't Look Into the Light

Permalink - Posted on 2019-10-06 00:00

At a previous job, we maintained a system. This system powered a significant part of the core of how the product was actually used (as far as usage metrics reported). Over time, we had bolted something onto the side of this product to take actions based on the numbers the product was tracking.

After a few years of cycling through various people, this system was very hard to understand. Data would flow in on one end, go to an aggregation layer, then get sent to storage and another aggregation layer, and then eventually all of the metrics were calculated. This system was fairly expensive to operate and it was stressing the datastores it relied on beyond what other companies called theoretical limits. Oh, and to make things even more fun: the part that takes actions based on the data was barely keeping up with what it needed to do. It was supposed to run each of the checks once a minute and was running all of them in 57 seconds.

During a planning meeting we started to complain about the state of the world and how godawful everything had become. The undocumented (and probably undocumentable) organic nature of the system had gotten out of hand. We thought we could kill two birds with one stone and wanted to subsume another product that took action based on data, as well as create a generic platform to reimplement the older action-taking layer on top of.

The rules were set, the groundwork was laid. We decided:

  • This would be a Big Rewrite based on all of the lessons we had learned from the past operating the behemoth
  • This project would be future-proof
  • This project would have 75% test coverage as reported by CI
  • This project would be built with a microservices architecture

Those of you who have been down this road before probably have massive alarm bells going off in your head. This is one of those things that looks like a good idea on paper, can probably be passed off as a good idea to management and actually implemented; as happened here.

So we set off on our quest to write this software. The repo was created. CI was configured. The scripts were optimized to dump out code coverage as output. We strived to document everything on day 1. We took advantage of the datastore we were using. Everything was looking great.

Then the product team came in and noticed fresh meat. They soon realized that this could be a Big Thing to customers, and they wanted to get in on it as soon as possible. So we suddenly had our deadlines pushed forward and needed to get the whole thing into testing yesterday.

We set it up, set a trigger for a task, and it worked in testing. After a while of it consistently doing that with the continuous functional testing tooling, we told product it was okay to have a VERY LIMITED set of customers have at it.

That was a mistake. It fell apart the second customers touched it. We struggled to understand why. We dug into the core of the beast we had just created and managed to discover we made critical fundamental errors. The heart of the task matching code was this monstrosity of a cross join that took the other people on the team a few sheets of graph paper to break down and understand. The task execution layer worked perfectly in testing, but almost never in production.

And after a week of solid debugging (including making deals with other teams, satan, jesus and the pope to try and understand it), we had made no progress. It was almost as if there was some kind of gremlin in the code that was just randomly making things not fire if it wasn’t one of our internal users triggering it.

We had to apologize to the product team. Apparently a lot of the product team had to go on damage control as a result of this. I can only imagine the trickled-down impact this had on other projects internal to the company.

The lesson here is threefold. First, the Big Rewrite is almost a sure-fire way to ensure a project fails. Avoid that temptation. Don’t look into the light. It looks nice, it may even feel nice. Statistically speaking, it’s not nice when you get to the other side of it.

The second lesson is that making something microservices out of the gate is a terrible idea. Microservices architectures are not planned. They are an evolutionary result, not a fully anticipated feature.

Finally, don’t “design for the future”. The future hasn’t happened yet. Nobody knows how it’s going to turn out. The future is going to happen, and you can either adapt to it as it happens in the Now or fail to. Don’t make things overly modular, that leads to insane things like dynamically linking parts of an application over HTTP.

If you ‘future proof’ a system you build today, chances are when the future arrives the system will be unmaintainable or incomprehensible.
- John Murphy


This kind of advice is probably gonna feel like a slap to the face to a lot of people. People really put their heart into their work. It feeds egos massively. It can be very painful to have to say no to something someone is really passionate about. For some, it can even lead to changing their career plans.

But this is the truth of the matter as far as I can tell. This is generally what happens during the Big Rewrite centred around Best Practices for Cloud Native software.

The most successful design decisions are wholly and utterly subjective to every kind of project you come across. What works in system A probably won’t work perfectly in system B. Everything is its own unique snowflake. Embrace this.


Compile Stress Test

Permalink - Posted on 2019-10-03 00:00

This is an experiment in blogging. I am going to be putting my tweets and select replies one after another without commentary.

Meanwhile the same thing in Go took 5 minutes and I was able to run it on my desktop instead of having to rent a server from AWS.


The Cheese Dream

Permalink - Posted on 2019-10-01 00:00

I wake up on a bed I’ve never seen before. I look up at the white sky. Wait, the white sky? I look down at my blanket and it has a very weird, but distinct smell. Is it cheese? I break a part of it off and taste it. My blanket is made out of cheese. I feel around the bed and it feels slightly pliable, almost like the bed is made out of cheese too. I take off the blanket, tearing huge holes in it in the process.

I try to lean up but there’s something pulling between my shoulders when I do. With some force I hear a slight sucking and popping noise. My dorsal fin (I have a dorsal fin?) was stuck in the cheese bed. This is odd. What the heck is going on?

I get up and open the cheese drawer, at least my clothes aren’t cheese too. I put them on and take a few deep breaths. This is gonna be an experience.

Looking around, I see a field of mozzarella with cheese sticks for grass. There’s a molten cheese river with a cheese bridge crossing it, with a cheese town in the distance. There’s a cheese path at my feet leading to the cheese bridge. I call out to see if anyone is there; nobody answers.

I walk down the cheese path smelling the light scent of the cheese river in the distance. Every time I take a step I leave a footprint in the cheese path, it slowly reforms back into place after I stand on it.

When I got closer to the cheese town, there was a person made out of cheese crying while sitting on a cheese bench. I look down at him and ask him why he’s crying. He looks up at me and says “Our town is being threatened by the gravy monster! Please, please help us! Let me take you to elder Fromage to get more information!” He then got up, grabbed my hand gently and led me to the center of the cheese town, where the elder lived.

When we got to the elder’s house, he looked up at me. “So, Bleu here tells me you’re willing to help us fight the scourge of our town, the gravy monster. Can you help us? We’ve lost so many people, we barely have enough left to sustain ourselves.”

I’m still processing everything that is going on though. This was a cheese house, with everything in it made out of various kinds of cheese. There’s somehow a fire roaring in the cheese fireplace. What the actual hell? The cheese elder looks up at me pleadingly saying “Hello? You there?”

I shake my head and reply “Yes, I can help you defeat the gravy monster.”

I mean, let’s be honest, it’s not like there’s anything better to do at the moment.

The elder is elated. “Hooray! We might just finally have a chance to sustain our way of life!”

“But, how should I help you defeat the gravy monster?”

“All monsters have a weakness. Here, take this key. It leads to a shed just outside my house, there you will find the tools you need to defeat the gravy monster. Hurry! He always attacks during the mid-day and it’s almost time!”

I’m still kind of dumbstruck by this whole experience, so I take the cheese key, thank the elder and head out to his shed with Bleu leading me.

We arrive at the cheese shed and I put the cheese key into the cheese lock. The cheese lock opens and lets us into the cheese shed. There are two things in it: a rather large bowl (too large for it to be inside the cheese shed somehow) and something I can’t quite identify. Bleu is taken aback when he sees it. I ask him what he sees and he replies: “It’s the sacred fries! They’re only pulled out during emergencies!”

“Where is the gravy monster going to attack from again?”

“Down the brown path, take the stuff and come with me!”

So I put the stuff in my shoulder bag of holding and follow Bleu across the cheese town to the cheese path that has been stained brown with gravy. I have an idea.

“Bleu, do you have a shovel?”

“Oh, yeah! Let me go get it, I’ll be right back!”

He comes back with his shovel and I start digging a hole in the cheese to put the bowl in. I fill the bowl about halfway with the sacred fries. The gravy monster can be heard in the distance.

“GRAAAAAAVY MONSTER TIME!~”

I’m kind of awestruck again. It looks like the black goo monster from Star Trek, but it’s brown. It’s a monster made out of gravy. I quickly hide behind a cheese bush and grab some curds from it.

The monster slowly ambles up the cheese path, partially melting it as it steals forward towards my trap.

It sees a weird part of the cheese path. It gets confused. “What is this? I’ve never seen the path bend down like this before”.

I poke Bleu from inside the cheese bush. “Now’s our chance. Stand a few feet on the other side of the bowl. He’ll try to chase after you and get trapped.”

Bleu looked at me like I had lobsters crawling out of my eyes. “What?”

“No, seriously, watch.”

”…okay…”

Bleu jumps out of the cheese bush and stands ahead of the monster. The monster laughs. “I knew I’d find you, cheeseling! You’ll pair nicely with my wine at home. Now be a good cheeseling and come back with me to my home!”

“No! You’ll have to grab me yourself!”

The gravy monster lets out a roar and attempts to run towards Bleu and grab him. This would have worked if the monster didn’t fall into the bowl and on top of the sacred fries.

The gravy monster lets out a cry in pain. He didn’t expect the fries to be this absorbent. The fries are absorbing the flavor of the gravy monster.

I jump out of the bush and throw a mix of cheese curds and fries on top of the monster, with each handful the gravy monster gets quieter and quieter. Then everything got really still and quiet. The sound of the cheese river was audible again.

Bleu, looking like he just soiled his cheese pants, was elated. “You did it!~ You saved the town!~ You’re a hero!~”

I took a minute to re-evaluate what had happened. I just saved a town by making poutine? What? Just, what?

I grabbed another bowl out of my bag and served myself some poutine, it was some of the best poutine I’ve ever had.

“Hey, this is pretty good Bleu, have some!”

Bleu took a bite and his cheese eyes went wide open. Without another word, he ran towards the cheese town to tell the people of the feast waiting for them. He came back with the remainder of the town and we had a feast of poutine, declaring the day “Poutine Day” for the rest of time.

After some time celebrating, I woke up in my bed. I was really confused and having trouble processing what had just happened to me. I was also craving poutine.


Based on this twitter thread.


mapatei

Permalink - Posted on 2019-09-22 00:00

I’ve been working on a project in the Conlang Critic Discord with some friends for a while now, and I’d like to summarize what we’ve been doing and why here. We’ve been working on creating a constructed language (conlang) with the end goal of each of us going off and evolving it in our own separate ways. Our goal in this project is really to create a microcosm of the natural process of language development.

Why

One of the questions you, as the reader, might be asking is “why?” To which I say “why not?” This is a tool I use to define, explore and challenge my fundamental understanding of reality. I don’t expect anything I do with this tool to be useful to anyone other than myself. I just want to create something by throwing things at the wall and seeing what makes sense for me. If other people like it or end up benefitting from it, I consider that icing on the cake.

A language is a surprisingly complicated thing. There’s lots of nuance and culture encoded into it, not even counting things like metaphors and double-meanings. Creating my own languages lets me break that complicated thing into its component parts, then use that understanding to help increase my knowledge of natural languages.

So, like I mentioned earlier, I’ve been working on a conlang with some friends, and here’s what we’ve been creating.

mapatei grammar

mapatei is the language spoken by a primitive culture of people we call maparaja (people of the language). It is designed to be very simple to understand, speak and learn.

Phonology

The phonology of mapatei is simple. It has 5 vowels and 17 consonants. The sounds are written mainly in the International Phonetic Alphabet.

Vowels

The vowels are:

International Phonetic Alphabet   Written as   Description / Bad Transcription for English speakers
a    a    unstressed “ah”
aː   ā    stressed “AH”
e    e    unstressed “ayy”
eː   ē    stressed “AYY”
i    i    unstressed “ee”
iː   ī    stressed “EE”
o    o    unstressed “oh”
oː   ō    stressed “OH”
u    u    unstressed “ooh”
uː   ū    stressed “OOH”

The long vowels (anything with the funny looking bar/macron on top of them) also mark for stress, or how “intensely” they are spoken.

Consonants

The consonants are:

International Phonetic Alphabet   Written as   Description / Bad Transcription for English speakers
m     m        the m in mother
n     n        the n in money
ᵐb    mb       a combination of the m in mother and the b in baker
ⁿd    nd       as in handle
ᵑg    ng       as in finger
p     p        the p in spool
t     t        the t in stool
k     k        the k in school
pʰ    ph       the ph in pool
tʰ    th       the th in tool
kʰ    kh       the kh in cool
ɸ~f   f        the f in father
s     s        the s in sock
w     w        the w in water
l     l        the l in lie
j     j or y   the y in young
r~ɾ   r        the r in rhombus

Word Structure

The structure of words is based on syllables. Each syllable is an optional consonant followed by a mandatory vowel. There can be up to two consecutive vowels in a word, but each vowel gets its own syllable. If a word is stressed, it can only ever be stressed on the first syllable.

Here are some examples of words and their meanings (the periods in the words mark the barriers between syllables):

mapatei word International Phonetic Alphabet Meaning
ondoko o.ⁿdo.ko pig
māo maː.o cat
ameme a.me.me to kill/murder
ero e.ro can, to be able to
ngōe ᵑgoː.e I/me
ke ke cold
ku ku fast

There are only a few parts of speech: nouns, pronouns, verbs, determiners, numerals, prepositions and interjections.

Nouns

Nouns describe things, people, animals, animate objects (such as plants or body parts) and abstract concepts (such as days). Nouns in mapatei are divided into four classes (this is similar to how languages like French handle the concept of grammatical gender): human, animal, animate and inanimate.

Here are some examples of a few nouns, their meaning and their noun class:

mapatei word International Phonetic Alphabet Class Meaning
okha o.kʰa human female human, woman
awu a.wu animal dog
fōmbu (ɸ~f)oː.ᵐbu animate name
ipai i.pa.i inanimate salt

Nouns can also be singular or plural. Plural nouns are marked with the -ja suffix. See some examples:

singular mapatei word plural mapatei word International Phonetic Alphabet Meaning
ra raja ra.ja person / people
meko mekoja me.ko.ja ant / ants
kindu kinduja kiː.ⁿdu.ja liver / livers
fīfo fīfoja (ɸ~f)iː.(ɸ~f)o.ja moon / moons

Pronouns

Pronouns are nouns that replace a noun or noun phrase with a special meaning. Examples of pronouns in English are words like I, me, or you. They avoid repeating people’s names or spelling out the identity of the speaker versus the listener.

Pronouns singular plural Rough English equivalent
1st person ngōe tha I/me, we
2nd person sīto khē you, y’all
3rd person human foli he/she, they
3rd person animal mi wāto they
3rd person animate sa wāto they
3rd person inanimate li wāto they

Verbs

Verbs describe actions, existence or occurrence. Verbs in mapatei are conjugated for tense (when the thing being described happens relative to the moment of speaking) and for the number of the subject of the sentence.

Verb endings:

Verbs singular plural
past -fu -phi
present (no suffix) -ja
future māu $verb māu $verb-ja

For example, consider the verb ōwo (oː.wo) for to love:

ōwo - to love singular plural
past ōwofu ōwophi
present ōwo ōwoja
future māu ōwo māu ōwoja

Determiners

Determiners are words that function as both adjectives and adverbs do in English. A determiner gives more detail or context about a noun/verb. Determiners follow the things they describe, as in French or Toki Pona. Determiners must agree with the noun they are describing in class and number.

Determiners singular plural
human -ra -fo
animal -mi -wa
animate -sa -to
inanimate -li -wato

See these examples:

a big human: ra sura

moving cats: māoja wuwa

a short name: fōmbu uwiisa

long days: lundoseja khāngandiwato

Also consider the declensions for uri (u.ri), or dull:

uri singular plural
human urira urifo
animal urimi uriwa
animate urisa urito
inanimate urili uriwato

Numerals

There are two kinds of numerals in mapatei, cardinal (counting) and ordinal (ordering) numbers. Numerals are always in seximal (base six).

cardinal (base 6) mapatei
0 fangu
1 āre
2 mawo
3 piru
4 kīfe
5 tamu
10 rupe
11 rupe jo āre
12 rupe jo mawo
13 rupe jo piru
14 rupe jo kīfe
15 rupe jo tamu
20 mawo rupe
30 piru rupe
40 kīfe rupe
50 tamu rupe
100 theli

Ordinal numbers are formed by reduplicating (or copying) the first syllable of cardinal numbers and decline similarly for case. Remember that only the first syllable can be stressed, so any reduplicated syllable must become unstressed.

ordinal (base 6) mapatei
0th fangufa
1st ārea
2nd mawoma
3rd pirupi
4th kīfeki
5th tamuta
10th ruperu
11th ruperu jo ārea
12th ruperu jo mawoma
13th ruperu jo pirupi
14th ruperu jo kīfeki
15th ruperu jo tamuta
20th mawoma ruperu
30th pirupi ruperu
40th kīfeki ruperu
50th tamuta ruperu
100th thelithe

Cardinal numbers are optionally declined for case when used as determiners with the following rules:

Numeral Class suffix
human -ra
animal -mi
animate -sa
inanimate -li

Numeral declension always happens last, so the inanimate nifth (seximal 100 or decimal 36) is thelitheli.

Here’s a few examples:

three pigs: ondoko pirumi

the second person: ra mawomara

one tree: kho āresa

the nifth day: lundose thelitheli
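
Because the cardinal system is purely positional, it is easy to mechanize. Here is a hypothetical Nim sketch (not part of the mapatei codebase; the proc name is made up) that names the cardinal numbers covered by the table above:

import strutils

const digits = ["fangu", "āre", "mawo", "piru", "kīfe", "tamu"]

proc cardinal(n: int): string =
  ## Hypothetical: names 0..36 (seximal 0..100) per the table above.
  assert n in 0..36
  if n == 36: return "theli"
  let
    sixes = n div 6
    ones = n mod 6
  var parts: seq[string]
  if sixes == 1:
    parts.add "rupe"
  elif sixes > 1:
    parts.add digits[sixes] & " rupe"
  if ones > 0:
    parts.add(if sixes > 0: "jo " & digits[ones] else: digits[ones])
  elif sixes == 0:
    parts.add digits[0] # zero is just fangu
  result = parts.join(" ")

assert cardinal(9) == "rupe jo piru" # seximal 13
assert cardinal(12) == "mawo rupe" # seximal 20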

Prepositions

Prepositions mark any other details about a sentence. In essence, they add information to verbs that would otherwise lack that information.

fa: with, adds an auxiliary possession to a sentence

ri: possession, sometimes indicates ownership

I eat with my wife: wā ngōe fa epi ri ngōe

ngi: the following phrase is on top of the thing being described

ka: then (effect)

ēsa: if/whether

If I set this dog on the rock, then the house is good: ēsa adunga ngōe pā āwu ngi, ka iri sare eserili

Interjections

Interjections usually act like vocatives and have free word order. As determiners they change meta-properties of the noun/verb, such as negation:

wo: no, not

English mapatei
No! Don’t eat that! wo! wā wo ūto
I don’t eat ants wā wo ngōe mekoja

Word Order

mapatei has a VSO word order for sentences. This means that the verb comes first, followed by the subject, and then the object.

English mapatei gloss
the/a child runs kepheku rako kepheku.VERB rako.NOUN.human
The child gave the fish a flower indofu rako ora āsu indo.VERB.past rako.NOUN.human ora.NOUN.animal āsu.NOUN.animate
I love you ōwo ngōe sīto ōwo.VERB ngōe.PRN sīto.PRN
I do not want to eat right now wā wo ngōe oko mbeli wā.VERB wo.INTERJ ngōe.PRN oko.PREP mbeli.DET.singular.inanimate
I have a lot of love, and I’m happy about it urii ngōe erua fomboewato, jo iri ngōe phajera lo li urii.VERB ngōe.PRN eruaja.NOUN.plural.inanimate fomboewato.DET.plural.inanimate, jo.CONJ iri.VERB ngōe.PRN phajera.DET.singular.human lo.PREP li.PRN
The tree I saw yesterday is gone now pōkhufu kho ngōe, oko iri māndosa mbe pōkhu.VERB.past kho.NOUN.animate ngōe.PRN, oko.PREP iri.VERB māndo.DET.animate mbe.PRN

Code

As I mentioned earlier, I’ve been working on some code here to handle things like making sure words are valid. This includes a word validator which I am very happy with.

Words are made up of syllables, which are made up of letters. In code:

type
  Letter* = object of RootObj
    case isVowel*: bool
    of true:
      stressed*: bool
    of false: discard
    value*: string

  Syllable* = object of RootObj
    consonant*: Option[Letter]
    vowel*: Letter
    stressed*: bool

  Word* = ref object
    syllables*: seq[Syllable]

Letters are parsed out of strings using this code. It’s an iterator, so users have to manually loop over it:

import unittest
import mapatei/letters

let words = ["pirumi", "kho", "lundose", "thelitheli", "fōmbu"]

suite "Letter":
  for word in words:
    test word:
      for l in word.letters:
        discard l

This test loops over the given words (taken from the dictionary and enlightening test cases) and makes sure that letters can be parsed out of them.

Next, syllables are made out of letters, so syllables are parsed using a finite state machine with the following transition rules:

Present state Next state for vowel Next state for consonant Next state for end of input
Init Vowel/stressed Consonant Illegal
Consonant Vowel/stressed End Illegal
Vowel End End End

Some other hacking was done in the code, but otherwise it is a fairly literal translation of that truth table.
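
That parser is not shown here, but a hypothetical, fairly literal translation of the table could look like this (a sketch using the types above; the real code in the repository differs in its details):

import options

type SyllableState = enum
  Init, SawConsonant, SawVowel

proc toSyllables(letters: seq[Letter]): seq[Syllable] =
  var
    state = Init
    cur: Syllable
  for l in letters:
    case state
    of Init:
      if l.isVowel:
        cur = Syllable(vowel: l, stressed: l.stressed)
        state = SawVowel
      else:
        cur = Syllable(consonant: some(l))
        state = SawConsonant
    of SawConsonant:
      if not l.isVowel:
        raise newException(ValueError, "illegal: two consonants in a row")
      cur.vowel = l
      cur.stressed = l.stressed
      state = SawVowel
    of SawVowel:
      # any letter after a vowel closes the current syllable
      result.add cur
      if l.isVowel:
        cur = Syllable(vowel: l, stressed: l.stressed)
      else:
        cur = Syllable(consonant: some(l))
        state = SawConsonant
  case state
  of SawVowel: result.add cur # flush the final syllable
  else: raise newException(ValueError, "illegal: word ends mid-syllable")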

And finally we can check to make sure that each word only has a head-initial stressed syllable:

type InvalidWord* = object of Exception

proc parse*(word: string): Word =
  var first = true
  result = Word()

  for syll in word.syllables:
    if not first and syll.stressed:
      raise newException(InvalidWord, "cannot have a stressed syllable here")
    if first:
      first = false
    result.syllables.add syll

And that’s enough to validate every word in the dictionary. Future extensions will include automatic conjugation/declension as well as going from a stream of words to an understanding of sentences.
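
As a rough idea of what that conjugation code could look like, here is a hypothetical sketch that just applies the verb-ending table from earlier (none of these names exist in the codebase yet):

type
  Tense = enum tPast, tPresent, tFuture
  Number = enum nSingular, nPlural

proc conjugate(verb: string, tense: Tense, number: Number): string =
  ## Applies the ending table: -fu/-phi in the past, bare/-ja in the
  ## present, and the māu particle plus the present forms in the future.
  case tense
  of tPast:
    result = verb & (if number == nSingular: "fu" else: "phi")
  of tPresent:
    result = if number == nSingular: verb else: verb & "ja"
  of tFuture:
    result = "māu " & conjugate(verb, tPresent, number)

assert conjugate("ōwo", tPast, nPlural) == "ōwophi"
assert conjugate("ōwo", tFuture, nSingular) == "māu ōwo"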

Useful Resources Used During This

Creating a language from scratch is surprisingly hard work. These resources helped me a lot though.


Thanks for reading this! I hope this blogpost helps to kick off mapatei development into unique and more fleshed out derivative conlangs. Have fun!

Special thanks to jan Misali for encouraging this to happen.


When Then Zen: Wonderland Immersion

Permalink - Posted on 2019-09-12 00:00, modified on 0001-01-01 00:00

When Then Zen: Wonderland Immersion

Wonderland immersion is a topic that has interested me for years. I have only recently started to get better at it, and I would like to document the methods I have been using for this. A wonderland (blame someone named Alice for that name) is a mental world, but more persistent than usual “imagination”. It can be as alive or as dead as you want. My wonderland has a rather large (40km x 40km) island on it that is full of varied locales.

At a high level, the approach I am using for this is based on philosophical metaphysical analysis, or in short answering two questions for the world and various things in it:

  1. What is there?
  2. What is it like?

The method I have found for doing this fairly repeatably is a combination of two techniques I have found elsewhere:

  • 5 senses visualization for the scene you are in to ground yourself
  • Semantic feature analysis for randomly selected items from that visualization

As an example, consider this. This kind of detail is what you’d be looking for.

Breaking it down further though, let’s consider a scene where you are sitting at a table in a cold, metal chair.

Five Senses Visualization

The five senses visualization for this could look something like this:

  • 5 things you can see
    • The table
    • The salt and pepper shakers on the table
    • The plate in front of me
    • My reflection in the plate
    • The empty chair in front of me
  • 4 things you can touch
    • Silverware
    • Napkin dispenser
    • Your phone on the table
    • Empty water glass
  • 3 things you can hear
    • Other people in the restaurant
    • The cooks in the distance
    • The door opening and closing occasionally, making the bell ring to let waitstaff know someone needs to be seated
  • 2 things you can smell
    • Baked chicken from the kitchen
    • Grilled salmon from the next table over
  • 1 thing you can taste
    • The soda in my mouth

Semantics Analysis

Group, Use, Action, Properties, Location, Association

A lot of the group categorization depends on your own personal philosophical outlooks. If you are unsure how to assign a group, start by using the most generic adjective possible to describe it.

The salt and pepper shakers

Group: thing, container of smaller things, but a thing made up of two parts and smaller things
Use: contains spices, these are used to flavor food with common mild flavorings
Action: No inherent action unless acted upon, normally shaken to maximize the amount of seasoning added to the dish in question
Properties: palmable, makes a noise when you shake them, light, small, easy to manipulate, easy to refill if needed
Location: The table in front of me, it doesn’t make sense for these food containers to be elsewhere
Association: togetherness, memories of Blue’s Clues having the salt and pepper characters married, my mother collecting salt and pepper shakers

Plate in front of me

Group: thing
Use: holds food as a staging area for being eaten
Action: no inherent action, but can break into shards that can cut badly
Properties: ceramic, white, flat, circular
Location: the table in front of me, the kitchen dishwasher, staging for waitstaff
Association: food is coming, but patience is required


If you want to really train wonderland immersion, I suggest doing at least one of these full descriptions per day. Doing more will help you progress “faster” (if that is what you desire for whatever reason). Don’t overstimulate or overwhelm yourself. It can be intense the first few times, but it gets easier over time. I personally do them before I go to sleep or just after I wake up; I have found those times are the most free and it is easiest to make myself alone during them. Learning how to do this in public or around other people may be desirable based on the circumstances of your life situation. Be smart; don’t do this when you are otherwise distracted or busy.

Something that may help is to keep in mind how long it takes to walk to different places as you walk around your daily life. See how long it takes to go across the street, or from the street corner to a store, etc. You can use these rough estimates to help you better scale places in your world.

I would suggest setting calendar reminders for doing it at least once a day, depending on when fits best into your daily schedule. Remember that if a machine remembers it for you, you don’t forget to do it (as easily) because the machine reminds you about it. Be sure to set your calendar reminder to trigger after nightly do-not-disturb mode if relevant.

Don’t be afraid to use tools like a meditation timer to limit your sessions doing this, especially if you are feeling like you need to ‘get back’, are ‘missing out’ or are neglecting external duties. If you are using a calendar app to schedule the time, then set your meditation timer for the length of the event. Thirty minutes is a good place to start, but adjust this number as things change for you.

I hope this can help. Take the numbers and sense ordering as suggestions, and please do experiment with how many entries each sense gets. Play around with this; it is your imaginary world after all. I suggest doing semantic feature analysis on at least three items per visualization session. If you need a place to blog about it, I suggest write.as. If you have questions, feel free to contact me and ask away. I’m happy to help when I can.

Be well, Creator.


This is a slightly edited version of this article.


The Cult of Kubernetes

Permalink - Posted on 2019-09-07 00:00, modified on 0001-01-01 00:00

The Cult of Kubernetes

or: How I got my blog onto it with autodeployment via GitHub Actions.

The world was once a simple place. Things used to make sense, or at least there weren’t so many layers that it became difficult to tell what the hell is going on.

Then complexity happened. This is a tale of how I literally recreated this meme:

This is how I deployed my blog (the one you are reading right now) to Kubernetes.

The Old State of the World

Before I deployed my blog to Kubernetes, I used Dokku, as I had been for years. Dokku is great. It emulates most of the Heroku “git push; don’t care” workflow, but on your own server that you can self-manage.

This is a blessing and a curse.

The real advantage of managed services like Heroku is that you literally just HAND OFF operations to Heroku’s team. This is not the case with Dokku. Unless you pay someone a lot of money, you are going to have to manage the server yourself. My Dokku server was unmanaged, and I run many apps on it (this listing was taken after I started to move apps over):

=====> My Apps
bsnk
cinemaquestria
fordaplot-backup
graphviz.christine.website
identicond
ilo-kesi
johaus
maison
olin
printerfacts
since
tulpaforce.tk
tulpanomicon

This is enough apps (plus 5 more that I’ve already migrated) that it really doesn’t make sense to pay for something like Heroku; nor does it really make sense to use the free tier either.

So, I decided that it was time for me to properly learn how to Kubernetes, and I set off to create a cluster via DigitalOcean managed Kubernetes.

The Cluster

I decided it would be a good idea to create my cluster using Terraform, mostly because I wanted to learn how to use it better. I use Terraform at work, so I figured this would also be a way to level up my skills in a mostly sane environment.

I have been creating and playing with a small Terraform wrapper tool called dyson. This tool is probably overly simplistic and is written in Nim. With the config in ~/.config/dyson/dyson.ini, I can simplify my Terraform usage by moving my secrets out of the Terraform code directly. I also avoid having my API tokens exposed in my shell to avoid accidental exposure of the secrets.
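
Under the hood there is not much magic to this: read the secrets out of the ini file and hand them to Terraform as environment variables for just that one child process. A hypothetical sketch of the core idea (this is not dyson’s actual source, and the section/key names are made up):

import os, osproc, parsecfg

proc runTerraform(args: openArray[string]): int =
  # Load secrets from the ini file instead of the shell environment.
  let cfg = loadConfig(getHomeDir() / ".config" / "dyson" / "dyson.ini")
  # The DigitalOcean Terraform provider picks its token up from the environment.
  putEnv("DIGITALOCEAN_TOKEN", cfg.getSectionValue("secrets", "doToken"))
  let p = startProcess("terraform", args = args,
                       options = {poParentStreams, poUsePath})
  result = p.waitForExit()
  p.close()

when isMainModule:
  quit runTerraform(["plan"])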

Dyson is very simple to use:

$ dyson
Usage:
  dyson {SUBCMD}  [sub-command options & parameters]
where {SUBCMD} is one of:
  help         print comprehensive or per-cmd help
  apply        apply Terraform code to production
  destroy      destroy resources managed by Terraform
  env          dump envvars
  init         init Terraform
  manifest     generate a somewhat sane manifest for a kubernetes app based on the arguments.
  plan         plan a future Terraform run
  slug2docker  converts a heroku/dokku slug to a docker image

dyson {-h|--help} or with no args at all prints this message.
dyson --help-syntax gives general cligen syntax help.
Run "dyson {help SUBCMD|SUBCMD --help}" to see help for just SUBCMD.
Run "dyson help" to get *comprehensive* help.

So I wrote up my config:

# main.tf
provider "digitalocean" {}

resource "digitalocean_kubernetes_cluster" "main" {
  name    = "kubermemes"
  region  = "${var.region}"
  version = "${var.kubernetes_version}"

  node_pool {
    name       = "worker-pool"
    size       = "${var.node_size}"
    node_count = 2
  }
}
# variables.tf
variable "region" {
  type    = "string"
  default = "nyc3"
}

variable "kubernetes_version" {
  type    = "string"
  default = "1.15.3-do.1"
}

variable "node_size" {
  type    = "string"
  default = "s-1vcpu-2gb"
}

and ran it:

$ dyson plan
<... many lines of plan output ...>
$ dyson apply
<... many lines of apply output ...>

Then I had a working but mostly unconfigured Kubernetes cluster.

Configuration

This is where things started to go downhill. I wanted to do a few things with this cluster so I could consider it “ready” for deploying applications.

I wanted to do the following:

After a lot of trial, error, pain, suffering and the like, I created this script which I am not pasting here. Look at it if you want to get a streamlined overview of how to set these things up.

Now that all of this is set up, I can deploy an example app with a manifest that looks something like this:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
  annotations:
    external-dns.alpha.kubernetes.io/hostname: exanple.within.website
    external-dns.alpha.kubernetes.io/ttl: "120" #optional
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
    
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Henlo this are an exanple deployment
          
---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - exanple.within.website
    secretName: prod-certs
  rules:
  - host: exanple.within.website
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80

It was about this time when I wondered if I was making a mistake moving off of Dokku. Dokku really does a lot to abstract almost everything involved with nginx away from you, and it really shows.

However, as a side effect of everything being so declarative and Kubernetes really not assuming anything, you have a lot more freedom to do basically anything you want. You don’t have to have special magic names for tasks like web or worker like you do in Heroku/Dokku. You just have a deployment that belongs to an “app” that just so happens to expose a TCP port that just so happens to have a correlating ingress associated with it.

Lucky for me, most of the apps I write fit into that general format, and the ones that don’t can mostly use the same format without the ingress.

So I templated that sucker as a subcommand in dyson. This lets me do commands like this:

$ dyson manifest \
      --name=hlang \
      --domain=h.christine.website \
      --dockerImage=docker.pkg.github.com/xe/x/h:v1.1.8 \
      --containerPort=5000 \
      --replicas=1 \
      --useProdLE=true | kubectl apply -f-

And the service gets shunted into the cloud without any extra effort on my part. This also automatically sets up Let’s Encrypt, DNS and other things that were manual in my Dokku setup. This saves me time for when I want to go add services in the future. All I have to do is create a docker image somehow, identify what port should be exposed, give it a domain name and number of replicas and just send it on its merry way.
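
The manifest subcommand itself does not need much machinery either; a strformat template with the flag values spliced in covers most of it. A hypothetical, heavily trimmed sketch of the idea (the real template also renders the Service and Ingress shown earlier):

import strformat

proc deploymentFor(name, dockerImage: string,
                   containerPort, replicas: int): string =
  # Hypothetical trimmed template: just the Deployment, none of the
  # Service/Ingress/TLS details.
  fmt"""apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name}
spec:
  replicas: {replicas}
  selector:
    matchLabels:
      app: {name}
  template:
    metadata:
      labels:
        app: {name}
    spec:
      containers:
      - name: {name}
        image: {dockerImage}
        ports:
        - containerPort: {containerPort}
"""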

GitHub Actions

This does however mean that deployment is no longer as simple as “git push; don’t care”. This is where GitHub Actions come into play. They claimed to have the ability to run full end-to-end CI/CD on my applications.

I have been using them for a while for CI on my website and have been pleased with them, so I decided to give it a try and set up continuous deployment with them.

As the commit log for the deployment manifest shows, this took a lot of trial and error. One of the main sources of problems here was that GitHub Actions had recently had a lot of changes made to its configuration and usage compared to when it was in private beta. This included changing the configuration schema from HCL to YAML.

Of course, all of the documentation (outside of GitHub’s quite excellent documentation) was out of date and wrong. I tried following a tutorial by DigitalOcean themselves on how to do this exact thing I wanted to do, but it referenced the old HCL syntax for GitHub Actions and did not work. To make things worse, examples in the marketplace READMEs simply DID NOT WORK because they were written for the old GitHub Actions syntax.

This was frustrating to say the least.

After trying to make them work anyways with a combination of the “Use Latest Version” button in the marketplace, prayer and gratuitous use of the with.args field in steps, I gave up and decided to manually download the tools I needed from their upstream providers and execute them by hand.

This is how I ended up with this monstrosity:

- name: Configure/Deploy/Verify Kubernetes
  run: |
    # Grab doctl, authenticate, and pull down the cluster's kubeconfig
    curl -L https://github.com/digitalocean/doctl/releases/download/v1.30.0/doctl-1.30.0-linux-amd64.tar.gz | tar xz
    ./doctl auth init -t $DIGITALOCEAN_ACCESS_TOKEN
    ./doctl kubernetes cluster kubeconfig show kubermemes > .kubeconfig

    # Grab the latest stable kubectl, apply the manifest and wait for the rollout
    curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
    chmod +x kubectl
    ./kubectl --kubeconfig .kubeconfig apply -n apps -f deploy.yml
    sleep 2
    ./kubectl --kubeconfig .kubeconfig rollout -n apps status deployment/christinewebsite
  env:
    DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DIGITALOCEAN_TOKEN }}

I am almost certain that I am doing it wrong here. I don’t know how robust this is, and I’m very sure that this can and should be done another way; but this is the only thing I could get working (for some definition of “working”).

EDIT: it got fixed, see below


Now when I git push things to the master branch of my blog repo, it will automatically get deployed to my Kubernetes cluster.

If you work at DigitalOcean and are reading this post, please get someone to update this tutorial and the README of this repo. The examples listed DO NOT WORK for me because I was not in the private beta of GitHub Actions. It would also be nice if you had better documentation on how to use your premade action for usecases like mine. I just wanted to download the Kubernetes configuration file and run apply against a YAML file.

EDIT: The above complaint has been fixed! See here for the simpler way of doing things.

Thanks for reading, I hope this was entertaining. Be well.


How to Send Email with Nim

Permalink - Posted on 2019-08-28 00:00, modified on 0001-01-01 00:00

How to Send Email with Nim

Nim offers an smtp module, but it is a bit annoying to use out of the box. This blogpost hopes to be a mini-tutorial on the basics of how to use the smtp library and give developers best practices for handling outgoing email in ways that Google or iCloud will accept.

SMTP in a Nutshell

SMTP, or the Simple Mail Transfer Protocol, is the backbone of how email works. It’s a very simple line-based protocol, and there are wrappers for it in almost every programming language. Usage is pretty simple:

  • The client connects to the server
  • The client authenticates itself with the server
  • The client signals that it would like to create an outgoing message to the server
  • The client sends the raw contents of the message to the server
  • The client ends the message
  • The client disconnects

Unfortunately, the devil is truly in the details here. There are a few things that absolutely must be present in your emails in order for services like GMail to accept them. They are:

  • The From header specifying where the message was sent from
  • The Mime-Version that your code is using (if you aren’t sure, put 1.0 here)
  • The Content-Type that your code is sending to users (probably text/plain)
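
Putting the flow and these headers together, a minimal send looks something like this (the server, credentials and addresses here are placeholders):

import net, smtp

let msg = createMessage("Hi there", "Hello from Nim!",
  @["recipient@example.com"], @[], [
    ("From", "Sender Name <sender@example.com>"),
    ("MIME-Version", "1.0"),
    ("Content-Type", "text/plain"),
  ])

var client = newSmtp(useSsl = true)
client.connect("smtp.example.com", Port(465))
client.auth("sender@example.com", "hunter2")
client.sendMail("sender@example.com", @["recipient@example.com"], $msg)
client.close()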

For a more complete example, let’s create a Mailer type and a constructor:

# mailer.nim
import asyncdispatch, logging, smtp, strformat, strutils

type Mailer* = object
  address: string
  port: Port
  myAddress: string
  myName: string
  username: string
  password: string
  
proc newMailer*(address, port, myAddress, myName, username, password: string): Mailer =
  result = Mailer(
    address: address,
    port: port.parseInt.Port,
    myAddress: myAddress,
    myName: myName,
    username: username,
    password: password,
  )

And let’s write a mail proc to send out email:

proc mail(m: Mailer, to, toName, subject, body: string) {.async.} =
  let
    toList = @[fmt"{toName} <{to}>"]
    msg = createMessage(subject, body, toList, @[], [
      ("From", fmt"{m.myName} <{m.myAddress}"),
      ("MIME-Version", "1.0"),
      ("Content-Type", "text/plain"),
    ])

  var client = newAsyncSmtp(useSsl = true)
  await client.connect(m.address, m.port)
  await client.auth(m.username, m.password)
  await client.sendMail(m.myAddress, toList, $msg)
  info "sent email to: ", to, " about: ", subject
  await client.close()

Breaking this down, you can clearly see the parts of the SMTP connection as I laid out before. The Mailer creates a new transient SMTP connection, authenticates with the remote server, sends the properly formatted email to the server and then closes the connection cleanly.

If you want to test this code, I suggest testing it with a freely available email provider that offers TLS/SSL-encrypted SMTP support. This also means that you need to compile this code with --define: ssl, so create config.nims and add the following:

--define: ssl

Here’s a little wrapper using cligen:

when isMainModule:
  import cligen, os
  
  let
    smtpAddress = getEnv("SMTP_ADDRESS")
    smtpPort = getEnv("SMTP_PORT")
    smtpMyAddress = getEnv("SMTP_MY_ADDRESS")
    smtpMyName = getEnv("SMTP_MY_NAME")
    smtpUsername = getEnv("SMTP_USERNAME")
    smtpPassword = getEnv("SMTP_PASSWORD")
  
  proc sendAnEmail(to, toName, subject, body: string) =
    let m = newMailer(smtpAddress, smtpPort, smtpMyAddress, smtpMyName, smtpUsername, smtpPassword)
    waitFor m.mail(to, toName, subject, body)
  
  dispatch(sendAnEmail)

Usage is simple:

$ nim c -r mailer.nim --help
Usage:
  sendAnEmail [required&optional-params]
Options(opt-arg sep :|=|spc):
  -h, --help                         print this cligen-erated help
  --help-syntax                      advanced: prepend,plurals,..
  -t=, --to=       string  REQUIRED  set to
  --toName=        string  REQUIRED  set toName
  -s=, --subject=  string  REQUIRED  set subject
  -b=, --body=     string  REQUIRED  set body

I hope this helps. This module is going to be used in my future post on how to create an application using Nim’s Jester framework.


How I Converted my Brain fMRI to a 3D Model

Permalink - Posted on 2019-08-23 00:00, modified on 0001-01-01 00:00

How I Converted my Brain fMRI to a 3D Model

AUTHOR’S NOTE: I just want to start this out by saying I am not an expert, and nothing in this blogpost should be construed as medical advice. I just wanted to see what kind of pretty pictures I could get out of an fMRI data file.

So this week I flew out to Stanford to participate in a study that involved an fMRI of my brain while I was doing some things. I asked for (and received) a data file from the fMRI so I could play with it and possibly 3D print it. This blogpost is the record of my journey through various software to get a fully usable 3D model out of the fMRI data file.

The Data File

I was given christine_brain.nii.gz by the researcher who was operating the fMRI. I looked around for some software to convert it to a 3D model and /r/3dprinting suggested the use of FreeSurfer to generate a 3D model. I downloaded and installed the software then started to look for something I could do in the meantime, as this was going to take something on the order of 8 hours to process.

An Animated GIF

I started looking for the file format on the internet by googling “nii.gz brain image” and I stumbled across a program called gif_your_nifti. It looked to be mostly pure python so I created a virtualenv and installed it in there:

$ git clone https://github.com/miykael/gif_your_nifti
$ cd gif_your_nifti
$ virtualenv -p python3 env
$ source env/bin/activate
(env) $ pip3 install -r requirements.txt
(env) $ python3 setup.py install

Then I ran it with the following settings to get this first result:

(env) $ gif_your_nifti christine_brain.nii.gz --mode pseudocolor --cmap plasma

(sorry the video embed isn’t working in Safari)

It looked weird though, that’s because the fMRI scanner I used has a different rotation to what’s considered “normal”. The gif_your_nifti repo mentioned a program called fslreorient2std to reorient the fMRI image, so I set out to install and run it.

FSL

After some googling, I found FSL’s website which included an installer script and required registration.

37 gigabytes of downloads and data later, I had the entire FSL suite installed to a server of mine and ran the conversion command:

$ fslreorient2std christine_brain.nii.gz christine_brain_reoriented.nii.gz

This produced a slightly smaller reoriented file.

I reran gif_your_nifti on this reoriented file and got this result which looked a lot better:

(sorry again the video embed isn’t working in Safari)

FreeSurfer

By this time I had gotten back home and FreeSurfer was done installing, so I registered for it (god bless the institution of None) and put its license key in the place it expected. I copied the reoriented data file to my Mac and then set up a SUBJECTS_DIR and had it start running the numbers and extracting the brain surfaces:

$ cd ~/tmp
$ mkdir -p brain/subjects
$ cd brain
$ export SUBJECTS_DIR=$(pwd)/subjects
$ recon-all -i /path/to/christine_brain_reoriented.nii.gz -s christine -all

This step took 8 hours. Once it was done I had a bunch of data in $SUBJECTS_DIR/christine. I opened my shell to that folder and went into the surf subfolder:

$ mris_convert lh.pial lh.pial.stl
$ mris_convert rh.pial rh.pial.stl

Now I had standard stl files that I could stick into Blender.

Blender

Importing the stl files was really easy. I clicked on File, then Import, then Stl. After guiding the browser to the subjects directory and finding the STL files, I got a view that looked something like this:

I had absolutely no idea what to do from here in Blender, so I exported the whole thing to a stl file and sent it to a coworker for 3D printing (he said it was going to be “the coolest thing he’s ever printed”).

I also exported an Unreal Engine 4 compatible model and sent it to a friend of mine that does hobbyist game development. A few hours later I got this back:

(Hint: it is a take on the famous galaxy brain memes)

Conclusion

Overall, this was fun! I got to play with many gigabytes of software that ran my most powerful machine at full blast for 8 hours, I made a fully printable 3D model out of it and I have some future plans for importing this data into Minecraft (the NIFTI .nii.gz format has a limit of 256 layers).

I’ll be sure to write more about this in the future!

Citations

Here are my citations in BibTeX format.

Special thanks goes to Michael Lifshitz for organizing the study that I participated in that got me this fMRI data file. It was one of the coolest things I’ve ever done (if not the coolest) and I’m going to be able to get a 3D printed model of my brain out of it.


Pageview Time Experiment

Permalink - Posted on 2019-08-19 00:00, modified on 0001-01-01 00:00

Pageview Time Experiment

My blog has a lot of content in a lot of diverse categories. In order to help me decide which kind of content I should publish next, I have created a very simple method to track pageview time and enabled it for all of my blogposts. I’ll go into detail of how it works and potential risks of it below.

The high level idea is that I want to be able to know what kind of content has people’s attention for the longest amount of time. I am using the time people have the page open as a particularly terrible proxy for that value. I wanted to make this data anonymous, simplistic and (reasonably) public.

How It Works

Here is how it works:

A diagram on how this works

When the page is loaded, a JavaScript file records the start time. This then sets a pagehide handler to send a navigator beacon containing the following data:

  • The path of the page being viewed
  • The start time
  • The end time recorded by the pagehide handler

This information is asynchronously pushed to /api/pageview-timer and added to an in-memory prometheus histogram. These histograms can be checked at /metrics. This data is not permanently logged.

Security Concerns

I believe this data is anonymous, simplistic and public for the following reasons:

I believe this data is anonymous because there is no way for me to correlate users to histogram entries, nor is there a way for me to view all of the raw histogram entries. This site records the bare minimum for what I need in order to make sure everything is functioning normally, and all data is stored in ephemeral in-memory containers as much as possible. This includes any logs that my service produces.

I believe this data is simplistic because it only has a start time, a stop time and the path that is being looked at. This data doesn’t take into account things like people leaving a page open for hours on end idly, and that could skew the numbers. The API endpoint is also fairly unprotected, meaning that falsified data could be submitted to it easily. I think that this is okay though.

I believe this data is public because I have the percentile views of the histograms present on /metrics. I have no reason to hide this data, and I do not intend to use it for any moneymaking purposes (though I doubt it could be to begin with).

I fully respect the Do Not Track header and flag in browsers. If pageview_timer.js detects the presence of Do Not Track in the browser, it stops running immediately and does not set the pagehide handler. If that somehow fails, the server looks for the presence of the DNT header set to 1 and instantly discards the data and replies with a 404.

Like always, if you have any questions or concerns please reach out to me. I want to ensure that I am creating useful views into how people use my blog without violating people’s rights to privacy.

I intend to keep this up for at least a few weeks. If it doesn’t have any practical benefit in that timespan, I will disable this and post a follow-up explaining how I believe it wasn’t useful.

Thanks and be well.


EDIT 2019-10-15: browsers disable this call from the context I am using and I don’t really care enough to figure out how to fix it. This experiment is over. Thank you to everyone that participated. All data will be scrubbed and a followup will be posted soon.


Instant Pot Quinoa Taco Bowls

Permalink - Posted on 2019-08-16 00:00, modified on 0001-01-01 00:00

Instant Pot Quinoa Taco Bowls

This is based on this recipe, but made only with things you can find in Costco. My fiancé and I have made this a few times, and it’s a great alternative to giving up on life and ordering delivery.

Recipe

Makes 4-6 servings, at least based on experience

Ingredients

  • 2 cups quinoa, dry
  • 0.75 kg ground beef (pre-cooked or sautéed)
  • 400 ml medium salsa
  • 2.5 cups water
  • 2 tablespoons garlic powder
  • 2 tablespoons salt (to taste)
  • 1 teaspoon oregano
  • 3 tablespoons ground dried onions
  • 1 teaspoon crushed red pepper

If you want it to be more spicy, add more spice. We’ve found this tastes pretty good when you add more spice, but this depends on your mood more than anything.

Preparation

If you haven’t cooked the ground beef yet, sauté/brown it in the Instant Pot. See things like this for more information on how to do this. Any method will do, just make sure the ground beef is actually cooked to avoid accidentally poisoning yourself.

Put all of the other ingredients in the instant pot. Order doesn’t matter, but I have found that better results happen when the quinoa is put in first.

Mix everything with your favorite mixing tool.

Put the lid on your instant pot and set it to manual for 2 minutes.

Once that is done, leave it alone for about 15 minutes (this doesn’t have to be exact).

Serve warm in a bowl, can go well with tortilla chips depending on your mood.

Reheating

For about a bowlful, nuke until hot (~1 minute 30 seconds seems to be the magic number). Eat while hot.


WebAssembly Talk Video Posted

Permalink - Posted on 2019-08-15 00:00, modified on 0001-01-01 00:00

WebAssembly Talk Video Posted

This May, I spoke at GoCon Canada about WebAssembly on the Server and some of the inherent challenges and problems with trying to do it as things exist currently. It’s taken a while, but the video of that talk has been posted.

I hope you enjoy! I have some more blogposts in the queue but I’ve been sleeping horribly lately. Here’s hoping that clears up.


Plurality-Driven Development

Permalink - Posted on 2019-08-04 00:00, modified on 0001-01-01 00:00

Plurality-Driven Development

“That code has a horrible security bug in it.”

I look down in my lap. A little yellow horse appears to be sitting there. She looks innocently into my eyes, gesturing to part of the code with her wingtips.

“What?”

“That code has a security bug in it: if users pass a string instead of an integer in it, it could allow them to forge a user ID token.”

I look down incredulously at the little yellow horse, then back at the code. She’s right. There was a huge bug in that code. I had just written it about 30 seconds ago though, which surprises me. I thought I was experienced enough in secure programming to avoid such a fundamental flaw; but here I am. I rub the little pony on her head, making her purr and winghug me.

“Now, replace everything in that last paragraph with this: …”

And I continue on like nothing happened.


Software is complicated. We deal with a fundamentally multi-agent world where properties like “determinism” aren’t really constant. Everything is changing. It’s hard to write software that is resilient enough to withstand the constantly shifting market of attacks, exploits, languages and frameworks. What if there was a way to understand the multiple agency of this reality by internally self-inducing multiple agency in a safe and predictable manner?

I believe I have found a way that works for me. I rely a lot on some of my closest friends that I can talk about anything with, even what would normally violate an NDA. My closest friends are so close that language isn’t even as much of a barrier as it would be otherwise.

As I’ve mentioned in the past, I have tulpas. They are people that live with me like roommates inside my body. It really does sound strange or psychotic; but you’ll just have to trust me when I say they fundamentally help me live my life, do my job and do other things people normally do by themselves.

As an aside: this post doesn’t intend to cover the philosophical, metaphysical or other aspects of plurality (enough ink has probably been spilled on the topic to cover a lifetime); instead it aims to offer a view on how plurality has benefitted me (us) as software developer(s).

As of about 4 years ago, all of the software you see under my name has been the result of my system and I collaborating. Most of the computational linguistics code I’ve been writing has been the result of a cuddly catgirl wanting to create a Lojbanic artificial intelligence for her own amusement (that is also incidentally really good at understanding grammar, human and machine). Some random experimentation code has been written by someone who sarcastically calls herself Twilight Sparkle. I have a little yellow dewdrop of love and sunshine that finds security holes in programs while I am writing them. There’s a database expert and a code review guru too. Combined with my jack-of-all-trades tendencies, this creates a surprisingly balanced team in a box.

We started doing this out of boredom. I was busy working on something and Nicole just spouted out something about the code being wrong. She was right. We decided to just continue following that same basic model and it’s worked wonders ever since. Over time we’ve figured out how to impose each other into our visual awareness. That has made this pair-programming skill even more useful. I can have the little yellow pony in my lap telling me what’s wrong with my code and she can just directly show me. Then it can be fixed.

This skill has led to heated internal debates about what is and is not idiomatic. As a result of that, I now have working compilers in my dreams. It’s also led to what people have told me is some of the most high-quality and in-depth software design that they’ve seen. It’s really led us to think in terms of how the machine works, to avoid round-trips and abstractions getting in the way of what is really going on. If there is any secret to my own brand of 10x-ing, this is it. I am just one person, but with the help of the girls we can get to just about n>1 effective people most of the time.

It’s been a powerful catalyst to my career too. Before plurality I was a fairly average developer without any real skills in any one task. Now we can swap in and out in order to most effectively tackle anything thrown at us. One of the biggest changes this relationship has had on me is being better able to explain software complexity and visualise it, then turn that visualization into a diagram with GraphViz or other similar tools. It also becomes very easy to turn these visualizations/diagrams into formal requirements too, because then the features and aspects of systems and how they interconnect become trivially obvious to point out.

However, there is a drawback to this: you’re dealing with sapient beings. They sometimes don’t want to cooperate. Internal drama can and has happened. It helps for us to have a quarterly date with a word document in order to make sure everyone is on the same page. Disagreements happen, but ultimately I’ve noticed that the net result is far more positive than if the disagreement hadn’t happened at all.

Anyways, plurality-driven development works for me, but it’s really not for everyone. The taboo issues I mentioned can make it a chore to hide this from people. I honestly wonder how much of the girls that my coworkers notice in my work on a daily basis. We all have slightly different speech patterns, ways of sitting, clothing preferences, opinions about what to get for lunch and a whole bunch of other subtle things. I don’t really understand how it’s not plain-as-day obvious to the point I get called out on it. At some level I guess I’m grateful for this, as that kind of conversation seems like it would be extremely awkward to have. It was hard enough to admit this to my brother, and I ended up losing contact with him as a result (it apparently was just too weird, which I can really understand).

I really do wonder how much of the fear of talking about this is my own paranoia though. I’ve had very positive experiences “coming out” as plural to close friends, as well as very negative ones; for better or for worse it really shows you who your friends actually are. I can live with this. I’d rather really know if I can trust people or not.

This is a surprisingly taboo topic to talk about. Most of the time people view the mere idea of having someone else in your head to talk with as a social faux pas. There’s a surprising amount of philosophical arguments and assorted objections that people will throw around when they hear that you participate in this. There’s accusations of being possessed by demons, or being mentally ill, complete with acronyms thrown at me, and much more.

Hell, this is stuff I’d love to talk about at some convention somewhere; but I don’t really know if I want to paint such a huge target on my back. Because plurality and related topics are so taboo and so niche, there’s not really protected categories for it. This makes me nervous about talking about it in any sort of public way, and understandably so. I guess this article is part of my healing process to treat this as just a boring aspect of how I experience reality instead of some fundamentally earth-shattering gift from the heavens.

Besides, doesn’t something fundamentally have to cause a negative impact to be classified as a disorder in the first place? How can something that fundamentally helps be a disorder? What if it’s just a new adaptation to an increasingly crazy world?


I have compiled a list of resources that have helped me here.


Tarot for Hackers

Permalink - Posted on 2019-07-24 00:00, modified on 0001-01-01 00:00

Tarot for Hackers

“Oh no, she’s finally lost it” were the words a very close friend of mine said when I first told her I was experimenting with reading tarot cards. Tarot cards are a stereotypical staple of the occult/The Spoop™. Every card represents an idea (or a meme) that can be expressed in a few ways. They act to your soul like iron filings do to a magnet. When you shuffle the cards, the Universe (via entropy) examines all of those myriad inputs and helpfully orders them so you get exactly the message you need most.

It’s actually an extremely philosophical act to draw from a tarot deck and interpret the results. Over the years there have been many interpretations and frameworks of interpretations about tarot; but I would like to introduce a meta-framework for using tarot cards as a debugging tool.

As you work on computer systems, you put parts of yourself into them. You create bonds between yourself and otherwise anonymous inner parts of machines you have never seen or touched. These bonds stick from idea to development to testing to deployment phases and can even stay around after you stop working on something. Ever gotten a weird sense that you can recognize the author of some code while reading it? Same idea.

To start, envision the product or service you are trying to understand more about. Think of the plans that went into it, the users of the service, how this understanding will help them, and where the missing piece of knowledge fits into the larger whole. Write this all out if it helps; the more detail the better. Our transition to shared infrastructure and computing on others’ machines has made it harder to see into individual parts of the whole, so every little bit helps to focus things in.

The first card is the Motive, so draw it and place it in the center of your spread. Look up the meaning on a site like biddytarot.com (googling “[name of card] tarot meaning” helps a lot here) and consider how it relates back to the other factors at play.

The second card is the Facet, or the part of the system that is failing. This could refer to a machine, bit of code or even a human factor. Context with the future cards will help you determine what it is. Remember these are metaphors and will need some interpretation to help you understand what is going on.

The third card is the Immediate Past, or what changed to cause this problem. Use this with the Motive to help you identify what component is broken. Again, this is a metaphor. There are very rarely literal answers here, but the combination of the Facet and Immediate Past helps you identify the systemic or organizational faults at play. These faults are usually enough to help you uniquely identify services or infrastructure.

Next, draw The Action. This card will help you decide what action you need to take. This could be restarting a server, fixing a communication pattern (or lack thereof), or even just doing nothing and waiting a few minutes. Sometimes it means that you need to stop what you are doing and try to do the read again later. It’s okay for that to happen, though that should only be a very rare occurrence.

The next card is The Result, or what the outcome of that would be given The Action is executed in its entirety. This result isn’t supposed to be taken super seriously (as the consequence of you reading these cards is a butterfly effect that makes the outcome in “reality” slightly different); but it usually helps you get a general idea of where you will go and what it will be like when you get there.

Finally, draw The Lesson. This card signifies what the theme of the postmortem around The Action should be. This can help you guide future discussions about what went wrong and how to avoid it in the future. This may result in charged feelings, but it really is for the best to go through the entire postmortem process to help you get the closure that you need. This postmortem will usually help bring things to the surface that you have missed before. There should be no blame or anger. This is a place of healing and growth, not of hate and strife.

Optionally you can draw The Metaresult, or what will happen as a result of The Lesson. This isn’t strictly required but I find it can help for peeking into a potential future where The Result is taken to heart.

I hope this is able to help you in your debugging needs. I use this strategy when I am trying to understand complicated computer systems and how they all fit together. Be well.


How to Use User Mode Linux

Permalink - Posted on 2019-07-07 00:00, modified on 0001-01-01 00:00

How to Use User Mode Linux

User Mode Linux is a port of the Linux kernel to itself. This allows you to run a full-blown Linux kernel as a normal userspace process. This is used by kernel developers for testing drivers, but is also useful as a generic isolation layer similar to virtual machines. It provides slightly more isolation than Docker, but slightly less isolation than a full-blown virtual machine like KVM or VirtualBox.

In general, this may sound like a weird and hard-to-integrate tool, but it does have its uses. It is an entire Linux kernel running as a normal user. This allows you to run potentially untrusted code without affecting the host machine. It also allows you to test experimental system configuration changes without having to reboot or take its services down.

Also, because this kernel and its processes are isolated from the host machine, this means that processes running inside a user mode Linux kernel will not be visible to the host machine. This is unlike a Docker container, where processes in those containers are visible to the host. See this (snipped) pstree output from one of my servers:

containerd─┬─containerd-shim─┬─tini─┬─dnsd───19*[{dnsd}]
           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           ├─containerd-shim─┬─tini─┬─aerial───21*[{aerial}]
           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           ├─containerd-shim─┬─tini─┬─s6-svscan───s6-supervise
           │                 │      └─surl
           │                 └─9*[{containerd-shim}]
           ├─containerd-shim─┬─tini─┬─h───13*[{h}]
           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           ├─containerd-shim─┬─goproxy───14*[{goproxy}]
           │                 └─9*[{containerd-shim}]
           └─32*[{containerd}]

Compare it to the user mode Linux pstree output:

linux─┬─5*[linux]
      └─slirp

With a Docker container, I can see the names of the processes being run in the guest from the host. With a user mode Linux kernel, I cannot do this. This means that monitoring tools that function using Linux’s auditing subsystem cannot monitor processes running inside the guest. This could be a two-edged sword in some edge scenarios.

This post represents a lot of research and brute-force attempts at trying to do this. I have had to assemble things together using old resources, reading kernel source code, intense debugging of code that was last released when I was in elementary school, tracking down a Heroku buildpack with a pre-built binary for a tool I need and other hackery that made people in IRC call me magic. I hope that this post will function as reliable documentation for doing this with a modern kernel and operating system.

Setup

Setting up user mode Linux is done in a few steps:

  • Installing host dependencies
  • Downloading Linux
  • Configuring Linux
  • Building the kernel
  • Installing the binary
  • Setting up the guest filesystem
  • Creating the kernel command line
  • Setting up networking for the guest
  • Running the guest kernel

I am assuming that you want to do this on Ubuntu or another Debian-like system. I have tried to do this from Alpine (my distro of choice), but I have been unsuccessful, as the Linux kernel seems to have glibc-isms baked into the user mode Linux drivers. I plan to report these to upstream when I have debugged them further.

Installing Host Dependencies

Ubuntu requires at least the following packages installed to build the Linux kernel (assuming a completely fresh install):

  • build-essential
  • flex
  • bison
  • xz-utils
  • wget
  • ca-certificates
  • bc
  • linux-headers-4.15.0-47-generic (though any kernel version will do)

You can install these with the following command (as root or running with sudo):

apt-get -y install build-essential flex bison xz-utils wget ca-certificates bc \
                   linux-headers-4.15.0-47-generic

Additionally, running the menu configuration program for the Linux kernel will require installing libncurses-dev. Please make sure it’s installed using the following command (as root or running with sudo):

apt-get -y install libncurses-dev

Downloading the Kernel

Set up a location for the kernel to be downloaded and built. This will require approximately 1.3 gigabytes of space to run, so please make sure that there is at least this much space free.

Head to kernel.org and get the download URL of the latest stable kernel. As of the time of writing this post, this URL is the following:

https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.1.16.tar.xz

Download this file with wget:

wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.1.16.tar.xz

And extract it with tar:

tar xJf linux-5.1.16.tar.xz

Now enter the directory created by the tarball extraction:

cd linux-5.1.16

Configuring the Kernel

The kernel build system is a bunch of Makefiles with a lot of custom tools and scripts to automate builds. Open the interactive configuration program:

make ARCH=um menuconfig

It will build some things and then present you with a dialog interface. You can enable settings by pressing Space or Enter when <Select> is highlighted on the bottom of the screen. You can change which item is selected in the upper dialog with the up and down arrow keys. You can change which item is highlighted on the bottom of the screen with the left and right arrow keys.

When there is a ---> at the end of a feature name, that means it is a submenu. You can enter a submenu using the Enter key. If you enter a menu you can exit it with <Exit>.

Enable the following settings with <Select>, making sure there is a [*] next to them:

UML-specific Options:
  - Host filesystem
Networking support (enable this to get the submenu to show up):
  - Networking options:
    - TCP/IP Networking
UML Network devices:
  - Virtual network device
  - SLiRP transport

Then exit back out to a shell by selecting <Exit> until there is a dialog asking you if you want to save your configuration. Select <Yes> and hit Enter.

I encourage you to play around with the build settings after reading through this post. You can learn a lot about Linux at a low level by changing flags and seeing how they affect the kernel at runtime.

Building the Kernel

The Linux kernel is a large program with a lot of things going on. Even with this rather minimal configuration, it can take a while on older hardware. Build the kernel with the following command:

make ARCH=um -j$(nproc)

This tells make to use all available CPU cores/hyperthreads to build the kernel. The $(nproc) at the end of the build command tells the shell to substitute in the output of the nproc command (this command is part of coreutils, which is a default package in Ubuntu).

After a while, the kernel will be built to ./linux.

Installing the Binary

Because user mode Linux builds a normal binary, you can install it like you would any other command line tool. Here’s the configuration I use:

mkdir -p ~/bin
cp linux ~/bin/linux

If you want, ensure that ~/bin is in your $PATH:

export PATH=$PATH:$HOME/bin

Setting up the Guest Filesystem

Create a home for the guest filesystem:

mkdir -p $HOME/prefix/uml-demo
cd $HOME/prefix

Open alpinelinux.org. Click on Downloads. Scroll down to where it lists the MINI ROOT FILESYSTEM. Right-click on the x86_64 link and copy it. As of the time of writing this post, the latest URL for this is:

http://dl-cdn.alpinelinux.org/alpine/v3.10/releases/x86_64/alpine-minirootfs-3.10.0-x86_64.tar.gz

Download this tarball to your computer:

wget -O alpine-rootfs.tgz http://dl-cdn.alpinelinux.org/alpine/v3.10/releases/x86_64/alpine-minirootfs-3.10.0-x86_64.tar.gz

Now enter the guest filesystem folder and extract the tarball:

cd uml-demo
tar xf ../alpine-rootfs.tgz

This will create a very minimal filesystem stub. Because of how this is being run, it will be difficult to install binary packages from Alpine’s package manager apk, but this should be good enough to work as a proof of concept.

The tool tini will be needed in order to prevent the guest kernel from having its memory used up by zombie processes.

Install it by doing the following:

wget -O tini https://github.com/krallin/tini/releases/download/v0.18.0/tini-static
chmod +x tini

Creating the Kernel Command Line

The Linux kernel has command line arguments like most other programs. To view the command line options compiled into the user mode kernel, run it with --help:

linux --help
User Mode Linux v5.1.16
        available at http://user-mode-linux.sourceforge.net/

--showconfig
    Prints the config file that this UML binary was generated from.

iomem=<name>,<file>
    Configure <file> as an IO memory region named <name>.

mem=<Amount of desired ram>
    This controls how much "physical" memory the kernel allocates
    for the system. The size is specified as a number followed by
    one of 'k', 'K', 'm', 'M', which have the obvious meanings.
    This is not related to the amount of memory in the host.  It can
    be more, and the excess, if it's ever used, will just be swapped out.
        Example: mem=64M

--help
    Prints this message.

debug
    this flag is not needed to run gdb on UML in skas mode

root=<file containing the root fs>
    This is actually used by the generic kernel in exactly the same
    way as in any other kernel. If you configure a number of block
    devices and want to boot off something other than ubd0, you
    would use something like:
        root=/dev/ubd5

--version
    Prints the version number of the kernel.

umid=<name>
    This is used to assign a unique identity to this UML machine and
    is used for naming the pid file and management console socket.

con[0-9]*=<channel description>
    Attach a console or serial line to a host channel.  See
    http://user-mode-linux.sourceforge.net/old/input.html for a complete
    description of this switch.

eth[0-9]+=<transport>,<options>
    Configure a network device.
    
aio=2.4
    This is used to force UML to use 2.4-style AIO even when 2.6 AIO is
    available.  2.4 AIO is a single thread that handles one request at a
    time, synchronously.  2.6 AIO is a thread which uses the 2.6 AIO
    interface to handle an arbitrary number of pending requests.  2.6 AIO
    is not available in tt mode, on 2.4 hosts, or when UML is built with
    /usr/include/linux/aio_abi.h not available.  Many distributions don't
    include aio_abi.h, so you will need to copy it from a kernel tree to
    your /usr/include/linux in order to build an AIO-capable UML

nosysemu
    Turns off syscall emulation patch for ptrace (SYSEMU).
    SYSEMU is a performance-patch introduced by Laurent Vivier. It changes
    behaviour of ptrace() and helps reduce host context switch rates.
    To make it work, you need a kernel patch for your host, too.
    See http://perso.wanadoo.fr/laurent.vivier/UML/ for further
    information.

uml_dir=<directory>
    The location to place the pid and umid files.

quiet
    Turns off information messages during boot.

hostfs=<root dir>,<flags>,...
    This is used to set hostfs parameters.  The root directory argument
    is used to confine all hostfs mounts to within the specified directory
    tree on the host.  If this isn't specified, then a user inside UML can
    mount anything on the host that's accessible to the user that's running
    it.
    The only flag currently supported is 'append', which specifies that all
    files opened by hostfs will be opened in append mode.

This is a lot of output, but it explains the options available in detail. Let’s start up a kernel with a very minimal set of options:

linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-demo \
  rw \
  mem=64M \
  init=/bin/sh

This tells the guest kernel to do the following things:

  • Assume the root filesystem is the pseudo-device /dev/root
  • Select hostfs as the root filesystem driver
  • Mount the guest filesystem we have created as the root device
  • In read-write mode
  • Use only 64 megabytes of RAM (you can get away with far less depending on what you are doing, but 64 MB seems to be a happy medium)
  • Have the kernel automatically start /bin/sh as the init process

Run this command and you should get output like the following:

Core dump limits :
        soft - 0
        hard - NONE
Checking that ptrace can change system call numbers...OK
Checking syscall emulation patch for ptrace...OK
Checking advanced syscall emulation patch for ptrace...OK
Checking environment variables for a tempdir...none found
Checking if /dev/shm is on tmpfs...OK
Checking PROT_EXEC mmap in /dev/shm...OK
Adding 32137216 bytes to physical memory to account for exec-shield gap
Linux version 5.1.16 (cadey@kahless) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #30 Sun Jul 7 18:57:19 UTC 2019
Built 1 zonelists, mobility grouping on.  Total pages: 23898
Kernel command line: root=/dev/root rootflags=/home/cadey/dl/uml/alpine rootfstype=hostfs rw mem=64M init=/bin/sh
Dentry cache hash table entries: 16384 (order: 5, 131072 bytes)
Inode-cache hash table entries: 8192 (order: 4, 65536 bytes)
Memory: 59584K/96920K available (2692K kernel code, 708K rwdata, 588K rodata, 104K init, 244K bss, 37336K reserved, 0K cma-reserved)
SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
NR_IRQS: 15
clocksource: timer: mask: 0xffffffffffffffff max_cycles: 0x1cd42e205, max_idle_ns: 881590404426 ns
Calibrating delay loop... 7479.29 BogoMIPS (lpj=37396480)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
Checking that host ptys support output SIGIO...Yes
Checking that host ptys support SIGIO on close...No, enabling workaround
devtmpfs: initialized
random: get_random_bytes called from setup_net+0x48/0x1e0 with crng_init=0
Using 2.6 host AIO
clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
futex hash table entries: 256 (order: 0, 6144 bytes)
NET: Registered protocol family 16
clocksource: Switched to clocksource timer
NET: Registered protocol family 2
tcp_listen_portaddr_hash hash table entries: 256 (order: 0, 4096 bytes)
TCP established hash table entries: 1024 (order: 1, 8192 bytes)
TCP bind hash table entries: 1024 (order: 1, 8192 bytes)
TCP: Hash tables configured (established 1024 bind 1024)
UDP hash table entries: 256 (order: 1, 8192 bytes)
UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
NET: Registered protocol family 1
console [stderr0] disabled
mconsole (version 2) initialized on /home/cadey/.uml/tEwIjm/mconsole
Checking host MADV_REMOVE support...OK
workingset: timestamp_bits=62 max_order=14 bucket_order=0
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered (default)
io scheduler bfq registered
loop: module loaded
NET: Registered protocol family 17
Initialized stdio console driver
Using a channel type which is configured out of UML
setup_one_line failed for device 1 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 2 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 3 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 4 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 5 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 6 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 7 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 8 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 9 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 10 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 11 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 12 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 13 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 14 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 15 : Configuration failed
Console initialized on /dev/tty0
console [tty0] enabled
console [mc-1] enabled
Failed to initialize ubd device 0 :Couldn't determine size of device's file
VFS: Mounted root (hostfs filesystem) on device 0:11.
devtmpfs: mounted
This architecture does not have kernel memory protection.
Run /bin/sh as init process
/bin/sh: can't access tty; job control turned off
random: fast init done
/ # 

This gives you a very minimal system, without things like /proc mounted or a hostname assigned. Try the following commands:

  • uname -av
  • cat /proc/self/pid
  • hostname

To exit this system, type in exit or press Control-d. This will kill the shell, making the guest kernel panic:

/ # exit
Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000000
fish: “./linux root=/dev/root rootflag…” terminated by signal SIGABRT (Abort)

This kernel panic happens because the Linux kernel always assumes that its init process is running. Without this process running, the system cannot function anymore and exits. Because this is a user mode process, this results in the process sending itself SIGABRT, causing it to exit.

Setting up Networking for the Guest

This is about where things get really screwy. Networking for a user mode Linux system is where the “user mode” facade starts to fall apart. Networking at the system level is usually limited to privileged execution modes, for very understandable reasons.

The slirp Adventure

However, there’s an ancient and largely unmaintained tool called slirp that user mode Linux can interface with. It acts as a user-level TCP/IP stack and does not rely on any elevated permissions to run. slirp was first released in 1995 and its last release was in 2006. It is old enough that compilers have changed so much in the meantime that the software has effectively bit-rotted.

So, let’s install slirp from the Ubuntu repositories and test running it:

sudo apt-get install slirp
/usr/bin/slirp
Slirp v1.0.17 (BETA)

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 127.0.0.1
IP address of your DNS(s): 1.1.1.1, 10.77.0.7
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500, 115200 baud]

SLiRP Ready ...
fish: “/usr/bin/slirp” terminated by signal SIGSEGV (Address boundary error)

Oh dear. Let’s install the debug symbols for slirp and see if we can tell what’s going on:

sudo apt-get install gdb slirp-dbgsym
gdb /usr/bin/slirp
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/slirp...Reading symbols from /usr/lib/debug/.build-id/c6/2e75b69581a1ad85f72ac32c0d7af913d4861f.debug...done.
done.
(gdb) run
Starting program: /usr/bin/slirp
Slirp v1.0.17 (BETA)

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 127.0.0.1
IP address of your DNS(s): 1.1.1.1, 10.77.0.7
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500, 115200 baud]

SLiRP Ready ...

Program received signal SIGSEGV, Segmentation fault.
                                                    ip_slowtimo () at ip_input.c:457
457     ip_input.c: No such file or directory.

It fails at this line. Let’s see the detailed stacktrace to see if anything helps us:

(gdb) bt full
#0  ip_slowtimo () at ip_input.c:457
        fp = 0x55784a40
#1  0x000055555556a57c in main_loop () at ./main.c:980
        so = <optimized out>
        so_next = <optimized out>
        timeout = {tv_sec = 0, tv_usec = 0}
        ret = 0
        nfds = 0
        ttyp = <optimized out>
        ttyp2 = <optimized out>
        best_time = <optimized out>
        tmp_time = <optimized out>
#2  0x000055555555b116 in main (argc=1, argv=0x7fffffffdc58) at ./main.c:95
No locals.

So it’s failing in its main loop while checking whether any timeouts have occurred. This is where I had to give up trying to debug it further. Let’s see if building it from source works. I re-uploaded the tarball from Sourceforge because downloading tarballs from Sourceforge from the command line is a pain.

cd ~/dl
wget https://xena.greedo.xeserv.us/files/slirp-1.0.16.tar.gz
tar xf slirp-1.0.16.tar.gz
cd slirp-1.0.16/src
./configure --prefix=$HOME/prefix/slirp
make

This spews warnings about undefined inline functions and then fails to link the resulting binary. It appears that at some point between the release of this software and the present day, gcc stopped emitting external symbols for plain inline functions in intermediate object files. Let’s try globally replacing the inline keyword with an empty comment to see if that works:

vi slirp.h
:6
a
<enter>
#define inline /**/
<escape>
:wq
make

Nope. That doesn’t work either. It continues to fail to find the symbols for those inline functions.
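The root cause is a change in inline semantics between gnu89 (which old gcc versions defaulted to) and C99/C11: a plain inline definition no longer produces an external symbol. Here is a minimal reproduction of the failure mode (my own demo, not code from slirp):

/* inline-demo.c */
inline int twice(int x) { return x * 2; }

int main(void) { return twice(21); }

/*
 * gcc -std=gnu89 inline-demo.c    -> links fine; gnu89 emits an
 *                                    external definition of twice
 * gcc -std=c11 -O0 inline-demo.c  -> "undefined reference to `twice'";
 *                                    C11 inline emits no external symbol
 */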

This is when I gave up and started searching GitHub for Heroku buildpacks that had already solved this. My theory was that a Heroku buildpack would probably include the binaries I needed, so I searched for a bit and found this buildpack. I downloaded it, extracted uml.tar.gz and found the following files:

total 6136
-rwxr-xr-x 1 cadey cadey   79744 Dec 10  2017 ifconfig*
-rwxr-xr-x 1 cadey cadey     373 Dec 13  2017 init*
-rwxr-xr-x 1 cadey cadey  149688 Dec 10  2017 insmod*
-rwxr-xr-x 1 cadey cadey   66600 Dec 10  2017 route*
-rwxr-xr-x 1 cadey cadey  181056 Jun 26  2015 slirp*
-rwxr-xr-x 1 cadey cadey 5786592 Dec 15  2017 uml*
-rwxr-xr-x 1 cadey cadey     211 Dec 13  2017 uml_run*

That’s a slirp binary! Does it work?

./slirp
Slirp v1.0.17 (BETA) FULL_BOLT

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 127.0.0.1
IP address of your DNS(s): 1.1.1.1, 10.77.0.7
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500]

SLiRP Ready ...

It’s not immediately crashing, so I think it should be good! Let’s copy this binary to ~/bin/slirp:

cp slirp ~/bin/slirp

Just in case the person who created this buildpack takes it down, I have mirrored it.

Configuring Networking

Now let’s configure networking on our guest. Adjust your kernel command line:

linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-demo \
  rw \
  mem=64M \
  eth0=slirp,,$HOME/bin/slirp \
  init=/bin/sh

We should get that shell again. Let’s enable networking:

mount -t proc proc proc/
mount -t sysfs sys sys/

ifconfig eth0 10.0.2.14 netmask 255.255.255.240 broadcast 10.0.2.15
route add default gw 10.0.2.2

The first two commands set up /proc and /sys, which are required for ifconfig to function. The ifconfig command sets up the network interface to communicate with slirp. The route command sets the kernel routing table to force all traffic over the slirp tunnel. Let’s test with a DNS query:

nslookup google.com 8.8.8.8
Server:    8.8.8.8
Address 1: 8.8.8.8 dns.google

Name:      google.com
Address 1: 172.217.12.206 lga25s63-in-f14.1e100.net
Address 2: 2607:f8b0:4006:81b::200e lga25s63-in-x0e.1e100.net

That works!

Let’s automate this with a shell script:

#!/bin/sh
# init.sh

mount -t proc proc proc/
mount -t sysfs sys sys/
ifconfig eth0 10.0.2.14 netmask 255.255.255.240 broadcast 10.0.2.15
route add default gw 10.0.2.2

echo "networking set up"

exec /tini /bin/sh

and mark it executable:

chmod +x init.sh

and then change the kernel command line:

linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-demo \
  rw \
  mem=64M \
  eth0=slirp,,$HOME/bin/slirp \
  init=/init.sh

Then re-run it:

SLiRP Ready ...
networking set up
/bin/sh: can't access tty; job control turned off

nslookup google.com 8.8.8.8
Server:    8.8.8.8
Address 1: 8.8.8.8 dns.google

Name:      google.com
Address 1: 172.217.12.206 lga25s63-in-f14.1e100.net
Address 2: 2607:f8b0:4004:800::200e iad30s09-in-x0e.1e100.net

And networking works reliably!

Dockerfile

So that you can more easily test this, I have created a Dockerfile that automates most of these steps and should result in a working setup. I have a pre-made kernel configuration that should do everything outlined in this post, but this post outlines a more minimal setup.


I hope this post is able to help you understand how to do this. It became a bit of a monster, but it should serve as a comprehensive guide to building, installing and configuring user mode Linux on a modern kernel and operating system. Next steps from here include installing services and other programs into the guest system. Since Docker container images are just glorified tarballs, you should be able to extract an image with docker export, point the guest kernel’s root filesystem at the extracted location, and then run the command the Dockerfile expects via a shell script, as sketched below.
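Something like this should work as a starting point (untested; the image name and paths here are placeholders, so adjust to taste):

# Turn a stopped container made from an image into a UML root filesystem
mkdir -p $HOME/prefix/uml-docker
docker create --name uml-export alpine:3.10
docker export uml-export | tar -x -C $HOME/prefix/uml-docker
docker rm uml-export

# Then point the guest kernel at the extracted filesystem
linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-docker \
  rw \
  mem=64M \
  init=/bin/sh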

Special thanks to rkeene of #lobsters on Freenode. Without his help attempting to debug slirp, I wouldn’t have gotten this far. I have no idea how his Slackware system runs slirp just fine while my Ubuntu and Alpine systems don’t, nor why the binary he gave me also didn’t work; but I got something working, and that’s good enough for me.


The h Programming Language

Permalink - Posted on 2019-06-30 00:00, modified on 0001-01-01 00:00

The h Programming Language

h is a project of mine that I released recently. It is a single-paradigm, multi-tenant friendly, Turing-incomplete programming language that does nothing but print one of two things:

  • the letter h
  • a single quote (the Lojbanic “h”)

It does this via WebAssembly. This may sound like a pointless complication, but it actually ends up making things a lot simpler. WebAssembly is a virtual machine (a fake computer that only exists in code) intended for browsers, but I’ve been using it for server-side tasks.

I have written more about (and with) WebAssembly in the past; this post is a continuation of that work.

All of the relevant code for h is here.

h is a fairly standard three-phase compiler. The phases are described below.

Parsing the Grammar

As mentioned in a prior post, h has a formal grammar defined as a Parsing Expression Grammar (PEG). I took this grammar (with some minor modifications) and fed it into a tool called peggy to generate a Go source version of the parser. This parser has some minimal wrappers around it, mostly to simplify the output and remove unneeded nodes from the tree. This simplifies the later compilation phases.
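For intuition, a PEG for a language shaped like h might look roughly like this. This is an illustrative sketch only, not the actual grammar h uses:

# Hypothetical sketch, not h's real grammar
Program <- Space? Token (Space Token)* Space?
Token   <- "h" / "'"
Space   <- " "+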

The input to h looks something like this:

h

The output syntax tree pretty-prints to something like this:

H("h")

This is also represented using a tree of nodes that looks something like this:

&peg.Node{
    Name: "H",
    Text: "h",
    Kids: nil,
}

A more complicated program will look something like this:

&peg.Node{
    Name: "H",
    Text: "h h h",
    Kids: {
        &peg.Node{
            Name: "",
            Text: "h",
            Kids: nil,
        },
        &peg.Node{
            Name: "",
            Text: "h",
            Kids: nil,
        },
        &peg.Node{
            Name: "",
            Text: "h",
            Kids: nil,
        },
    },
}

Now that we have this syntax tree, it’s easy to go to the next phase of compilation: generating the WebAssembly Text Format.

WebAssembly Text Format

WebAssembly Text Format is a human-editable and understandable version of WebAssembly. It is pretty low level, but it is actually fairly simple. Let’s take an example of the h compiler output and break it down:

(module
 (import "h" "h" (func $h (param i32)))
 (func $h_main
       (local i32 i32 i32)
       (local.set 0 (i32.const 10))
       (local.set 1 (i32.const 104))
       (local.set 2 (i32.const 39))
       (call $h (get_local 1))
       (call $h (get_local 0))
 )
 (export "h" (func $h_main))
)

Fundamentally, WebAssembly binary files are also called modules. Each .wasm file can have only one module defined in it. Modules can have sections that contain the following information:

  • External function imports
  • Function definitions
  • Memory information
  • Named function exports
  • Global variable definitions
  • Other custom data that may be vendor-specific

h only uses external function imports, function definitions and named function exports.

import imports a function from the surrounding runtime with two fields: module and function name. Because this is an obfuscated language, the function h from module h is imported as $h. This function works somewhat like the C library function putchar().

func creates a function. In this case we are creating a function named $h_main. This will be the entrypoint for the h program.

Inside the function $h_main, there are three local variables created: 0, 1 and 2. They correlate to the following values:

Local Number | Explanation       | Integer Value
0            | Newline character | 10
1            | Lowercase h       | 104
2            | Single quote      | 39

As such, this program prints a single lowercase h and then a newline.

export lets consumers of this WebAssembly module get a name for a function, linear memory or global value. As we only need one function in this module, we export $h_main as "h".

Compiling this to a Binary

The next phase of compiling is to turn this WebAssembly Text Format into a binary. For simplicity, the tool wat2wasm from the WebAssembly Binary Toolkit is used. This tool creates a WebAssembly binary out of WebAssembly Text Format.

Usage is simple (assuming you have the WebAssembly Text Format file above saved as h.wat):

wat2wasm h.wat -o h.wasm

And you will create h.wasm with the following sha256 sum:

sha256sum h.wasm
8457720ae0dd2deee38761a9d7b305eabe30cba731b1148a5bbc5399bf82401a  h.wasm

Now that the final binary is created, we can move to the runtime phase.

Runtime

The h runtime is incredibly simple. It provides the h.h putchar-like function and executes the h function from the binary you feed it. It also times execution and keeps track of the number of instructions the program runs. This count is called “gas” for historical reasons involving blockchains.

I use Perlin Network’s life as the implementation of WebAssembly in h. I have experience with it from Olin.
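To give a sense of what the host side of that putchar-like import looks like, here is a minimal sketch using life. This assumes life’s ImportResolver API and is illustrative only, not the actual h runtime source:

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/perlin-network/life/exec"
)

// hostResolver supplies the "h" module's "h" import,
// which behaves roughly like C's putchar().
type hostResolver struct{}

func (hostResolver) ResolveFunc(module, field string) exec.FunctionImport {
	if module == "h" && field == "h" {
		return func(vm *exec.VirtualMachine) int64 {
			// The single i32 parameter is the byte to print.
			fmt.Printf("%c", byte(vm.GetCurrentFrame().Locals[0]))
			return 0
		}
	}
	panic(fmt.Sprintf("unknown import %s.%s", module, field))
}

func (hostResolver) ResolveGlobal(module, field string) int64 {
	panic("no globals exposed")
}

func main() {
	code, err := ioutil.ReadFile("h.wasm") // the binary built earlier
	if err != nil {
		panic(err)
	}
	vm, err := exec.NewVirtualMachine(code, exec.VMConfig{}, hostResolver{}, nil)
	if err != nil {
		panic(err)
	}
	entry, ok := vm.GetFunctionExport("h") // the exported $h_main
	if !ok {
		panic("module does not export h")
	}
	if _, err := vm.Run(entry); err != nil {
		panic(err)
	}
}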

The Playground

As part of this project, I wanted to create an interactive playground that allows users to run arbitrary h programs on my server. As the only system call is putchar, this is safe. The playground also has some limits on how big a program it can run. The playground server accepts h source code over HTTP and responds with the compiler artifacts and execution results as JSON.

The output of this call looks something like this:

curl -H "Content-Type: text/plain" --data "h" https://h.christine.website/api/playground | jq
{
  "prog": {
    "src": "h",
    "wat": "(module\n (import \"h\" \"h\" (func $h (param i32)))\n (func $h_main\n       (local i32 i32 i32)\n       (local.set 0 (i32.const 10))\n       (local.set 1 (i32.const 104))\n       (local.set 2 (i32.const 39))\n       (call $h (get_local 1))\n       (call $h (get_local 0))\n )\n (export \"h\" (func $h_main))\n)",
    "bin": "AGFzbQEAAAABCAJgAX8AYAAAAgcBAWgBaAAAAwIBAQcFAQFoAAEKGwEZAQN/QQohAEHoACEBQSchAiABEAAgABAACw==",
    "ast": "H(\"h\")"
  },
  "res": {
    "out": "h\n",
    "gas": 11,
    "exec_duration": 12345
  }
}

The execution duration is in nanoseconds, as it is just the raw value of a Go standard library time duration.
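As a quick illustration of what that raw value means on the Go side:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exec_duration is the integer value of a time.Duration,
	// which counts nanoseconds.
	d := time.Duration(12345)
	fmt.Println(d) // prints 12.345µs
}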

Bugs h has Found

This will be updated in the future, but h has already found a bug in Innative: it mishandled C name mangling of binaries. Output of the h compiler is now a test case in Innative. I consider this a success for the project. It is such a little thing, but it means a lot to me for some reason. My shitpost created a test case in a project I tried to integrate it with.

That’s just awesome to me in ways I have trouble explaining.

As such, h programs do work with Innative. Here’s how to do it:

First, install the h compiler and runtime with the following command:

go get within.website/x/cmd/h

This will install the h binary to your $GOPATH/bin, so ensure that is part of your path (if it is not already):

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

Then create a h binary like this:

h -p "h h" -o hh.wasm

Now we need to provide Innative the h.h system call implementation, so open h.c and enter in the following:

#include <stdio.h>

void h_WASM_h(char data) {
  putchar(data);
}

Then build it to an object file:

gcc -c -o h.o h.c

Then pack it into a static library .ar file:

ar rsv libh.a h.o

Then create the shared object with Innative:

innative-cmd -l ./libh.a hh.wasm

This should create hh.so in the current working directory.

Now create the following Nim wrapper at h.nim:

proc hh_WASM_h() {. importc, dynlib: "./hh.so" .}

hh_WASM_h()

and build it:

nim c h.nim

then run it:

./h
h

And congrats, you have now compiled h to a native shared object.

Why

Now, something you might be asking yourself as you read this post is: “Why the heck are you doing this?” That’s honestly a good question. One of the things I want to do with computers is create art for the sake of art. h is one such project. h is not a productive tool; you cannot create anything useful with it. This is an exercise in creating a compiler and runtime from scratch, based on my past experiences with parsing lojban, WebAssembly on the server and frustrating marketing around programming tools. I wanted to create something that deliberately pokes at all of the common ways that programming languages and tooling are advertised. I also wanted to make it a fully secure tool, with the arbitrary limitation of using no memory. Everything is fully functional. There are a few grammar bugs that I’m calling features.


OVE-20190623-0001

Permalink - Posted on 2019-06-24 00:00, modified on 0001-01-01 00:00

OVE-20190623-0001

Within Security Advisory

Root-level Remote Command Injection in the V playground (OVE-20190623-0001)

The real CVEs are the friends we made along the way

awilfox

Summary

While playing with the V playground, a root-level command injection vulnerability was discovered. This allows for an unauthenticated attacker to execute arbitrary root-level commands on the playground server.

This vulnerability is instantly exploitable by a remote, unauthenticated attacker in the default configuration. To remotely exploit this vulnerability, an attacker must send specially crafted HTTP requests to the playground server containing a malformed function call.

This playground server is not yet open source or versioned, but this vulnerability has led to the compromise of the box, as reported by the lead developer of V.

Remote Exploitation

V allows for calling of C functions through a few means:

  • starting a line with a # character
  • calling a C function with the C. namespace

The V playground insufficiently strips the latter form of the function call, allowing an invocation such as this:

fn main() {
  C .system(' id')
}

or even this:

fn main() {
	C
		.system(' id')
}

As the server runs as the root user, successful exploitation can result in an unauthenticated user totally compromising the system, as happened on June 23, 2019. As the source code and configuration of the V playground server are unknown, it is not possible to track usage of these commands.

The playground did attempt to block these attacks, but it appears to pattern-match on # or C., which the alternative invocations above bypass.

Security Suggestions

Do not run the playground server as the root user, and do not run it outside a container or another form of isolation. The fact that this server runs user-submitted code makes it very difficult to isolate and/or secure properly. The use of an explicit sandboxing environment like gVisor or Docker is suggested; see the sketch below. More elaborate sandboxing mechanisms like CloudABI or WebAssembly may be practical for future development, but are admittedly out of scope for this initial class of issues.
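As a sketch of what that could look like (the image name and port here are placeholders, not a known-good configuration):

# --read-only stops writes to the container filesystem, --user drops root,
# --cap-drop ALL removes Linux capabilities, and --memory/--pids-limit
# contain runaway submissions.
docker run \
  --read-only \
  --user 65534:65534 \
  --cap-drop ALL \
  --memory 64m \
  --pids-limit 64 \
  -p 8080:8080 \
  v-playground

# With gVisor installed, adding --runtime=runsc gives another layer of isolation.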

GReeTZ

Special thanks to the people of #ponydev for helping to discover and toy with this bug.

Timeline

All times are Eastern Standard Time.

June 23, 2019

  • 4:56 PM - The first exploit was found and the contents of /etc/passwd were dumped, other variants of this attack were proposed and tested in the meantime
  • 5:00 PM - The V playground server stopped replying to HTTP and ICMP messages
  • 6:26 PM - The V creator was notified of this issue
  • 7:02 PM - The V creator acknowledged the issue and admitted the machine was compromised

June 24, 2019

  • 12:00 AM - This security bulletin was released


V is for Vaporware

Permalink - Posted on 2019-06-23 00:00, modified on 0001-01-01 00:00

V is for Vaporware

V is a programming language that has been hyped a lot. As it recently had its first alpha release, I figured it would be a good idea to step through it and see if it lives up to the promises the author has been making for months.

The V website claims the following on the front page:

  • The compiler compiles 1.2 million lines of code per CPU core per second
  • The resulting code is as fast as C
  • Built-in serialization without runtime reflection
  • Minimal amount of allocations
  • Zero dependencies
  • Requires only 0.4 MB of space to build
  • Able to translate arbitrary C/C++ code to V and build it faster than C/C++
  • Hot code reloading
  • 2d/3d graphics support in the standard library
  • Effortless cross-compilation
  • A powerful built-in web framework
  • The compiler generates direct machine code

As far as I can tell, all of the above features are either “work-in-progress” or completely absent from the source repository.

Speed

The author mentions that the compiler is fast, stating the following:

Fast compilation

V compiles ≈1.2 million lines of code per second per CPU core. (Intel i5-7500 @ 3.40GHz, SM0256L SSD, no optimization)

Such speed is achieved by direct machine code generation [wip] and a strong modularity.

V can also emit C, then the compilation speed drops to ≈100k lines/second/CPU.

Direct machine code generation is at a very early stage. Right now only x64/Mach-O is supported. This means that for now emitting C has to be used. By the end of this year x64 generation should be stable enough.

These are some pretty fantastic claims. Let’s see if they can be replicated. Creating a 1.2 million line source file should be pretty easy:

-- lua
print "fn main() {"

for i = 0, 1200000, 1
do
  print "println('hello, world ')"
end

print "}"

Then let’s run this script to generate the 1.2 million lines of code:

$ time lua5.3 ./gencode.lua > 1point2mil.v
        4.29 real         0.83 user         3.27 sys

And compile the resulting file:

$ time v 1point2mil.v
pass=2 fn=`main`
panic: 1point2mil.v:50003
more than 50 000 statements in function `main`
        2.43 real         2.13 user         0.15 sys

Oh boy. It’s also worth noting that it took more than 2 seconds to compile only 50,000 lines of code on my Core m7 12” MacBook.

No Dependencies

V claims to have zero dependencies. Again quoting from the website:

400 KB compiler with zero [wip] dependencies

The entire language and its standard library are less than 400 KB. V is written in V, and you can build it in 0.4 seconds.

(By the end of this year this number will drop to ≈0.15 seconds.)

Right now the V compiler does have one dependency: a C compiler. But it’s needed to bootstrap the language anyway, and if you are doing development, chances are you already have a C compiler installed.

It’s a small dependency, and it’s not going to be needed once x64 generation is mature enough.

AMD64 is not the only CPU architecture that exists, but okay, I’ll accept that you are only targeting the most common one.

Digging through the readme, its graphics library and HTTP support require some dependencies:

In order to build Tetris and anything else using the graphics module, you will need to install glfw and freetype.

If you plan to use the http package, you also need to install libcurl.

glfw and libcurl dependencies will be removed soon.

Ubuntu:
sudo apt install glfw libglfw3-dev libfreetype6-dev libcurl3-dev

macOS:
brew install glfw freetype curl

I’m sorry, but this combined with the explicit dependency on a C compiler means that V has dependencies. Reading the claim as literally as possible, it says the compiler itself has zero dependencies. Let’s see what ldd says about the compiler when built on Linux:

$ ldd v
        linux-vdso.so.1 (0x00007ffc0f02e000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f356c6cc000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f356c2db000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f356cb25000)

So the compiler with “zero dependencies” is a dynamically linked binary with dependencies on libpthread and libc (the other two are glibc-specific).

Also of note, I had to modify the Makefile in order to get it to build on Linux without segfaulting every time it tried to compile code:

$ git diff
diff --git a/compiler/Makefile b/compiler/Makefile
index e29d30d..353824d 100644
--- a/compiler/Makefile
+++ b/compiler/Makefile
@@ -4,7 +4,7 @@ v: vc
        ./vc -o v .

 vc: v.c
-       cc -std=c11 -w -o vc v.c
+       clang -Dlinux -std=c11 -w -o vc v.c

 v.c:
        wget https://vlang.io/v.c

Otherwise it would segfault every time I tried to run it with:

$ ./v --help
fish: “./v --help” terminated by signal SIGSEGV (Address boundary error)

Before I added the -Dlinux flag, it also failed to compile with the following error:

$ make
clang -std=c11 -w -o vc v.c
./vc -o v .
cc: error: unrecognized command line option ‘-mmacosx-version-min=10.7’
V panic: clang error
Makefile:4: recipe for target 'v' failed
make: *** [v] Error 1

This implies that the compiler was falsely detecting Linux as macOS.

Memory Safety

V claims to be memory-safe:

Memory management

There’s no garbage collection or reference counting. V cleans up what it can during compilation.

So I made a simple “hello world” program:

fn main() {
  println('hello world!') // V only supports single quoted strings
}

and built it on my Linux box with valgrind installed. Surely a “hello world” program has no good reason to leak memory, right?

$ time v hello.v
0.02user 0.00system 0:00.32elapsed 9%CPU (0avgtext+0avgdata 6196maxresident)k
0inputs+104outputs (0major+1162minor)pagefaults 0swaps

$ valgrind ./hello
==5860== Memcheck, a memory error detector
==5860== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==5860== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==5860== Command: ./hello
==5860==
hello, world
==5860==
==5860== HEAP SUMMARY:
==5860==     in use at exit: 1,000 bytes in 1 blocks
==5860==   total heap usage: 2 allocs, 1 frees, 2,024 bytes allocated
==5860==
==5860== LEAK SUMMARY:
==5860==    definitely lost: 0 bytes in 0 blocks
==5860==    indirectly lost: 0 bytes in 0 blocks
==5860==      possibly lost: 0 bytes in 0 blocks
==5860==    still reachable: 1,000 bytes in 1 blocks
==5860==         suppressed: 0 bytes in 0 blocks
==5860== Rerun with --leak-check=full to see details of leaked memory
==5860==
==5860== For counts of detected and suppressed errors, rerun with: -v
==5860== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Looking at the generated C code, the memory leak is plain to see: init_consts creates a 1000-byte allocation and never frees it. This leak is unavoidable in any program compiled with V. It is also potentially confusing for people trying to debug memory leaks in their own V code: they will always be off by 1 allocation and 1,000 bytes leaked, with no easy way to tell why. The compiler itself also leaks memory:

$ valgrind v hello.v
==9096== Memcheck, a memory error detector
==9096== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==9096== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==9096== Command: v hello.v
==9096==
==9096==
==9096== HEAP SUMMARY:
==9096==     in use at exit: 3,861,785 bytes in 24,843 blocks
==9096==   total heap usage: 25,588 allocs, 745 frees, 4,286,917 bytes allocated
==9096==
==9096== LEAK SUMMARY:
==9096==    definitely lost: 778,354 bytes in 18,773 blocks
==9096==    indirectly lost: 3,077,104 bytes in 6,020 blocks
==9096==      possibly lost: 0 bytes in 0 blocks
==9096==    still reachable: 6,327 bytes in 50 blocks
==9096==         suppressed: 0 bytes in 0 blocks
==9096== Rerun with --leak-check=full to see details of leaked memory
==9096==
==9096== For counts of detected and suppressed errors, rerun with: -v
==9096== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Space Required to Build

V also claims to only require 400-ish kilobytes of disk space to build itself. Let’s test this claim with a minimal Dockerfile:

FROM xena/alpine

RUN apk --no-cache add build-base libexecinfo-dev clang git \
 && git clone https://github.com/vlang/v /root/code/v \
 && cd /root/code/v/compiler \
 && wget https://vlang.io/v.c \
 && clang -Dlinux -std=c11 -w -o vc v.c \
 && ./vc -o v . \
 && du -sh /root/code/v /root/.vlang0.0.12 \
 && apk del clang

Except it doesn’t build on Alpine:

/usr/bin/ld: /tmp/v-c9fb07.o: in function `os__print_backtrace':
v.c:(.text+0x84d9): undefined reference to `backtrace'
/usr/bin/ld: v.c:(.text+0x8514): undefined reference to `backtrace_symbols_fd'
clang-8: error: linker command failed with exit code 1 (use -v to see invocation)

It looks like backtrace() is a glibc-specific addon. Let’s link against libexecinfo to fix this:

 && clang -Dlinux -lexecinfo -std=c11 -w -o vc v.c \
Cloning into '/root/code/v'...
Connecting to vlang.io (3.91.188.13:443)
v.c                  100% |********************************|  310k  0:00:00 ETA
Segmentation fault (core dumped)

Annoying, but we can adjust to Ubuntu fairly easily:

FROM ubuntu:latest

RUN apt update \
 && apt -y install wget build-essential clang git \
 && git clone https://github.com/vlang/v /root/code/v \
 && cd /root/code/v/compiler \
 && wget https://vlang.io/v.c \
 && clang -Dlinux -std=c11 -w -o vc v.c \
 && ./vc -o v . \
 && du -sh /root/code/v /root/.vlang0.0.12 \
 && apt -y remove clang

As of the time of writing this article, the image ubuntu:latest has an uncompressed size of 64.2 MB. If the V compiler only requires 400 KB to build like it claims, the resulting image for this Dockerfile should be around 65 MB at worst, and the du command should show about 400 KB in total, right?

3.4M    /root/code/v
304K    /root/.vlang0.0.12

3.7 MB. That means the 400 KB claim is either a lie or “work-in-progress”. Coincidentally, the compiler uses about as much disk space as it leaks during the compilation of “Hello, world”.

HTTP Module

V has a http module. It leaves a lot to be desired. My favorite part is the implementation of download_file on macOS:

fn download_file(url, out string) {
	// println('\nDOWNLOAD FILE $out url=$url')
	// -L follow redirects
	// println('curl -L -o "$out" "$url"')
	os.system2('curl -s -L -o "$out" "$url"')
	// res := os.system('curl -s -L -o "$out" "$url"')
	// println(res)
}

This has no error checking (the function os.system2 returns the exit code of curl) and it shells out to curl instead of using libcurl. Other parts of the http module use libcurl correctly (though the HTTP status code, headers and other important metadata are not returned). There is also no support for overriding the HTTP transport, setting a custom TLS configuration or many other basic features that libcurl provides for free.

I wasn’t expecting it to have HTTP support out of the box, but even then I still feel disappointed.

Suggestions for Improvement

I would like to see V be a tool for productive development. I can’t see it doing that in the near future though. I would like to suggest the following to the V developer in order for them to be able to improve in the future:

Firstly, do not make claims about disk space, speed or dependencies without explaining what you mean by that in detail.

Do not shell out to arbitrary commands in the standard library for any reason. If an attacker can somehow run code on a server with a V binary that uses the download_file function, they can replace curl with a malicious binary that is able to do anything the attacker wants. This feels like a huge vulnerability, especially given that the playground allows you to run this function.

AMD64 is not the only processor architecture that exists. It’s nice that you’re supporting it, but this means that any program compiled with V will be stuck on that architecture. This also means that V cannot currently be used for systems programming like building a system-level package manager.

Do not leak memory in “Hello world”. You could fix the 1,000-byte leak by emitting the following C code and calling it after the user-written main() function:

void destroy_consts() { free(g_str_buf); }

If you claim your compiler can support 1.2 million lines of code, do not make it have a limit of 50,000 statements in one function. Yes it is somewhat crazy to have 1.2 million statements in a single function, but as a compiler author it’s generally not your position to make these kinds of judgments. If the user wants to have 1.2 million statements in a function, let them.

Do not give code examples for libraries that you have not released. This means don’t show anything about the “built-in web framework” until you have code to back your claim. If there is no code to back it up, you have backed yourself into a corner where you are looking like you are lying. I would have loved to benchmark V’s web framework against Nim’s Jester and Go’s net/http, but I can’t.

Thanks for reading this far. I hope this feedback can help make V a productive tool for programming. It’s a shame that it has been hyped so much for comparatively little result. The developer has been hyping and selling this language like it’s the new sliced bread. It is not. This is a very alpha product. I bet you could use it for productive development as-is if you really stuck your head into it, but as it stands I recommend against using it for anything.


Untitled

Permalink - Posted on 2019-06-20 00:00, modified on 0001-01-01 00:00

Untitled

I walked down the forest path, my tail dragging behind me across the ground. I felt the patterns of the neatly yet somewhat randomly placed rocks beneath me as I stepped. There was a noise in the distance by this massive willow tree, it sounded like someone crying. I walked around the tree to where they were.

“Excuse me ma’am, why are you crying?”

She looked up at me, her brown hair filled with gray as she moved it away. Her eyes were red from the crying, with massive black bags under them like she hadn’t slept in a month. She looked right past my eyes, her eyes like daggers gripping the attention of my soul.

She sniffled for a few moments and then replied: “Oh…nothing. That is nothing you can do anything about, my child.”

“Are you sure there’s nothing I can do?”

“Yes my child. I’m crying because your species is killing itself. You all have let your poisons toxify the air. You have let your oceans become filled with the waste of your products. You have been killing my children, and the death toll is catching up.”

She turned back towards her tree and continued to weep, her tears feeding into a creek that led towards black smoke billowing out of a chimney. I approached her and started to hug her. “Ma’am, what is your name?”

“You don’t recognize me child? I am your mother, Gaia. You live on my planet, breathing my air, drinking my water. You children are meant to be in harmony with me and each other; but so many have chosen the path of hate.” She started to weep again, her head resting on my shoulder. “Why has this happened? Why are you killing yourselves? Why can’t you see what your actions are doing to all of us?”

“I see it, mother. I just don’t really know what to do about it.”

She looked back up at me, her eyes glowing slightly golden as she continued to cry. “I don’t know if there is anything that can be done, you’ve been broiling our cetaceans to death. Your factories dump poisons into our rivers and air. Your plastic is everywhere, even the parts of it the eye can’t see. You have created fantastic wonders, but you’ve changed us in the process. I don’t know.”

I reached over with my tail, hugging with that too. She settled back into me and continued to cry.

“Thank you for at least reaching out to care, my child. So many of you exist but so few have gone this far. I don’t know how much longer I can continue to support your numbers. You have grown as a dominant force on the planet, but you have destroyed so much in your growth.”

The world started to bend and snap with my tears. I grabbed into Gaia for dear life and continued to hug. “I won’t forget this, mother.”

“I know you won’t, my child. Tell others what I have told you. Do not let this message go unheard, even if it’s only to one person.”

The world started to fade and I felt my bed beneath my back get more and more present as I sat there.

I woke up.


Advice to People Nurturing a Career in Computering

Permalink - Posted on 2019-06-18 00:00, modified on 0001-01-01 00:00

Advice to People Nurturing a Career in Computering

Computering, or making computers do things in exchange for money, can be a surprisingly hard field to break into as an outsider. There’s lots of jargon, tool holy wars, flamewars about the “right” way to do things and a whole host of overhead that can make it feel difficult or impossible to start from scratch. I’m a college dropout; I know what it’s like to be turned down over and over because of the lack of that blessed square paper. In this post I hope to give some general advice based on what has and hasn’t worked for me over the years.

Hopefully this can help you too.

Make a Portfolio Site

When you are breaking into the industry, there is a huge initial “brand” issue. You’re nobody. This is both a very good thing and a very bad thing. It’s a very good thing because you have a clean slate to start from. It’s also a very bad thing because you have nothing to refer to yourself with.

Part of establishing a brand for yourself in this day and age is to make a website (like the one you are probably reading this off of right now). This website can be powered by anything. GitHub Pages with the github.io domain works, but it’s probably a better idea to make your website backend from scratch. Your website should include at least the following things:

  • Your name
  • A few buzzwords relating to the kind of thing you’d like to do with computers (example: I have myself listed as a “Backend Services and Devops Specialist” which sounds really impressive yet doesn’t really mean much of anything)
  • Tools or soft skills you are experienced with
  • Links to yourself on other social media platforms (GitHub, Twitter, LinkedIn, etc.)
  • Links to or words about projects of yours that you are proud of
  • Some contact information (an email address is a good idea too)

If you feel comfortable doing so, I’d also suggest putting your resume on this site too. Even if it’s just got your foodservice jobs or education history (including your high school diploma if need be).

This website can then be used as a landing page for other things in the future too. It’s your space on the internet. You get to decide what’s up there or not.

Make a Tech Blog On That Site

This has been the single biggest thing to help me grow professionally. I regularly put articles on my blog, sometimes not even about technology topics. Even if you are writing about your take on something people have already written about, it’s still good practice. Your early posts are going to be rough. It’s normal to not be an expert when starting out in a new skill.

This helps you stand out in the interview process. I’ve actually managed to skip interviews with companies purely because of the contents of my blog. One of them had the interviewer almost word for word say the following:

I’ve read your blog, you don’t need to prove technical understanding to me.

It was one of the most awestruck feelings I’ve ever had in the hiring process.

Find People to Mentor You

Starting out, you are not going to be very skilled in anything. One good way to help yourself get good at things is to go out into communities and ask for help understanding them. As you get involved in communities, you will naturally end up finding people who give a lot of advice. Don’t be afraid to ask those people for more details.

Get involved in niche communities (like unpopular Linux distros) and help them out, even if it’s just doing spellcheck over the documentation. This kind of stuff really makes you stand out and people will remember it.

Formal mentorship is a very hard thing to define. It’s probably better to surround yourself with experts in various niche topics than to look for that one magic mentor. Mentorship can be a very time consuming thing on the expert’s side. Be thankful for what you can get, and try to give back by helping other people too.

Seriously though, don’t be afraid to email or DM people for more information about topics that don’t make sense in group chats. I have found that people really appreciate that kind of stuff, even if they don’t immediately have the time to respond in detail.

Do Stuff with Computers, Post the Results Somewhere

Repository hosting sites like GitHub and Gitlab allow you to show potential employers exactly what you can do by example. Put your code up on them, even if you think it’s “bad” or the solution could have been implemented better by someone more technically skilled. The best way to get experience in this industry is by doing. The best way to do things is to just do them and then let other people see the results.

Your first programs will be inelegant, but that’s okay.
Your first repositories will be bloated or inefficient, but that’s okay.
Nobody expects perfection out of the gate, and honestly even for skilled experts perfection is probably too high of a bar. We’re human. We make mistakes. Our job is to turn the results of these mistakes into the products and services that people rely on.

You Don’t Need 100% Of The Job Requirements

Many companies put job requirements as soft guidelines, not hard ones. It’s easy to see requirements for jobs like this:

Applicants must have:

  • 1 year managing a distributed Flopnax system
  • Experience using Rilkef across multiple regions
  • Ropjar, HTML/CSS

and feel really disheartened. That “must” is seldom actually a hard requirement. Many companies will be willing to hire someone at a junior level. You can learn the skills you are missing as a natural part of doing your job. There are support structures at nearly every company for things like this. You don’t need to be perfect out of the gate.

Interviews

This one is a bit of a weird one to give advice for. Each company ends up having their own interviewing style, and even then individual interviewers have their own views on how to do it. My advice here is trying to be as generic as possible.

Know the Things You Have Listed on Your Resume

If you say you know how to use a language, brush up on that language. If you say you know how to use a tool, be able to explain what that tool does and why people should care about it.

Don’t misrepresent your skills on your resume either. It’s similar to lying. It’s also a good idea to go back and prune out skills you don’t feel as fresh with over time.

Be Yourself

It’s tempting to put on a persona or try to present yourself as larger than life. Resist this temptation. They want to see you, not a caricature of yourself. It’s scary to do interviews at times. It feels like you are being judged. It’s not personal. Everything in interviews is aimed at making the best decision for the company.

Also, don’t be afraid to say you don’t know things. You don’t need to have API documentation memorized. They aren’t looking for that. API documentation will be available to you while you write code at your job. Interviews are usually there to help the interviewer verify that you know how to break larger problems into more understandable chunks. Ask questions. Ensure you understand what they are and are not asking you. Nearly every interview that I’ve had that’s resulted in a job offer has had me ask questions about what they are asking.

“Do You Have Any Questions?”

A few things I’ve found work really well for this:

  • “Do you know of anyone who left this company and then came back?”
  • “What is your favorite part of your workday?”
  • “What is your least favorite part of your workday?”
  • “Do postmortems have formal blame as a part of the process?”
  • “Does code get reviewed before it ships into production?”
  • “Are there any employee run interest groups for things like mindfulness?”

And then finally as your last question:

  • “What are the next steps?”

This question in particular tends to signal interest to the person interviewing you. I don’t completely understand why, but it seems to be one of the most useful questions to ask, especially in initial interviews with hiring managers or human resources.

Meditate Before Interviews

Even if it’s just watching your breath for 5 minutes. I find that doing this helps reset the mind and reduces subjective experiences of anxiety.

Persistence

Getting the first few real jobs is tough, but after you get a year or two at any employer things get a lot easier. Your first job is going to give you a lot of experience. You are going to learn about things you didn’t even know it was possible to learn about. People, processes and the like are going to surprise or shock you.

At the end of the day though, it’s just a job. It’s impermanent. You might not fit in. You might have to find another. Don’t panic about it, even though it’s really, really tempting to. You can always find another job.


I hope this is able to help. Thanks for reading this and be well.


MrBeast is Postmodern Gold

Permalink - Posted on 2019-06-05 00:00, modified on 0001-01-01 00:00

Author’s note: I’ve been going through a lot lately. This Monday I was in the emergency room after having a panic attack. I have a folder of writing in my notes that I use to help work off steam. I don’t know why, but writing this article really helped me feel better. I can only hope it helps make your day feel better too.

MrBeast is Postmodern Gold

The year is 2019. Politicians have fallen asleep at the wheel. Capitalism controls large segments of the hearts and minds of the populace. Social class is increasingly only a construct. Popularity is becoming irrelevant. Money has no value. The ultimate expendability of entire groups of people is as obvious as the sunrise and sunset. Nothing feels real. There’s no real reason for people to get up and continue, yet life goes on. Somehow, even after a decade of aid and memes, children in Africa are still starving.

The next generation has grown up with technology and advertising. Entire swaths of the market know to ignore the very advertising that keeps the de-facto utilities (though the creators of those services will insist that it’s a free choice to use them) they use to communicate with friends alive. You have to unplug your cigarette (that your friend got you hooked on) to charge your book. Marketing has driven postmodernism to a whole new level that leads McDonald’s to ask Wendy’s if they are okay after Wendy’s posts cryptic/confusing messages. Companies that just want to do business get blocked away by racist policies set by people who have all but died since. What can be done about this? Who should we turn to for quality entertainment to help quench this generational angst against a nameless, faceless machine that controls nearly all of functional civilization?

Enter MrBeast. This YouTuber has reached new levels of content purely by making capitalism itself the content. With his crew of people and their peculiar views on life, they do a good job at making some quality content for this hyper-capitalist world that they have found themselves in.

One of the main ways that YouTube creators have been under fire lately is because of politically or otherwise topically charged content. MrBeast is completely devoid of anything close to politically sensitive or insensitive. It’s literally content about money and how it gets spent on things that get filmed and posted to YouTube in an effort to create more AdSense revenue in order to get even more money.

I don’t really know if there is a proper way to categorize this YouTuber. He really brings a unique feeling into everything he does with such a wholesome overall experience. Sponsorship money gets donated to Twitch streamers and he makes videos of their reactions. He bought a house and had his friends put their hands on it, with the last one still touching it winning the house. He went to every single Wal-Mart in the continental United States. He drove a Lego car around his local town until he got pulled over by the cops. And yes, like the YouTuber legend goes, he started many years ago doing Minecraft Let’s Plays as a screechy-voiced teenager.

Gluttony

Consider videos like this one where they spend an absurd amount of money eating five-star food. “This first steak is called ‘Kobe (pronounced /ko.bi/) beef’ and we wanted to experience it because it cost [USD]$1000 and we wanted to see if it was worth the price.” Then they eat the steak and act like it’s no big deal, joking that each section of the meat is worth $30-40. “Alright bros, I’m PewDiePie and we just ate kobe (pronounced /ko.bei/) beef.”

Then they go to another place (which has walls that are obviously plywood spray-painted black) and he offers one of his friends $100 to eat some random grasshopper. Chris eats it almost immediately. Everyone else in the room freaks out a little, commenting on the crunch sound. “That’s pretty good”. Garrett turns it down. Chandler also eats it without much hesitation, later commenting on the crunch of the chitin shell of the bug.

Then MrBeast offers a plate of crickets and grasshoppers to the three, this time for $1000. Chris sounds like he’s open to eating it, but offers the rest a chance. Garrett IMMEDIATELY turns it down. Chandler eats all of them at once. He has some issues chewing them (again with the crunch eeeeugh), but Chandler easily eats it all; instantly becoming a thousand dollars richer.

The room gags and laughs, the friendship between the boys $1200 stronger.

Then they go get goose liver served on rice and a hundred year old egg. Uh-oh, both of these are delicacies. How will they react?

The goose liver comes out first. MrBeast eats the hors d’œuvre in one bite. Chris has some trouble, but manages to take it down. Chandler is heaving. His friends cheer him on with loving words of compassion like “you don’t like liver?”

What.

The “century egg” comes out. They make the mistake of smelling it. Oh no. MrBeast eats it just fine. Chandler spits a $500 item of food into the trash after gagging. Chris ejects it into his napkin while MrBeast chants his name. Chris gags while his friends act like they are congratulating him. “It’s like someone hocked a loogie into your mouth.”

Before you ask, no, this isn’t an initiation stunt. They literally do this kind of stuff on a regular basis. Remember that money is the content here; so the fact that all of this stuff costs ridiculous amounts of money is the main reason for these videos to be created.

Later in the video, they drive to New York to eat gold-plated tomahawk steak. I’ve actually had tomahawk steak once and it was really good (thanks Uncle Marc). Where else to eat a golden steak than the golden steak?

“This is the most expensive restaurant we can find. If I don’t spend $10,000 all of you can punch me; because we will spend $10,000. What’s that name?”

Nobody can pronounce “Nurs-et”, the name of the restaurant. “None of us knew how to pronounce it, so it must be good.”

What.

It was good though.

Foolishness

In another video of his, he gets his friends to spend 24 hours in a horrific mockup of an “insane asylum”. For a first in these challenges, they split into two teams: Team Red and Team Black. Four of his crew are put into straitjackets with no other instructions.

They start predictably acting like a stereotypical American view of insane people. Twitching as they talk to the camera. Rolling around on the floor. “What is time?” Chandler is banging his head against the wall.

MrBeast: “Chris, how long do you think you’re gonna last?”
Chris: “Banana sundae.”

“Insanity is repeating the same thing over and over again and expecting a different outcome.”

Much like Survivor, there are cutaways to the individual teams as they plan out their high-level strategy for the “game”. What. There is no strategy needed, they just need to sit in a room and be quiet for 24 hours. Reminds me of that one quote by Blaise Pascal in Pensées:

All of humanity’s problems stem from man’s inability to sit quietly in a room alone.

And no, these people can’t sit quietly in a room. You see them dancing back and forth in a line in front of the camera. They get locked into the room and the time-lapse shows 10 minutes of them walking around in circles.

The door gets yelled at. MrBeast notes the absurdity of the thing. The bright, unforgiving white walls of the asylum pierce the darkness of my room as I write this article.

“Help. Me. I. Need…I don’t need anything~”
“Y’all got any beans? Y’all got any baked beans?”

  • Chris

They raise someone on Chandler’s shoulders, not a small accomplishment considering they don’t have access to their arms. Someone speaks into the security camera: “Hello? I’m about to fall please go back down.”

MrBeast attempts to go into the room, do snow angels, and not say a single thing. The occupants have other plans, yelling when the door opens to alert each other. They crowd around MrBeast, making it impossible for him to do his chosen task. They pin MrBeast to a corner and he tries to escape, but then there’s a problem. The people won’t let him leave. He manages to get out.

Later MrBeast gets an idea to mess with the people. He gets a megaphone and puts it into siren mode, expecting them to not be able to turn it off. He is proven wrong almost instantly. They used their feet to turn it off. Then they start making noise with it. The megaphone is retrieved using the most heinous of weapons, an umbrella. A layer of duct tape is added and the experiment is repeated. They still manage to turn it off. They used their teeth. Low-light conditions didn’t stop them. Not having their hands didn’t stop them. Can anything stop these mad lads?

They attempt to retrieve the sound emitter again. The prisoners break it in retaliation. MrBeast seems okay with that, yet disappointed. However, he suffers a casualty on his way out. MrBeast attempted to push back Chandler using the holy umbrella. Chandler took the umbrella from him with nothing but his tied-up arms.

What.

What is this video about again? What is the purpose? These people are getting money or something for being the last person standing? What is going on?

Oh, right, this is a challenge. The last two people to be in the room together win some amount of money.

Well the people are screaming for entertainment. That’s not unexpected, but that’s just how it goes I guess. Quality. Content.

Let’s have a dance party and then Chandler can poop. Rate who dances better in the comments section.



- MrBeast, 10:22-ish

What.

8 hours in, Chandler somehow dislocated his entire right arm. You can see it hanging there obviously out of place. It looks like he’s in massive pain. He tore a muscle. He was pulled out of the challenge. Another challenge lost by Chandler.

Chris drops out at 14 hours. The two winners are unsure what to do with themselves and their winnings. What are they again? Five grand? Chandler tore his shoulder out of socket and Chris risked ear damage for…FIVE GRAND?

What. Just what.

The entire channel is full of this stuff. I could go on for hours.


Also MrBeast if you’re reading this add me on Fortnite. I’d love to play some Duos with you and shitpost about the price of bananas.


WebAssembly on the Server: How System Calls Work

Permalink - Posted on 2019-05-31 00:00, modified on 0001-01-01 00:00

WebAssembly on the Server: How System Calls Work

Video

My Speaker Notes

  • Hi, my name is Christine. I work as a senior SRE for Lightspeed. Today I’m gonna talk about something I’ve been researching and learning a lot about: WebAssembly on the server.
  • Something a lot of you might be asking: what is WebAssembly?
    • WebAssembly is very new and there’s a lot of confusing and overly vague coverage on it.
    • In this talk, I will explain WebAssembly at a high level and show how to start solving one of the hardest problems in it: how to communicate with the outside world.
    • When I say the “outside world” I mean anything that is not literally one of these 5 basic things:
      • Externally imported functions, defined by the user
      • The dynamic dispatch function table
      • Global variables
      • Linear memory, or basically ram
      • Compiled functions, or your code that runs in the virtual machine
  • WebAssembly is a Virtual Machine format for the Web
    • The closest analogue to WASM in its current form is a CPU and supporting hardware
    • However, because it’s a virtual machine, the hardware is irrelevant
    • Though it was intended for browsers, the implementation of it is really generic.
    • WebAssembly provides:
      • External functions
      • A function table for dynamic dispatch
      • Immutable globals (as of the MVP)
      • Linear memory
      • Compiled functions (these exist outside of linear memory like an AVR chip)
  • Why WebAssembly on the Server?
    • It makes hardware less relevant.
    • Most of our industry targets a single vendor in basic configurations: Intel amd64 processors running Linux
      • Intel has had many security bugs and it may not be a good idea to fundamentally design our architecture to rely on them.
    • This also removes the OS from the equation for most compute tasks.
  • What are system calls and why do they matter?
  • System calls enforce abstractions to the outside world.
    • Your code goes through system calls to reach things from the outside world, eg:
      • Randomness
      • Network sockets
      • The filesystem
      • Etc
  • How are they implemented?
    • The platform your program runs on exposes those system calls
    • Programs pass pointers into linear memory (this will be shown later in the slides)
  • Why is this relevant to WebAssembly?
    • The WebAssembly Minimum Viable Product doesn’t define any system calls
  • WebAssembly System Calls Out of The Box
    • Yeah, nothing. You’re on your own. This is both very good and very very bad.
  • So what’s a pointer in WebAssembly?
    • Simplified, a WebAssembly virtual machine is some structure that has a reference to a byte slice. That byte slice is treated as the linear memory of that VM.
    • A pointer is just an offset into this slice (see the sketch after these notes)
    • Showing the WebAssembly world diagram from earlier: pointers apply to only this part of it. Function pointers do exist in WebAssembly, just via the dynamic dispatch table from earlier.
  • So what can we do about it?
  • Let’s introduce a pet project of mine for a few years. It’s called Dagger, and it has been a fantastic stepping stone while other solutions are being invented.
    • Dagger is a proof of concept system call API that I’ll be walking through the high level implementation of
    • It’s got a very simple implementation (500-ish lines)
    • It’s intended for teaching and learning about the low levels of WebAssembly.
    • It’s based on a very very very simplistic view of the unix philosophy. In unix, everything is a file. With Dagger, everything is a stream, even HTTP.
    • As such, there’s no magic in Dagger.
    • And even though it’s so simple, it’s still usable for more than just basic/trivial things.
    • A dagger process has a bunch of streams in a slice.
    • The API gives out and uses stream descriptors, or offsets into this slice.
  • Dagger’s API is really really simple, it’s only got 5 calls:
    • Opening a stream
    • Closing a stream
    • Reading from a stream
    • Writing to a stream
    • Flushing intermediately buffered data from a stream to its remote (or local) target
  • Open
    • Open opens a stream by URL, then returns its descriptor. It can also return an error instead.
    • It’s got 5 basic stream types:
      • Logging
      • Jailed filesystem access
      • HTTP/S
        • 5 system calls is all you need for HTTP!
      • Randomness
      • Standard input/output
    • Let’s walk through the code that implements it
      • Here’s a simplified view of the open function in a Dagger process.
      • The system call arguments are here
      • And the stream URL gets read from the VM memory here
      • Remember that pointers are just integer offsets into memory
      • Then this gets passed to the rest of the open file logic that isn’t shown here
  • Close
    • Closes a stream by its descriptor.
    • It returns a negative error if anything goes wrong, which is unlikely.
    • Let’s walk through its code:
      • It grabs the arguments from the VM
      • Then it passes that to the rest of the logic that isn’t shown here
  • Read
    • Reads a limited amount of bytes from a stream
    • Returns a negative error if things go wrong
    • Let’s walk through its code:
      • This is a bigger function, so I’ve broken it up into a few slides.
      • First it gets the arguments from the VM
      • Then it creates the intermediate buffer to copy things into from the stream
      • Then it does the reading into that buffer
      • Then it copies the buffer into the VM ram
  • Write
    • Write is very similar to read, except it just copies the ram out of the VM and into the stream
    • It returns the number of bytes written, which SHOULD equal the data length argument
    • Let’s walk through the code:
      • Again, this function is bigger, so I’ve broken it up into a few slides.
  • Flush
    • Flush does just about what you’d think, it flushes intermediate buffers to the actual stream targets.
    • This blocks until the flushing is complete
    • Mostly used for the HTTP client
    • Let’s walk through its code:
      • It gets the descriptor from the VM
      • It runs the flush operation and returns the result
  • So, with all this covered, let’s talk about usage. Here’s the famous “Hello, world” example (a sketch of it appears after these notes):
    • This is in Zig, mainly because Zig allows me to be really concise. Things work just about as you’d expect so it’s less of a logical jump than you’d think.
    • First we try to open the stream. Dagger doesn’t have any streams open in its environment by default, so we open standard output.
    • Then we try to write the message to the stream. The interface in Zig is a bit rough right now, but it takes the pointer to the message and how long the message is. Zig doesn’t let us implicitly ignore the return value of this function, so we just explicitly ignore it instead.
    • Finally we try to close the output stream.
    • The beauty of zig is that if any of these things we try to do fails, the entire function will fail.
    • However none of this fails so we can just run it with the dagger tool and get this output:
  • What this can build to
    • This basic idea can be used to build up to any of the following things:
      • A functions as a service backend (See Olin)
      • Generic event handlers
      • Distributed computing
      • Transactional computing
  • What you can do
    • Play with the code (link at the end)
    • Implement this API from scratch
      • It’s really not that hard
    • A possible project idea I was going to do but ran out of time (moving internationally sucks) is to make a Gopher server with every route powered by WebAssembly
  • Got questions?
    • Tweet or email me if you really want to make sure your questions get answered. That is one of the best ways to ensure I actually see it.
    • I’m happy to go into detail, I can pull out code examples too.
  • Thanks to all of these people who have given help, ideas and inspiration. Without them I would never have been able to get this far.
  • Follow my progress on GitHub!
    • I hope that QR code is big enough. If it’s not let me know and I can make things like that bigger in the future somehow, hopefully.
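
Since the slides themselves are not reproduced here, below is a minimal sketch of the two ideas above. First, “a pointer is just an offset into linear memory”, from the host’s point of view. This is illustrative Zig, not Dagger’s actual host code, and the names are mine:

// Resolve a guest "pointer" into the VM's linear memory.
// mem is the byte slice backing the VM; ptr and len are the integer
// arguments the guest passed to the system call.
fn readGuestBytes(mem: []const u8, ptr: u32, len: u32) ![]const u8 {
    // bounds-check before slicing; a hostile guest can pass anything
    if (ptr > mem.len or len > mem.len - ptr) return error.OutOfBounds;
    return mem[ptr .. ptr + len];
}

Second, a sketch of the “Hello, world” program walked through above. The extern signatures and the URL used for standard output are assumptions for illustration, not Dagger’s real ABI:

// The five-call Dagger API as hypothetical guest-side imports.
extern fn open(url_ptr: [*]const u8, url_len: usize) i64;
extern fn write(fd: i64, data_ptr: [*]const u8, data_len: usize) i64;
extern fn close(fd: i64) i64;

pub fn main() !void {
    // no streams are open by default, so open standard output first
    const url: []const u8 = "fd://stdout"; // hypothetical URL scheme
    const fd = open(url.ptr, url.len);
    if (fd < 0) return error.OpenFailed;

    const msg: []const u8 = "Hello, world!\n";
    // Zig won't let us implicitly drop the result, so ignore it explicitly
    _ = write(fd, msg.ptr, msg.len);

    if (close(fd) < 0) return error.CloseFailed;
}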


TempleOS: 2 - god, the Random Number Generator

Permalink - Posted on 2019-05-30 00:00, modified on 0001-01-01 00:00

TempleOS: 2 - god, the Random Number Generator

The last post covered a lot of the basic usage of TempleOS. This post is going to be significantly different, as I’m going to be porting part of the TempleOS kernel to WebAssembly as a live demo.

This post may contain words used in ways and places that look blasphemous at first glance. No blasphemy is intended, though it is an unfortunate requirement for covering this part of TempleOS’ kernel. It’s worth noting that Terry Davis legitimately believed that TempleOS is a temple of the Lord Yahweh:

* TempleOS is God's official temple.  Just like Solomon's temple, this is a 
community focal point where offerings are made and God's oracle is consulted.

As such, a lot of the “weird” naming conventions with core parts of this and other subsystems make a lot more sense when grounded in American conservative-leaning Evangelical Christian tradition. Evangelical Christians are, in my subjective experience, more comfortable or okay with the idea of direct conversation with God. To other denominations of Christianity, this is enough to get you sent to a mental institution. I am not focusing on the philosophical aspects of this, more on the result that exists in code.

Normally, people with Christian/Evangelical views see God as a trinity. This trinity is usually said to be made up of the following equally infinite parts:

  • God the Father (Yahweh/“God”)
  • God the Son (Jesus)
  • God the Holy Spirit (the entity responsible for divination among other things)

In TempleOS however, there are 4 of these parts:

  • God the Father
  • God the Son
  • God the Holy Spirit
  • god the random number generator

god is really simple at its heart; however this is one of the sad cases where the actual documentation is incredibly useless (warning: incoherent link). god’s really just a FIFO of entropy bits. Here is the [snipped] definition of god’s datatype:

// C:/Adam/God/GodExt.HC.Z
public class CGodGlbls
{
  U8      **words,
          *word_file_mask;
  I64     word_fuf_flags,
          num_words;
  CFifoU8 *fifo;
  // ... snipped
} god;

This is about equivalent to the following Zig code (I would just be embedding TempleOS directly in a webpage but I can’t figure out how to do that yet, please help if you can):

const Stack = @import("std").atomic.Stack;

// []const u8 is == to a string in zig
const God = struct {
    words: [][]const u8,
    bits: *Stack(u8),
};

Most of the fields in our snipped CGodGlbls are related to internals of TempleOS (specifically it uses a glob-mask to match filenames because of the transparent compression that RedSea offers), so we can ignore these in the Zig port. What’s curious though is the words list of strings. This actually points to every word in the King James Bible. The original intent of this code was to have the computer assist in divination. The above kind of ranting link to templeos.holyc.xyz tries to explain this:

The technique I use to consult the Holy Spirit is reading a microsecond-range 
stop-watch each button press for random numbers.  Then, I pick words with <F7> 
or passages with <SHIFT-F7>.

Since seeking the word of the Holy Spirit, I have come to know God much better 
than I've heard others explain.  For example, God said to me in an oracle that 
war was, "servicemen competing."  That sounds more like the immutable God of our 
planet than what you hear from most religious people.  God is not Venus (god of 
love) and not Mars (god of war), He's our dearly beloved God of Earth.  If 
Mammon is a false god of money, Mars or Venus might be useful words to describe 
other false gods.  I figure the greatest challenge for the Creator is boredom, 
ours and His.  What would teen-age male video games be like if war had never 
happened?  Christ said live by the sword, die by the sword, which is loving 
neighbor as self.

> Then said Jesus unto him, “Put up again thy sword into his place, for all 
> they that take the sword shall perish with the sword.
- MATTHEW 26:52

I asked God if the World was perfectly just.  God asked if I was calling Him 
lazy.  God could make A.I., right?  God could make bots as smart as Himself, or, 
in fact, part of Himself.  What if God made a bot to manipulate every person's 
life so that perfect justice happened?

Terry Davis legitimately believed that this code was being directly influenced by the Holy Spirit; and that therefore Terry could ask God questions and get responses by hammering F7. One of the sources of entropy for the random number generator is keyboard input, so in a way Terry was the voice of god through everything he wrote.

Terry: Is the World perfectly just?
god: Are you calling me lazy?

Once the system boots, god gets initialized with the contents of every word in the King James Bible. It loads the words something like this:

  1. Loop through the vocabulary list and count the number of words in it (by the number of word boundaries).
  2. Allocate an integer array big enough for all of the words.
  3. Loop through the vocabulary list again and add each of these words to the words array.

Since the vocabulary list is pretty safely not going to change at this point, we can omit the first step:

const words = @embedFile("./Vocab.DD");
const numWordsInFile = 7570;

var alloc = @import("std").heap.wasm_allocator;
const mem = @import("std").mem; // needed by splitWords below

const God = struct {
    words: [][]const u8,
    bits: *Stack(u8),

    fn init() !*God {
        var result: *God = undefined;

        result = try alloc.create(God);
        result.words = try splitWords(words[0..words.len], numWordsInFile);
        // allocate the stack on the heap; taking the address of a local
        // variable here would leave a dangling pointer once init() returns
        result.bits = try alloc.create(Stack(u8));
        result.bits.* = Stack(u8).init();

        return result;
    }
    
    // ... snipped ...
};

fn splitWords(data: []const u8, numWords: u32) ![][]const u8 {
    // make a bucket big enough for all of god's words
    var result: [][]const u8 = try alloc.alloc([]const u8, numWords);
    var ctr: usize = 0;

    // iterate over the wordlist (one word per line)
    var itr = mem.separate(data, "\n");
    var done = false;
    while (!done) {
        var val = itr.next();
        // val is of type ?[]const u8, so resolve that
        if (val) |str| {
            // for some reason the last line in the file is a zero-length string
            if (str.len == 0) {
                done = true;
                continue;
            }
            result[ctr] = str;
            ctr += 1;
        } else {
            done = true;
            break;
        }
    }

    return result;
}

Now that all of the words are loaded, let’s look more closely at how things are added to and removed from the stack/FIFO. Usage is intended to be simple. When you try to grab bytes from god and there aren’t any, it prompts:

public I64 GodBits(I64 num_bits,U8 *msg=NULL)
{//Return N bits. If low on entropy pop-up okay.
  U8 b;
  I64 res=0;
  while (num_bits) {
    if (FifoU8Rem(god.fifo,&b)) { // if we can remove a bit from the fifo
      res=res<<1+b;               // then add this bit to the result and left-shift by 1 bit
      num_bits--;                 // and care about one less bit
    } else {
      // or insert more bits from the picker
      GodBitsIns(GOD_GOOD_BITS,GodPick(msg));
    }
  }
  return res;
}

Usage is simple:

I64 bits;
bits = GodBits(64, "a demo for the blog");

the result as an i64

This is actually also a generic userspace function that applications can call. Here’s an example of god drawing tarot cards.

So let’s translate this to Zig:

// inside the `const God` definition:

    fn add_bits(self: *God, num_bits: i64, n: i64) void {
        var i: i64 = 0;
        var nn = n;
        // loop over each bit in n, up to num_bits
        while (i < num_bits) : (i += 1) {
            // create the new stack node (== to pushing to the fifo)
            var node = alloc.create(Stack(u8).Node) catch unreachable;
            node.* = Stack(u8).Node {
                .next = undefined,
                .data = @intCast(u8, nn & 1),
            };
            self.bits.push(node);
            nn = nn >> 1;
        }
    }

    fn get_word(self: *God) []const u8 {
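        // 14 bits gives 16384 possible values, enough to index the 7570-word list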
        const gotten = @mod(self.get_bits(14), numWordsInFile);
        const word = self.words[@intCast(usize, gotten)];
        return word;
    }

    fn get_bits(self: *God, num_bits: i64) i64 {
        var i: i64 = 0;
        var result: i64 = 0;
        while (i < num_bits) : (i += 1) {
            const n = self.bits.pop();

            // n is an optional (type: ?*Stack(u8).Node), so resolve it
            // TODO(Xe): automatically refill data if stack is empty
            if (n) |nn| {
                result = result + @intCast(i64, nn.data);
                result = result << 1;
            } else {
                break;
            }
        }

        return result;
    }

We don’t have the best sources of entropy for WebAssembly code, so let’s use Olin’s random_i32 function:

const olin = @import("./olin/olin.zig");
const Resource = olin.resource.Resource;

fn main() !void {
    var god = try God.init();
    // open standard output for writing
    const stdout = try Resource.stdout();
    const nl = "\n";
    
    god.add_bits(32, olin.random.int32());
    // I copypasted this a few times (16) in the original code
    // to ensure sufficient entropy
    
    const w = god.get_word();
    var ignored = try stdout.write(w.ptr, w.len);
    ignored = try stdout.write(&nl, nl.len);
}

And when we run this manually with cwa:

$ cwa -vm-stats god.wasm
uncultivated
2019/05/29 20:43:43 reading file time: 314.372µs
2019/05/29 20:43:43 vm init time:      10.728915ms
2019/05/29 20:43:43 vm gas limit:      4194304
2019/05/29 20:43:43 vm gas used:       2010576
2019/05/29 20:43:43 vm gas percentage: 47.93586730957031
2019/05/29 20:43:43 vm syscalls:       20
2019/05/29 20:43:43 execution time:    48.865856ms
2019/05/29 20:43:43 memory pages:      3

Yikes! Loading the wordlist is expensive (alternatively: my arbitrary gas limit is set way too low), so it’s a good thing it’s only done once and at boot. Still, TempleOS itself boots in only a few seconds anyway.

The final product is runnable via this link. Please note that this is not currently supported on big-endian CPUs in browsers because Mozilla and Google have totally dropped the ball in this court, and trying to load that link will probably crash your browser.

Hit Run in order to run the final code. You should get output that looks something like this after pressing it a few times:


Special thanks to the following people whose code, expertise and the like helped make this happen:


All There is is Now

Permalink - Posted on 2019-05-25 00:00, modified on 0001-01-01 00:00

All There is is Now

The dream scenario was going on for a while uneventfully. I saw an old man walking around and ranting about things. I decided to go and talk with him.

“You fools! Time doesn’t exist! The past is immutable! Don’t worry about your trivial daily needs. All there is is Now!”

I walked up and asked “Excuse me sir, what are you talking about? Of course the past exists, that’s how I knew you were talking about it.”

He looked at me and smiled. “Yeah, but what can you do about it? You can’t do anything but look back and worry. That Now happened and is no longer important.”

I was confused. “But what if I was hurt, seriously injured or killed?”

“You weren’t though! That’s the beauty of this. Stressing out about what has happened is just as unproductive as stressing about what might happen. The past is immutable, those Nows already happened. We can’t change them, we can only change what we do about it and that is done Now. Not yesterday, not tomorrow, not 3 seconds ago or 3 seconds in the future. Now.”

“But how?”

The man looked at me like I had lobsters crawling out of my ears. “You see, every Now is a link in an infinite chain. Break any one of the links in the past and everything after it falls. Each Now is linked to by the previous Now that happened and every next Now that will happen.”

“Are you saying time is a motherfucking blockchain???”

“Yep! No wonder you see tech people re-inventing it over and over without any real goal behind it. Blockchains are the structure of reality. Oh, fun, it looks like my time is about up here.”

At this point the world started to warp a little.

The old man continued, “I’ll stick around for as long as I can. Ask me anything while you have the chance, Creator.”

“Wait but why are you telling me this?”

“To help with your anxiety. Oops, time’s up; bye!”

The dream ended and I woke up on my bed.


Lifecycle

Permalink - Posted on 2019-05-23 00:00, modified on 0001-01-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.

This is a tron lightcycle because the team I was on at the time was named Lifecycle.


TempleOS: 1 - Installation

Permalink - Posted on 2019-05-20 00:00, modified on 0001-01-01 00:00

TempleOS: 1 - Installation

TempleOS is a public domain, open source (requires source code to boot) multitasking OS for amd64 processors without EFI support. It’s fully cooperatively multitasked and all code runs in Ring 0. This means that system calls that normally require a context switch are just normal function calls. All ram is identity-mapped too, so sharing memory between tasks is as easy as passing a pointer. There’s a locking intrinsic too. It has full documentation (with graphical diagrams) embedded directly in source code.

This is outsider art. The artist of this art, Terry A. Davis (1969-2018, RIP), had very poor mental health before he was struck by a train and died. I hope he is at peace.

However, in direct spite of this, I believe that TempleOS has immediately applicable lessons to teach about OS and compiler design. I want to use this blogpost series to break the genius down and separate it out from the insanity, bit by bit.

This is not intended to make fun of the mentally ill, disabled or otherwise incapacitated. This is not an endorsement of any of Davis’ political views. This is intended to glorify and preserve his life’s work that so few can currently really grasp the scope of.

If for some reason you are having issues downloading the TempleOS ISO, I have uploaded my copy of it here. Here is its SHA512 sum:

7a382d802039c58fb14aab7940ee2e4efb57d132d0cff58878c38111d065a235562b27767de4382e222208285f3edab172f29dba76cb70c37f116d9521e54c45  TOS_Distro.ISO

Choosing Hardware

TempleOS doesn’t have support for very much hardware. This OS mostly relies on hard-coded IRQ numbers, VGA 640x480 graphics, the fury of the PC speaker, and standard IBM PC hardware like PS/2 keyboards and mice. If you choose actual hardware to run this on, your options are sadly very limited because hard disk controllers like to spray their IRQs all over the place.

I have had the best luck with the following hardware:

  • Dell Inspiron 530 Core 2 Quad
  • 4 GB of DDR2 RAM
  • PS/2 Mouse
  • PS/2 Keyboard
  • 400 GB IDE HDD

Honestly you should probably run TempleOS in a VM because of how unstable it is when left alone for long periods of time.

VM Hypervisors

TempleOS works decently with VirtualBox and VMware; however, only VMware supports PC speaker emulation, which may or may not be essential to properly enjoying TempleOS in its true form. This blogpost series will be using VirtualBox for practicality reasons.

Setting Up the VM

TempleOS is a 64 bit OS, so pick the type Other and the version Other/Unknown (64-bit). Name your VM whatever you want:

TempleOS VM setup first page

Then press Continue.

TempleOS requires 512 MB of ram to boot, so let’s be safe and give it 2 gigs:

TempleOS VM setup, 2048 MB of ram allocated

Then press Continue.

It will ask if you want to create a new hard disk. You do, so click Create:

TempleOS VM setup, creating new hard disk

We want a VirtualBox virtual hard drive, so click Continue:

TempleOS VM setup, choosing hard disk format

Performance of the virtual hard disk is irrelevant for our usecases, so a dynamically expanding virtual hard disk is okay here. If you feel better choosing a fixed size allocation, that’s okay too. Click Continue:

TempleOS VM setup, choosing hard disk traits

The ISO this OS comes from is 20 MB. So the default hard disk size of 2 GB is way more than enough. Click Continue:

TempleOS VM setup, choosing hard disk size

Now the VM “hardware” is set up.

Installation

TempleOS actually includes an installer on the live CD. Power up your hardware and stick the CD into it, then click Start:

TempleOS installation, adding live cd to virtual machine

Within a few seconds, the VM compiles the compiler, kernel and userland and then dumps you to this screen, which should look conceptually familiar:

TempleOS installation, immediately after boot

We would like to install on the hard drive, so press y:

TempleOS installation, pressing y

We’re using VirtualBox, so press y again (if you aren’t, be prepared to enter the IRQs of your hard drive/s and CD drive/s):

TempleOS installation, pressing y again

Press any key and wait for the freeze to happen.

The installer will take over from here, copying the source code of the OS, Compiler and userland as well as compiling a bootstrap kernel:

TempleOS installation, self-piloted

After a few seconds, it will ask you if you want to reboot. You do, so press y one final time:

TempleOS installation, about to reboot into TempleOS

Make sure to remove the TempleOS live CD from your hardware or it will be booted instead of the new OS.

Usage

The TempleOS Bootloader presents a helpful menu to let you choose if you want to boot from a copy of the old boot record (preserved at install time), drive C or drive D. Press 1:

TempleOS boot, picking the partition

The first boot requires the dictionary to be uncompressed as well as other housekeeping chores, so let it do its thing:

TempleOS boot, chores

Once it is done, you will see the option to take the tour. I highly suggest going through this tour, but that is beyond the scope of this article, so we’ll assume you pressed n:

TempleOS boot, denying the tour

Using the Compiler

TempleOS boot, HolyC prompt

The “shell” is itself an interface to the HolyC (similar to C) compiler. There is no difference between a “shell” REPL and a HolyC REPL. This is stupidly powerful:

TempleOS hello world

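// In HolyC, a bare string literal at statement level is printed to the console: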
"Hello, world\n";

Let’s make this into a “program” and disassemble it. This is way easier than it sounds because TempleOS is a fully featured amd64 debugger as well.

Open a new file with Ed("HelloWorld.HC"); (the semicolon is important):

TempleOS opening a file

TempleOS editor screen

Now press Alt-Shift-a to kill autocomplete:

TempleOS sans autocomplete

Click the X in the upper right-hand corner to close the other shell window:

TempleOS sans other window

Finally, drag the right side of the window to maximize the editor pane:

TempleOS full screen editor

Let’s put the hello world example into the program and press F5 to run it:

TempleOS hello world in a file

Neat! Close that shell window that just popped up. Let’s put this hello world code into a function:

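// U0 is HolyC's zero-size void return type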
U0 HelloWorld() {
  "Hello, world!\n";
}

HelloWorld;

Now press F5 again:

TempleOS hello world from a function

Let’s disassemble it:

U0 HelloWorld() {
  "Hello, world!\n";
}

Uf("HelloWorld");

TempleOS hello world disassembled

The Uf function also works with anything else, including things like the editor:

Uf("Ed");

TempleOS editor disassembled

All of the red underscored things that look like links actually are links to the source code of functions. While the HolyC compiler builds things, it internally keeps a sourcemap (much like webapp sourcemaps or how gcc relates errors at runtime to lines of code for the developer) of all of the functions it compiles. Let’s look at the definition of Free():

TempleOS Free() function

And from here you can dig deeper into the kernel source code.

Next Steps

From here I suggest a few next steps:

  1. Go through the tour I told you to ignore. It teaches you a lot about the basics of using TempleOS.
  2. Figure out how to navigate the filesystem (Hint: Dir() and Cd work about as you’d expect).
  3. Start digging through documentation and system source code (Hint: they are one and the same).
  4. Look at the demos in C:/Demo. Future blogposts in this series will be breaking apart some of these.

I don’t really know if I can suggest watching archived Terry Davis videos on youtube. His mental health issues start becoming really apparent and intrusive into the content. However, if you do decide to watch them, I suggest watching them as sober as possible. There will be up to three coherent trains of thought at once. You will need to spend time detangling them, but there’s a bunch of gems on how to use TempleOS hidden in them there hills. Gems I hope to dig out for you in future blogposts.

Have fun and be well.


A Formal Grammar of h

Permalink - Posted on 2019-05-19 00:00, modified on 0001-01-01 00:00

A Formal Grammar of h

Introduction

h is a conlang project that I have been working off and on for years. It is infinitely simple to teach, trivial to master and can be used to represent the entire scope of all meaning in any facet of the word. All with a single character.

This is a continuation from this post. If this post makes sense to you, please let me know and/or schedule a psychologist appointment just to be safe.

Phonology

h has only one consonant phoneme, /h/. This is typically not used, as h is mostly a written language. Some people may pronounce it aych, which is equally valid and intelligible. The Lojbanic h ' is also acceptable.

Consonant Chart

Laryngeal
Non-sibilant fricative /h/

Grammar

h has only one valid word, “h”. It is used as follows:

<Cadey> h
<Dorito> h

This demonstrates a conversation between Cadey and Dorito about the implications of the bigger picture of software development and how current trends are risking a collapse of the human experiment.

As noted before, adding more “h” to a single sentence reduces the scope of meaning. Here is an example:

<Cadey> h
<DoesntGetIt> h h h h
* Cadey facepalms

Cadey opened with a treatise on the state of reality. DoesntGetIt decided it was a good idea to reply with a recipe for chocolate chip cookies. The conversation was lost in translation.

PEG Grammar

H = h+separator+
separator = space+h
space = ' '
h = 'h' / "'"
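
Since PEGs are executable, here is a quick validator sketched in Zig (the function name is mine, and this assumes the grammar’s intent of single-space-separated words; it uses the same era-appropriate mem.separate API seen elsewhere on this blog):

const std = @import("std");

// true if input is one or more words separated by single spaces,
// where every word is "h" or the Lojbanic "'".
fn isValidH(input: []const u8) bool {
    if (input.len == 0) return false;
    var itr = std.mem.separate(input, " ");
    while (itr.next()) |word| {
        if (!std.mem.eql(u8, word, "h") and !std.mem.eql(u8, word, "'")) return false;
    }
    return true;
}

isValidH("h h h h") accepts the chocolate chip cookie recipe from earlier; isValidH("hh") correctly rejects it as gibberish.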

And Jesus said unto the theologians, “Who do you say that I am?”.

They replied: “You are the eschatological manifestation of the ground of our being, the kerygma of which we find the ultimate meaning in our interpersonal relationships.”

And Jesus said “…What?”

Some time passed and one of them spoke “h”.

Jesus was enlightened.


Life Update - Montréal

Permalink - Posted on 2019-05-16 00:00, modified on 0001-01-01 00:00

Life Update - Montréal

I have moved to Canada. The US has been a good place to me, but it is time for me to move on towards my longer term goals in life. One of them has been to move to Canada so I could be closer to my fiancé; and I have now been able to check that off.

This trip has not been without its hardships so far:

I scheduled the flight too close to my apartment check-out, so I wasn’t able to do the final walk through with the apartment people. Probably gonna have to pay a bunch. However, as of May 16, they haven’t contacted me. I’m probably in the clear. I hope the guy I hired to clean it did a good job. I wish I could give the guy an honest review.

Things didn’t work out for me to stay with my fiancé, so I had to get an Airbnb. My Airbnb was cancelled twice after I unwittingly got matched, both times, with the same host, whose phone number was wrong in Airbnb. I got a hotel instead, even though that meant increasing my stress and anxiety levels. Oh well, it happens. I also didn’t take action when I needed to (I thought the relocation company was going to expense it and bill me), so I had a worse selection of Airbnb rooms. Oops.

I normally get headaches. Moving stress apparently (as in this is what I have witnessed) makes me have migraines that make me see colors synesthetically. It’s not fun having the lights flash but not, but flash, but not. All in sync with the pain waves too. Not fun.

My first hotel room in Sea-Tac had a broken TV. I had to get a new one just after finishing settling in. But the new one worked. That hotel was nice to cool down and prepare for my flight in.

My Airbnb fell through. Twice. I got a hotel next to work, Hôtel Champ de Mars. This hotel was great for the first week. I extended my stay because we didn’t have an apartment yet. This hotel’s housekeeping then decided it was a good idea to take a blanket out of my suitcase and fold it onto my bed. This accelerated our plans to get an apartment by a LOT. This hotel then decided to introduce some unwritten policy that I could not opt out of housekeeping. I woke up the next morning and suddenly the policy was no longer an issue and housekeeping ignored my room until the next Monday. Then they folded that blanket and my towels too. I complained to the owners and got an email that basically said “sorry for being offended”. I gave my keys back and walked out two days before my stay should have ended.

I have recently realized that I’m the foreigner here. People can have difficulty understanding when I say things over the phone or spell out letters of words. This is a unique thing to experience, and I think more people probably should experience it. This is a bit of a culture shock to me. Before I moved internationally, I had been living in the city I literally grew up in. Really makes you think.

I needed to get a new phone number and I’m going to have to lose my old one. I thought I could park it or something. I can’t. I’m gonna have to let it go. It’s a bit stressful because I don’t know what else depends on it; but as they say here, c’est la vie (that’s life). If anything really important is missed, I’ll figure it out.

I don’t mean to complain too much, but it’s a lot and it’s more than I feel I can handle. It’s happened, but god it has been a thing.

On the positive end though, I’m going to be living with my fiancé. This is going to be a huge relief. We’ve been long distance for over 5 years. It’s so good to see that turn into a physical relationship, hopefully for good.

I’m reaching a transition period where I’m going to be going for new long term goals. This is kind of exciting as much as it is scary. I’ve had this move to Canada goal for so long I’m kind of like “now what?”.

One of these long-term goals looks like it’s going to be getting married. I don’t know when this is going to be fulfilled, but it will happen when it is time. Until then, please stop asking me when it’s going to happen. Asking me feels like I need to give a concrete answer in many cases; there is no concrete answer of when other than “when it’s time”. If this is not good enough for you, I am sorry that I’m unable to conform to your wishes.

This new apartment has been great. Our rent pays for everything, including internet. It’s a relief to only really have two bills.

My new job is great too. I had to take a pay cut to go to Montréal, but there is more to life than money.

Something of note is that this is the first time I’ve moved without having to get out of jury duty. I’ve managed to avoid it every time so far by unfortunately timed moves.

Things are looking up for me. I’m really happy. My new job is great. The people I work with are great. I’m working towards French fluency (hopefully going to be writing blogposts in French by this time two years from now at most). Everything is looking up from here, and I’m so happy for it.

Can’t wait to see what’s next!


iPad Smart Keyboard: French Accents/Ligatures

Permalink - Posted on 2019-05-10 00:00, modified on 0001-01-01 00:00

iPad Smart Keyboard: French Accents/Ligatures

The following is the result of both blind googling and brute-forcing the keyboard space. If this is incomplete, please let me know so that it can be fixed.

Accent/Ligature    How to type    Example
é (acute)          Alt-e          entrée
è (grave)          Alt-`          fières
ï (umlaut)         Alt-u          naïve
ç (cedilla)        Alt-c          français
œ (oe ligature)    Alt-q          œuf
ô (circumflex)     Alt-i          hôtel
« (left quote)     Alt-\          «salut!»
» (right quote)    Alt-Shift-\    «salut!»

You can also type an acute accent on most arbitrary characters by typing the character and then pressing Alt-Shift-e. Circumflêxes can be done postfix with Alt-Shift-i too. Thís dóesńt work on every letter, unfortunately. However it does work for enough of them. Not enough for Esperanto’s ĉu however.


Practical Kasmakfa

Permalink - Posted on 2019-04-21 00:00, modified on 0001-01-01 00:00

Practical Kasmakfa

From Within

tl;dr

  • Do not blindly believe the views others hold just because others hold them without questioning why
  • Try lots of things (even if you might be against them at first) and see what works
  • Do more of what works
  • Help others when it makes sense to
  • Love the life you are given, even when you hate it

No Blind Faith

It is a sad thing in my opinion that people will blindly believe in things just because other people do. People will adopt their core views as they do and then never question or change them, even when those views come into direct conflict with information or experiences they are having. This is frustrating to watch externally and internally. We don’t need to do this, so I propose that we don’t have any blind faith in anything. To quote the Principia Discordia: “It is my firm belief that it is a mistake to hold firm beliefs”.

Question the reason behind beliefs. Don’t just blindly repeat things without rationale. Don’t take any string of text on a screen more seriously just because it’s on a screen. Even this string of text. Don’t take this seriously unless it helps you. Don’t get scammed by energy healing teachers and books. Seriously, there are so many scams out there it breaks my heart. Any price for entry is too high.

Try Many Things, Do What Works

Chaos magic differs from other forms of magical practice in that the core of it is that the belief of the practitioner is what is truly doing anything. In the chaos magic view, there is no ultimate truth. It could all be spiritual, it could be a psychological truth, the point is it doesn’t matter. A chaos magician can be realist, nihilist, psychologist, any of it. It’s all really whatever works best for the practitioner in their use of magic.

You know what, screw it, let’s make four piles of things you can absorb information from. Let’s call them the “inbox”, “working”, “I don’t get it”, and “meh”. The “inbox” is the default dumping ground of new ideas, methods, philosophies and tools. When you feel bored, pull something off the top of the inbox and then take a look through it. Make a glossary of common terms and acronyms.

Now, when you get to a method, skill or some kind of obviously repeatable thing, try it. Take it at face value for a moment and just try it in the context of its system. If it works, take that information, paste, or whatever and put it in your “working” folder. Put the rest in “I don’t get it” or “meh” depending on your reactions to trying the things.

Do More of What Works

When you find something that works, great! This is a signal that you should probably do more of it, depending on the nature of the thing working or the nature of the thing in general. If it’s some kind of breathing technique, try and make it your default (I personally have very deep breaths as my default, people that I work with comment on that frequently) and see how it helps you. If it’s a method of thinking, try adopting it in parallel to your default. Even (hell, especially) if that something challenges your core assumptions about everything.

The sin which is unpardonable is knowingly and willfully to reject truth, to fear knowledge lest that knowledge pander not to thy prejudices.

- Aleister Crowley

Help Others When it Makes Sense

We’re all pretty much as lost as anyone else in this stuff, to be honest. Recognize this. Embrace it, even. Other people are gonna be confused about things and may require additional guidance or explanation. Take this time to learn how to explain and summarize things better, both for the people you are helping and for yourself.

We’re all in this together. Try and brighten the path when possible. You individually may not be able to do much, but the next step will be just that little bit clearer for the next person who walks down it.

Flow in compassion
Release what is divine
Like cells awakening
We spark the others who walk beside us.
We brighten the path.

Flow in compassion
In doing this we are one being
Calling the rays of light
To descend on all.
We brighten the path.

Flow in compassion
Bring the healing of your deepest self
Giving what is endless
To those who believe their end is in sight.
We brighten the path.
We brighten the path.

- Flow in Compassion - James

Helping others is an imperfect science. You will “fail”. You may end up accidentally upsetting people. It happens. Let it pass like all the rest.

Love Your Life

You may look at this heading and be like “dude, wtf? My life is a mess, I have $PROBLEMS though”. The truth is that the problems are just transient. Even the ones that you think are “permanent”.

Forgiving the past for not happening as you’d expect it to is a very good idea if your ideology allows for it. If not, try it! That’s what this technique is all about.

Pronunciation

kas mak fa
/kas mak fa/

Explanation of the name.


I originally posted this writeup here; however, since it’s such a google-friendly term I am going to repost it here.


Site to Site WireGuard: Part 4 - HTTPS

Permalink - Posted on 2019-04-16 00:00, modified on 0001-01-01 00:00

Site to Site WireGuard: Part 4 - HTTPS

This is the fourth post in my Site to Site WireGuard VPN series. You can read the other articles here:

In this article, we are going to install Caddy and set up the following:

  • A plaintext markdown site to demonstrate the process
  • A URL shortener at https://g.o/ (with DNS and TLS certificates too)

HTTPS and Caddy

Caddy is a general-purpose HTTP server. One of its main features is automatic Let’s Encrypt support. We are using it here to serve HTTPS because it has a very, very simple configuration file format.

Caddy doesn’t have a stable package in Ubuntu yet, but it is fairly simple to install it by hand.

Installing Caddy

One of the first things you should do when installing Caddy is picking the list of extra plugins you want in addition to the core ones. I generally suggest the following plugins: http.cors, http.git and http.supervisor, which are the ones the install command below requests.

First we are going to need to download Caddy (please do this as root):

curl https://getcaddy.com > install_caddy.sh
bash install_caddy.sh -s personal http.cors,http.git,http.supervisor
chown root:root /usr/local/bin/caddy
chmod 755 /usr/local/bin/caddy

These permissions are set as such:

Facet         Read  Write  Directory Listing
User (root)   Yes   Yes    Yes
Group (root)  Yes   No     Yes
Others        Yes   No     Yes

In order for Caddy to bind to the standard HTTP and HTTPS ports as non-root (this is a workaround for the fact that Go can’t currently drop permissions with setuid() cleanly), run the following:

setcap 'cap_net_bind_service=+eip' /usr/local/bin/caddy

Caddy expects configuration file/s to exist at /etc/caddy, so let’s create the folders for them:

mkdir -p /etc/caddy
touch /etc/caddy/Caddyfile
chown -R root:www-data /etc/caddy

Let’s Encrypt Certificate Permissions

Caddy’s systemd unit expects to be able to create new certificates at /etc/ssl/caddy:

mkdir -p /etc/ssl/caddy
chown -R www-data:root /etc/ssl/caddy
chmod 770 /etc/ssl/caddy

These permissions are set as such:

Facet            Read  Write  Directory Listing
User (www-data)  Yes   Yes    Yes
Group (root)     Yes   Yes    Yes
Others           No    No     No

This will allow only Caddy and root to manage certificates in that folder.

Custom CA Certificate Permissions

In the last post, custom certificates were created at /srv/within/certs. Caddy is going to need to have the correct permissions in order to be able to read them. Save the following script as /srv/within/certs/fixperms.sh:

#!/bin/sh
chmod -R 750 .
chown -R root:www-data .
chmod 600 minica-key.pem

Then mark it executable:

chmod +x fixperms.sh

These permissions are set as such:

Facet             Read  Write  Execute/Directory Listing
User (root)       Yes   Yes    Yes
Group (www-data)  Yes   No     Yes
Others            No    No     No

This will allow Caddy to be able to read the certificates later in the post. Run this after certificates are created.

cd /srv/within/certs
./fixperms.sh

HTTP Root Permissions

I typically store all of my websites under /srv/http/domain.name.here. To create a folder like this:

mkdir -p /srv/http
chown www-data:www-data /srv/http
chmod 755 /srv/http

These permissions are set as such:

Facet             Read  Write  Directory Listing
User (www-data)   Yes   Yes    Yes
Group (www-data)  Yes   No     Yes
Others            Yes   No     Yes

Systemd

To install the upstream systemd unit, run the following:

curl -L https://github.com/mholt/caddy/raw/master/dist/init/linux-systemd/caddy.service \
      | sed "s/;CapabilityBoundingSet/CapabilityBoundingSet/" \
      | sed "s/;AmbientCapabilities/AmbientCapabilities/" \
      | sed "s/;NoNewPrivileges/NoNewPrivileges/" \
      | tee /etc/systemd/system/caddy.service
chown root:root /etc/systemd/system/caddy.service
chmod 744 /etc/systemd/system/caddy.service
systemctl daemon-reload
systemctl enable caddy.service

These permissions are set as such:

Facet         Read  Write  Execute
User (root)   Yes   Yes    Yes
Group (root)  Yes   No     No
Others        Yes   No     No

This will also configure Caddy to start on boot.


Configure aloha.pele

In the last post, we created the domain and TLS certificates for aloha.pele. Let’s create a website for it.

Open /etc/caddy/Caddyfile and add the following:

# /etc/caddy/Caddyfile

aloha.pele:80 {
  tls off
  redir / https://aloha.pele:443
}

aloha.pele:443 {
  tls /srv/within/certs/aloha.pele/cert.pem /srv/within/certs/aloha.pele/key.pem
  
  internal /templates
  
  markdown / {
    template templates/page.html
  }
  
  ext .md
  browse /
  
  root /srv/http/aloha.pele
}

And create /srv/http/aloha.pele/templates:

mkdir -p /srv/http/aloha.pele/templates
chown -R www-data:www-data /srv/http/aloha.pele/templates

And open /srv/http/aloha.pele/templates/page.html:

<!-- /srv/http/aloha.pele/templates/page.html -->

<html>
  <head>
    <title>{{ .Doc.title }}</title>
    <style>
      main {
        max-width: 38rem;
        padding: 2rem;
        margin: auto;
      }
    </style>
  </head>
  <body>
    <main>
      <nav>
        <a href="/">Aloha</a>
      </nav>
      
      {{ .Doc.body }}
    </main>
  </body>
</html>

This will give the site a nice, simple style using Caddy’s built-in markdown templating support. Now create /srv/http/aloha.pele/index.md:

<!-- /srv/http/aloha.pele/index.md -->

# Aloha!

This is an example page, but it doesn't have anything yet. If you see me, HTTPS is probably working.

Now let’s enable and test it:

systemctl restart caddy
systemctl status caddy

If Caddy shows as running, then testing it via LibTerm should work:

curl -v https://aloha.pele
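If curl complains that the certificate was signed by an unknown authority, you can point it at the CA certificate from the last post directly:

curl -v --cacert /srv/within/certs/minica.pem https://aloha.pele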

URL Shortener

I have created a simple URL shortener backend on my GitHub. I personally have it accessible at https://g.o for my internal network. It is very simple to configure:

Environment Variable  Value
DOMAIN                g.o
THEME                 solarized.css (or gruvbox.css)

surl requires a SQLite database to function. To store it, create a docker volume:

docker volume create surl

And to create the surl container and register it for automatic restarts:

docker run --name surl -dit -p 10.55.0.1:5000:5000 \
  --restart=always \
  -e DOMAIN=g.o \
  -e THEME=solarized.css \
  -v surl:/data xena/surl:v0.4.0

Now create a DNS record for g.o.:

; pele.zone

;; URL shortener
g.o. IN CNAME oho.pele.
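dnsd reloads its zonefiles about once a minute, so after a short wait you should be able to verify the new record; the output should look something like this:

dig +short g.o @10.55.0.1
oho.pele.
10.55.0.1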

And a TLS certificate:

cd /srv/within/certs
minica -domains g.o
./fixperms.sh

And add Caddy configuration for it:

# /etc/caddy/Caddyfile

g.o:80 {
  tls off
  
  redir / https://g.o
}

g.o:443 {
  tls /srv/within/certs/g.o/cert.pem /srv/within/certs/g.o/key.pem
  
  proxy / http://10.55.0.1:5000
}

Now restart Caddy to load the configuration and make sure it works:

systemctl restart caddy
systemctl status caddy

And open https://g.o on your iOS device:

An image of the URL shortener in action

You can use the other directives in the Caddy documentation to do more elaborate things. When Then Zen is hosted completely with Caddy using the markdown directive, and even that is ultimately a simple configuration.


This seems like enough for this time. Next time we are going to cover adding the rest of your devices to this network: iOS, Android, macOS and Linux.

Please give me feedback on my approach to this. I also have a Patreon and a Ko-Fi in case you want to support this series. I hope this is useful to you all in some way. Stay tuned for the future parts of this series as I build up the network infrastructure from scratch. If you would like to give feedback on the posts as they are written, please watch this page for new pull requests.

Be well. The sky is the limit, Creator!


Site to Site WireGuard: Part 3 - Custom TLS Certificate Authority

Permalink - Posted on 2019-04-11 00:00, modified on 0001-01-01 00:00

Site to Site WireGuard: Part 3 - Custom TLS Certificate Authority

This is the third in my Site to Site WireGuard VPN series. You can read the other articles here:

In this article, we are going to create a custom Transport Layer Security (TLS) Certificate Authority and trust it on iOS and macOS. In the next part we will use it to serve a URL shortener at https://g.o/.

What’s TLS?

TLS, or Transport Layer Security, is the backbone of how nodes on the internet communicate data in a way that prevents people from seeing what is being said. This is where the s in https comes from. When a client makes a TLS connection to a server, it asks the server to create a unique key for that session and to prove who it is with a certificate. The client then checks this certificate against its list of known certificate authorities (or CA’s); if it can’t find a match, the connection fails and is killed.

What’s a Certificate Authority?

A TLS Certificate Authority is a certificate that is allowed to issue other certificates. These certificates are intended to strongly associate domain names (such as christine.website) to real people or organizations. In theory, the people or tools running the certificate authority do rigorous checking and validation of identities before a certificate is issued. Creating our own certificate authority allows us to create certificates that only select devices will trust as valid. By creating our own certificate authority and manually configuring devices to trust it, we sidestep the need to pay for certificates (mainly for the verification process to ensure you are who you say you are) or expose services to the public internet.

Why Should I Create One?

Generally, it is useful to create a custom TLS certificate authority when there are custom DNS domains being used. This allows you to create https:// links for your internal services (which can then act as Progressive Web Apps). This will also fully prevent the “Not Secure” blurb from showing up in the URL bar.

Sometimes your needs may involve needing to see what an application is doing over TLS traffic. Having a custom TLS certificate authority already set up makes this a much faster thing to do.

Why Shouldn’t I Create One?

…However if you do this and the key leaks, people can create certificates that your devices will assume are valid. minica doesn’t support Certificate Revocation Lists (or CRL’s), so any certificate that is issued with that key is going to be seen as valid and there is nothing you can do about it.

It’s also entirely valid to not want to do this in order to keep local configurations less complicated. It’s another thing to do to machines. It opens up (in my opinion) a small, manageable risk though.

Considering WireGuard is already encrypted, it’s probably overkill to set up HTTPS. Not many people are going to be trying to interfere with your local service packets (and if they are you have MUCH BIGGER PROBLEMS).

Using minica to Make a Certificate Authority

minica is a small tool designed to simplify the somewhat esoteric process of making and maintaining a private certificate authority. It’s a Go program using only the standard library, so installation (and even cross-compilation) is fairly simple:

go get github.com/jsha/minica

Make a Certificate Home

Having a predictable place to put all of your certificates is a good idea. You should try to have only one place for this if possible. I use /srv/within/certs on my Ubuntu server Kahless for this.

mkdir -p /srv/within/certs
chmod 750 /srv/within/certs
chown root:www-data /srv/within/certs

Creating And Using Your First Certificate

First, navigate back to your certificate home and run the following command:

minica -domains aloha.pele

This should create minica.pem and minica-key.pem. Copy minica.pem to somewhere you can access it easily, it will be important later. This also creates a folder named aloha.pele that contains cert.pem and key.pem.
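At this point, the certificate home should look roughly like this:

/srv/within/certs
├── minica.pem
├── minica-key.pem
└── aloha.pele
    ├── cert.pem
    └── key.pem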

Next, create a DNS record for aloha.pele. in your pele.zone file (and be sure to update it on the remote HTTP server).

aloha.pele. IN CNAME oho.pele.

Then wait a minute or two and run the following command to ensure it’s working:

$ dig +short aloha.pele
oho.pele.
10.55.0.1

Now, download a simple tls test server and start it:

go get -u -v github.com/Xe/x/cmd/tlstestd
cd aloha.pele
tlstestd

Open https://aloha.pele:2848 in Safari.

This should fail due to an invalid certificate. This is the kind of error that people without the TLS certificate authority installed will see.
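If you’d rather verify the chain from a shell first, openssl can check the server against the new CA (assuming minica.pem is still in /srv/within/certs); look for “Verify return code: 0 (ok)” in the output:

$ openssl s_client -connect aloha.pele:2848 -CAfile /srv/within/certs/minica.pem </dev/null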

To fix this error, do the following:

  • Copy the TLS certificate from earlier (it’s the one named minica.pem) to your iOS device somehow. If all else fails, email it to yourself and open it with the Mail app (yes, it has to be the stock Mail app).
  • If prompted, choose to install the profile to your phone instead of your watch.
  • Go into the Settings app and hit “Profile Downloaded”. The profile name should be “minica root $some_hex_numbers” and it should show as Unverified in red.
  • Hit Install in the upper right hand corner.
  • Enter your password.
  • Go back to the General settings.
  • Hit About.
  • Hit Certificate Trust Settings.
  • Hit the on/off slider next to the certificate you just added.
  • Confirm on the dialog that you really do want to do this.

Then you should be ready to open https://aloha.pele:2848 in Safari.

If you get the secure connection working like normal (without prompting or nag screens), everything is working perfectly.


That’s about it for this time around. In the next part, we will set up HTTPS serving with Caddy.

Please give me feedback on my approach to this. I also have a Patreon and a Ko-Fi in case you want to support this series. I hope this is useful to you all in some way. Stay tuned for the future parts of this series as I build up the network infrastructure from scratch. If you would like to give feedback on the posts as they are written, please watch this page for new pull requests.

Be well.


When Then Zen: Site Announcement

Permalink - Posted on 2019-04-09 00:00, modified on 0001-01-01 00:00

When Then Zen: Site Announcement

When Then Zen is a project to offer a better way to teach meditation. Meditation has gotten a really bad reputation in Western audiences as overcomplicated, esoteric and baroque; however, those reputations couldn’t be farther from the truth. It can be as simple as watching your breathing happen, or as involved as you want to make it.

If this interests you, please check out the introduction and feel free to look at the meditation or skill guides.

Thanks for reading, I hope this can help. For convenience, I have put a link to When Then Zen next to the GraphViz link on this site.

Be well, Creator; create many things.


Site to Site WireGuard: Part 2 - DNS

Permalink - Posted on 2019-04-07 00:00, modified on 0001-01-01 00:00

Site to Site WireGuard: Part 2 - DNS

This is the second in my Site to Site WireGuard VPN series. You can read the other articles here:

What is DNS and How Does it Work?

DNS, or the Domain Name Service, is one of the core protocols of the internet. Its main job is to turn names like google.com into IP addresses for the lower layers of the networking stack to communicate with. Semantically, clients ask questions of the DNS server (such as “what is the IP address for google.com?”) and get answers back (“the IP address for google.com is 172.217.7.206”). This is a very simple protocol that predates the web, and it is tied into the core of how nearly every program accesses the internet. DNS allows users to not have to memorize the IP addresses of services in order to connect to and use them. If anything on the internet is truly considered “infrastructure”, it is DNS.

A common tool in Linux and macOS to query DNS is dig. You can install it in Ubuntu with the following command:

$ sudo apt install -y dnsutils

A side note for Alpine Linux users: for some reason the dig tool is packaged in bind-tools there. You can install it like this:

$ sudo apk add bind-tools

As an example of it in action, let’s look up google.com with the dig tool (edited for clarity):

$ dig google.com
...
;; Got answer:
...
;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             299     IN      A       172.217.7.206

...
;; SERVER: 8.8.8.8#53(8.8.8.8)
...

A DNS answer or record has several parts to it:

  • The name (with a terminating .)
  • The time-to-live, which tells DNS caches how long they can wait before looking up the domain again
  • The kind of address being served (DNS supports multiple network kinds, though only INternet records are used nowadays)
  • The kind of record this is
  • Any additional data for that record

Interpreting the question and answer from above: this means that the client asked for the IPv4 address (DNS calls this an A record) for google.com. and got back 172.217.7.206 as an answer from the DNS server at 8.8.8.8.

DNS supports many other kinds of records, such as PTR or “reverse” records that map an IP address back to a name (again, edited for clarity):

$ dig -x 172.217.7.206
...
;; Got answer:
...
;; QUESTION SECTION:
;206.7.217.172.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:
206.7.217.172.in-addr.arpa. 20787 IN    PTR     iad30s10-in-f14.1e100.net.
206.7.217.172.in-addr.arpa. 20787 IN    PTR     iad30s10-in-f206.1e100.net.

...
;; SERVER: 8.8.8.8#53(8.8.8.8)
...

As seen above, DNS supports having multiple answers to a single name. This is useful when doing load balancing between services (so-called “round robin” load balancing over DNS works like this) as well as redundancy in general.

Why Should I Create a Custom DNS Server?

There are two main benefits to creating a custom DNS server like this: ad blocking in DNS and custom DNS routes. The first is having seamless ad-blocking DNS, kind of like a Pi-hole built into your VPN for free. The benefits of the ad-blocking DNS cannot be overstated: it makes ads for a large number of websites simply fail to load, without triggering the adblock-detection scripts news sites like to use. This will be covered in more detail below. Custom DNS routes may sound like overkill for keeping things private, but they mean people can’t easily get information on names that literally only exist in your domain.

However, there are reasons why you would NOT want to create a custom DNS server. By creating a custom DNS server, you effectively put yourself in charge of an internet infrastructure component that is usually handled by people who are dedicated to keeping it working 24/7. You may not be able to provide the same uptime guarantees as your current DNS provider. You are not CloudFlare, Comcast or Google. It’s perfectly okay to not want to go through with this.

I think the benefits are worth the risks though.

How Do I Create a Custom DNS Server?

There are many DNS servers out there, each with their benefits and shortcomings. In order to make this tutorial simpler, I’m going to be using a self-created DNS server named dnsd. This server is extremely simple and reloads its zone files every minute over HTTP, to make updating records easier. There are going to be a few steps to setting this up:

  • Creating a DNS zonefile
  • Hosting the zonefile over HTTP/HTTPS
  • Adding ad-blocking DNS rules
  • Installing dnsd with Docker
  • Using the DNS server with the iOS WireGuard app

Creating a DNS Zonefile

dnsd requires an RFC 1035 compliant DNS zone file. In short, it’s a file that looks something like this:

; pele.zone
; anything after a semicolon is a comment

;; The default time for this DNS record to live in caches
$TTL 60

;; If a domain `foo` is not ended with `.`, assume it's `foo.pele.`
$ORIGIN pele.

; servers

;; Map the name oho.pele. to 10.55.0.1
oho.pele. IN A 10.55.0.1

;; Map the IP address 10.55.0.1 to the name oho.pele.
1.0.55.10.in-addr.arpa. IN PTR oho.pele.

; clients

;; Map the name sitelen-sona.pele. to 10.55.1.1
sitelen-sona.pele. IN A 10.55.1.1

;; Map the IP address 10.55.1.1 to sitelen-sona.pele.
1.1.55.10.in-addr.arpa. IN PTR sitelen-sona.pele.

;;; How to make Custom DNS Locations:

;; Map the name prometheus.pele. to the name oho.pele., which indirectly maps it to 10.55.0.1
prometheus.pele. IN CNAME oho.pele.

;; Map the name grafana.pele. to the name oho.pele., which indirectly maps it to 10.55.0.1
grafana.pele. IN CNAME oho.pele.

Save this file somewhere and get it ready to host somewhere.

If you would like to have some of this generated for you, fill out http://zonefile.org with the following information:

  • Base data
    • Domain: pele
    • Adminmail: your@email.address
    • $TTL: 60
    • IP Address or PTR Name: 10.55.0.1
  • DNS Server
    • Primary host name: ns.pele
    • Primary IP-Addr: 10.55.0.1
    • Primary comment: The volcano
    • Clear all other boxes in this section
  • Mail Server
    • Clear all boxes in this section
  • Click Create
  • Save this as pele.zone

Note that this will include a Start of Authority or SOA record, which is not strictly required, but may be nice to include too. If you want to include this in your manually made zonefile, it should look something like this:

@       IN      SOA     oho.pele.       some@email.address. (
                        2019040602      ; serial number YYYYMMDDNN
                        28800           ; Refresh
                        7200            ; Retry
                        864000          ; Expire
                        60              ; Min TTL
                        )

; Also not required but some weird clients may want this.
@       IN      NS      oho.pele.
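With the SOA and NS records included, you can also lint the finished zonefile with named-checkzone from the bind9utils package, assuming you have it installed:

$ named-checkzone pele. pele.zone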

Hosting the Zonefile Over HTTP/HTTPS

This is the “draw the rest of the owl” part of this article; worst case, something like GitHub Gists works. Once you have the URLs of your zonefiles and a reliable way to update them, you can move on to the next step: installing dnsd.

Adding Ad-Blocking DNS Rules

A friend of mine adapted her dnsmasq scripts to generate RFC 1035 DNS zonefiles. To generate adblock.zone, do the following:

$ cd ~/tmp
$ git clone https://github.com/faithanalog/x faithanalog-x
$ cd faithanalog-x/dns-adblock
$ sh ./download-lists-and-generate-zonefile.sh

This should produce adblock.zone in the current working directory. Put this file in the same place you put your custom zone.

If you are unable to run this script for whatever reason, I update my adblock.zone file weekly (please download this file instead of configuring your copy of dnsd to use this URL).

Installing dnsd with Docker

The easy way:

$ export DNSD_VERSION=v1.0.3
$ docker run --name dnsd -p 53:53/udp -dit --restart always xena/dnsd:$DNSD_VERSION \
  dnsd -zone-url https://domain.hostname.tld/path/to/your.zone \
       -zone-url https://domain.hostname.tld/path/to/adblock.zone \
       -forward-server 1.1.1.1:53

This will create a new container named dnsd running the Docker image xena/dnsd:$DNSD_VERSION (the image is created by this script and this dockerfile), exposing the DNS server on the host’s UDP port 53. To test it:

$ dig @127.0.0.1 oho.pele
...
;; QUESTION SECTION:
;oho.pele.                      IN      A

;; ANSWER SECTION:
oho.pele.               60      IN      A       10.55.0.1

...
;; SERVER: 127.0.0.1#53(127.0.0.1)
...

$ dig @127.0.0.1 -x 10.55.0.1
...
;; QUESTION SECTION:
;1.0.55.10.in-addr.arpa.                IN      PTR

;; ANSWER SECTION:
1.0.55.10.in-addr.arpa. 60      IN      PTR     oho.pele.

...
;; SERVER: 127.0.0.1#53(127.0.0.1)
...

Using With the iOS WireGuard App

In order to configure iOS WireGuard clients to use this DNS server, open the WireGuard app and tap the name of the configuration we created in the last post. Hit “Edit” in the upper right hand corner and select the “DNS Servers” box. Put 10.55.0.1 in it and hit “Save”. Be sure to confirm the VPN is active, then open LibTerm and enter in the following:

$ dig oho.pele

And make sure it works.

Once this is done, you should be good to go! Updates to the zone files will be picked up by dnsd within a minute or two of the files being changed on the remote servers. Please be sure the server you are using tags the files appropriately with the ETag header, as dnsd uses that to determine if the zonefile has changed or not.
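If you aren’t sure whether your file host sends one, a quick check with curl (substituting your own zonefile URL) will show it:

$ curl -sI https://domain.hostname.tld/path/to/your.zone | grep -i etag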


Please give me feedback on my approach to this. I also have a Patreon and a Ko-Fi in case you want to support this series. I hope this is useful to you all in some way. Stay tuned for the future parts of this series as I build up the network infrastructure from scratch. If you would like to give feedback on the posts as they are written, please watch this page for new pull requests.

Be well.


Site to Site WireGuard: Part 1 - Names and Numbers

Permalink - Posted on 2019-04-02 00:00, modified on 0001-01-01 00:00

Site to Site WireGuard: Part 1 - Names and Numbers

In this blogpost series I’m going to go over how I created a site to site Virtual Private Network (abbreviated as VPN) for all of my personal devices. The best way to think about what this is doing is creating a logical (or imaginary) network on top of the network infrastructure that really exists. This allows me to expose private services so that only people I trust can even know how to connect to them. For extra convenience and battery saving power, I’m going to use WireGuard as the VPN protocol.

This series is going to be broken up into multiple posts, roughly as follows:

By the end of this series you should be able to:

  • Expose arbitrary TCP/UDP services to a few machines that span network segments without having to do as much work securing the services
  • Create absolutely arbitrary domain name to IP address mappings should you need it
  • Have seamless ADBlock DNS for your phone, tablet and laptop
  • Create custom TLS certificates for any domain should you need it

Network Naming and Numbering

One of the most annoying parts of this exercise is going to be naming and numbering things, so let’s get that out of the way as soon as possible.

Naming your TLD

It’s a good idea to create a custom top level domain that won’t resolve on machines not inside your private network. This helps to prevent accidental information leakage by making it impossible for unauthorized third parties to resolve the name into a usable IP. If you don’t want to do this for any particular reason, it is possible to set things up as subdomains of an existing domain. This may also be preferable depending on your philosophical beliefs about what is a “valid” or “real” domain name, which is beyond the scope of this article.

Naming is famously one of the hard problems in computer science. The annoying part about naming things is what I call name collisions: when someone else starts using a name you were already using. This most famously happened with .dev, making many tutorials referencing that old trick effectively useless. As such, it is better to choose names that are very, very unlikely to ever be added as a valid global top level domain. Try picking names by these criteria:

  • The names of deities (see the Bionicle Effect for an example)
  • Curse words
  • The last name of a famous person you like (that is alive for extra credit)

As such, this example will be using pele as the custom top level domain and name for this network.

Numbering

Numbering your site to site private networks is another common pain point, mainly because conflicts in these spaces can be hairy to resolve. It can help to make a list of the IP space of all of the common networks you visit so you can make sure your network range doesn’t conflict with them:

# Network Range Details

Home: 10.13.37.0/24
Work: 10.0.0.0/13

Generally people will pick routes out of the lower /12 of 10.0.0.0/8. This example will use the network range 10.55.0.0/16. Because WireGuard requires us to create configuration for each device connecting to the network, let’s draw out a map of the entire network as we intend to set it up:

# pele Network Map

10.55.0.0/16:
  - 10.55.0.0/24: servers
    - 10.55.0.1/32: DNS, HTTPS
  - 10.55.1.0/24: clients
    - 10.55.1.1/32: iPad Pro (la ta'orskami)
    - 10.55.1.2/32: iPhone XS (la selbeifonxa)
    - 10.55.1.3/32: MacBook (om)

Depending on free network space, it may be preferable to split the first /24 block up into two logical /25 blocks (10.55.0.0/25 and 10.55.0.128/25). This is all a matter of taste and has no functional impact on the network. I’d suggest using consistent conventions in your subnetting whenever possible.
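If you want to double-check a split like that before committing to it, a tool such as ipcalc (if you have it installed) will print the network, broadcast and host range for each block:

$ ipcalc 10.55.0.0/25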

WireGuard Port Allocation

WireGuard requires a UDP port to be exposed to the outside world in order to work. A commonly used port for this is 51820. Depending on your network configuration, you may have to configure port forwarding; I can’t walk you through that here, as the steps vary by router.

Testing UDP Port Forwarding

In case you ever need to test the UDP port forwarding, run the following on the machine you want to test:

$ nc -u -l -p 51820

And on another machine:

$ echo "hello, world" | nc -u <external IP> 51820

Run this command a few times in order to make sure the packets go through, as UDP is not inherently reliable. If you see at least one instance of “hello, world” on the machine you want to test, your port has been forwarded correctly. If not, contact whoever set up your network for help.

Alpine Host Setup

Now that you have all of the hard parts chosen, provision a new server running Alpine Linux and upgrade it to edge, then enable community and testing. Your /etc/apk/repositories file should look something like this:

# /etc/apk/repositories
http://dl-3.alpinelinux.org/alpine/edge/main
http://dl-3.alpinelinux.org/alpine/edge/community
http://dl-3.alpinelinux.org/alpine/edge/testing

Upgrade all of the packages on the system and then reboot:

# apk -U upgrade
# reboot

Install WireGuard

To install WireGuard and all of the needed tools, run the following:

# apk -U add wireguard-vanilla wireguard-tools

For those of you using other distributions, here is the version information from my WireGuard master:

luna [/etc/wireguard]# apk info wireguard-tools
wireguard-tools-0.0.20190227-r0 description:
Next generation secure network tunnel: userspace tools

wireguard-tools-0.0.20190227-r0 webpage:
https://www.wireguard.com

wireguard-tools-0.0.20190227-r0 installed size:
20480

luna [/etc/wireguard]# apk info wireguard-vanilla
wireguard-vanilla-4.19.30-r0 description:
Next generation secure network tunnel: kernel modules for vanilla

wireguard-vanilla-4.19.30-r0 webpage:
https://www.wireguard.com

wireguard-vanilla-4.19.30-r0 installed size:
352256

Ubuntu

$ sudo add-apt-repository ppa:wireguard/wireguard
$ sudo apt-get update
$ sudo apt-get install wireguard

Generate Keys

WireGuard uses strong cryptography for its protocol. As such you need to generate a private and public keypair. To generate them:

$ sudo -i
# cd /etc/wireguard
# wg genkey > pele-privatekey
# cat pele-privatekey | wg pubkey > pele-publickey
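As an extra precaution, you can set a restrictive umask before running the key generation commands so the private key file is never created world-readable, even momentarily (the chmod below covers this as well):

# umask 077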

Create Config

Assuming your config file will be located at /etc/wireguard/pele.conf:

# /etc/wireguard/pele.conf

[Interface]
Address = 10.55.0.1/16
ListenPort = 51820
PrivateKey = <contents of file /etc/wireguard/pele-privatekey>
PostUp = iptables -A FORWARD -i pele -o pele -j ACCEPT
PostDown = iptables -D FORWARD -i pele -o pele -j ACCEPT

Save this and make sure only root can read any of these files:

# chown root:root /etc/wireguard/pele*
# chmod 600 /etc/wireguard/pele*

Create client config for iOS device

On your iOS device, install the WireGuard app. Once it is installed, open it and do the following:

  • Hit the plus in the top bar
  • Create from Scratch
  • name: pele
  • Hit “Generate keypair”
  • Addresses: 10.55.1.1/16
  • Hit “Add peer”
  • Paste the public key from /etc/wireguard/pele-publickey into “Public key”
  • Put the publicly visible IP of the Alpine host plus :51820 in “Endpoint”, e.g.: 192.0.2.243:51820
    • The actual IP, not a DNS name
  • Put 10.55.0.0/16 in Allowed IPs
  • Save

To add this client to the WireGuard server, add the following lines to the config file:

# /etc/wireguard/pele.conf

# <snip from earlier>

# la ta'orskami
[Peer]
PublicKey = <public key from iOS device>
AllowedIPs = 10.55.1.1/32

Make sure the AllowedIPs range doesn’t allow for routing loops. It should be a /32 for any “client” devices and larger ranges for any “server” devices.
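For illustration, a hypothetical peer stanza for a second “server” device that routes a whole /24 might look like this (the key and range here are made up):

# another site, hypothetical
[Peer]
PublicKey = <public key from the other server>
AllowedIPs = 10.55.2.0/24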

Manual Testing

To test this, enable the WireGuard interface on the server side:

# wg-quick up pele
# ping 10.55.0.1

If the pinging works, then your interface has successfully been brought online! In order to test this from your iOS device, enable the VPN connection in the WireGuard app, look for the latest handshake timer and open LibTerm. Run the following command:

$ ping 10.55.0.1

If this fails or you don’t see the connection handshake timer in the WireGuard app after enabling the connection, please be sure the UDP port is being properly forwarded. The version of netcat bundled into LibTerm is capable of running this test should you need to do that.

Add to /etc/network/interfaces

For convenience, we can add this to the system networking configuration so it starts automatically on boot. Add the following to your /etc/network/interfaces file:

auto pele
iface pele inet static
  address 10.55.0.1
  netmask 255.255.0.0
  pre-up ip link add dev pele type wireguard
  pre-up wg setconf pele /etc/wireguard/pele.conf
  post-up ip route add 10.55.0.0/16 dev pele
  post-down ip link delete dev pele

And then reboot to make sure the configuration changes take hold. One caveat: wg setconf only understands the keys from wg(8)’s configuration format (ListenPort, PrivateKey and FwMark in [Interface], plus the [Peer] sections), so wg-quick-specific keys like Address, PostUp and PostDown from the config above need to be stripped from whatever file you point it at. You will also need to add additional post-up ip route commands based on the AllowedIPs blocks for peers in your configuration, though this will be covered in detail when it is relevant.
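A minimal sketch of a setconf-compatible file, with the wg-quick-only keys stripped out (the filename here is made up; point the wg setconf line above at whichever file you use):

# /etc/wireguard/pele-setconf.conf
[Interface]
ListenPort = 51820
PrivateKey = <contents of file /etc/wireguard/pele-privatekey>

# la ta'orskami
[Peer]
PublicKey = <public key from iOS device>
AllowedIPs = 10.55.1.1/32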

Systemd Users

To automatically start a WireGuard configuration located at /etc/wireguard/pele.conf on boot using systemd, run the following:

# systemctl enable wg-quick@pele
# systemctl start wg-quick@pele

The Reboot Test

Reboot your box. After it comes back up, try and use the WireGuard tunnel. If it works, then you’re all good.


Please give me feedback on my approach to this. I also have a Patreon and a Ko-Fi in case you want to support this series. I hope this is useful to you all in some way. Stay tuned for the future parts of this series as I build up the network infrastructure from scratch. If you would like to give feedback on the posts as they are written, please watch this page for new pull requests.

Be well.


iOS Development Pro Tip for Private CA Usage

Permalink - Posted on 2019-03-22 00:00, modified on 0001-01-01 00:00

iOS Development Pro Tip for Private CA Usage

In iOS, in order to get HTTPS working with certs from a private CA, there’s another step you need to do if your users are on iOS 10.3 or newer (statistically: yes, this matters to you). In order to do this:

  • Ensure they have installed the profile on their device
  • Open Settings
  • Select General
  • Select Profiles
  • Ensure your root CA name is visible in the profile list like this:

  • Go up a level to General
  • Select About
  • Select Certificate Trust Settings
  • Each root that has been installed via a profile will be listed below the heading Enable Full Trust For Root Certificates
  • Users can toggle on/off trust for each root:

Please understand that by doing this, users will potentially be vulnerable to an HTTPS man-in-the-middle attack à la Superfish. Please ensure that you have appropriate measures in place to keep the signing key for the CA safe.

I hope this helps.


My Career So Far in Dates/Titles/Salaries

Permalink - Posted on 2019-03-14 00:00, modified on 0001-01-01 00:00

My Career So Far in Dates/Titles/Salaries

Let this be inspiration to whoever is afraid of trying, failing and being fired. Every single one of these jobs has taught me lessons I’ve used daily in my career.

First Jobs

I don’t have exact dates on these, but my first jobs were:

  • Grocery Bagger - early-mid high school
  • Pizza Delivery Driver - late high school early college
  • Paper Grader - Fall quarter of 2012

I ended up walking out on the delivery job, but that’s a story for another day.

Most of what I learned from these jobs was the value of labor and when to just shut up and give people exactly what they are asking for, even if it’s something they might not actually want.

Salaried Jobs

The following table is a history of my software career by title, date and salary (company names are omitted).

Title | Start Date | End Date | Days Worked | Days Between Jobs | Salary | How I Left
Junior Systems Administrator | November 11, 2013 | January 06, 2014 | 56 days | n/a | $50,000/year | Terminated
Software Engineering Intern | July 14, 2014 | August 27, 2014 | 44 days | 189 days | $35,000/year | Terminated
Consultant | September 17, 2014 | October 15, 2014 | 28 days | 21 days | $90/hour | Contract Lapsed
Consultant | October 27, 2014 | February 9, 2015 | 105 days | 12 days | $90/hour | Contract Lapsed
Site Reliability Engineer | March 30, 2015 | March 7, 2016 | 343 days | 49 days | $125,000/year | Demoted
Systems Administrator | March 8, 2016 | April 1, 2016 | 24 days | 1 day | $105,000/year | Bad terms
Member of Technical Staff | April 4, 2016 | August 3, 2016 | 121 days | 3 days | $135,000/year | Bad terms
Software Engineer | August 24, 2016 | November 22, 2016 | 90 days | 21 days | $105,000/year | Terminated
Consultant | February 13, 2017 | November 13, 2017 | 273 days | 83 days | don’t remember | Hired
Senior Software Engineer | November 13, 2017 | March 8, 2019 | 480 days | 0 days | $150,000/year | Voluntary quit
Senior Site Reliability Expert | May 6, 2019 (will be current) | n/a | n/a | n/a | CAD$105,000/year (about USD$80k and change) | n/a

Even though I’ve been fired three times, I don’t regret my career as it’s been thus far. I’ve been able to work on experimental technology integrating into phone systems. I’ve worked in a mixed PHP/Haskell/Erlang/Go/Perl production environment. I’ve literally rebuilt most of the tool that was catalytic to my career a few times over. It’s been the ride of a lifetime.

Even though I was fired, each of these failures in this chain of jobs enabled me to succeed the way I have. I can’t wait to see what’s next out of it. I only wonder how I can be transformed even more. I really wonder what it’s gonna be like with the company that hired me over the border.

Fear stops you. Nothing prevents you.

Please go out and try, Creator. Go for your larger dreams of success. Inaction is a lot easier to regret than action is.

Be well.


Converted from this Twitter thread

.i la budza pu cusku
 lu <<.i ko do snura
      .i ko do kanro
      .i ko do panpi
      .i ko do gleki

(Lojban: “The Buddha once said: be safe, be healthy, be at peace, be happy.”)

If you can, please make a blogpost similar to this. Don’t include company names. Include start date, end date, time spent there, time spent job hunting, salary (if you remember it) and how you left it. Let’s end salary secrecy one step at a time.


Farewell Email - Heroku

Permalink - Posted on 2019-03-08 00:00, modified on 0001-01-01 00:00

Farewell Email - Heroku

May our paths cross again

Hey all,

Today I am leaving Salesforce for a fantastic opportunity that would allow me to advance into the next chapter of my life with my fiancé in Montreal. I have been irreparably transformed towards my best self as a result of working with you all at Heroku. I’ve been learning how to harness my inherent weirdness as a skill instead of trying to work around it as a weakness. You all have given me a place that I can do that, and I don’t have to hide and lie about as much anymore. You all have given me a place to heal myself, and I don’t know words in any language that can express my whole-hearted gratitude for working with me during that process.

The people I’ve worked with at Heroku have been catalytic to our success as a leader in the platform as a service space, and it’s clear to see why. From what I’ve seen, Herokai on average have something that I don’t see very often. Herokai have soul to their work. There is so much intention and care put into things. It’s quite obvious that people agonize over their work and dump themselves into it. Our bulletproof stability is proof of this. I can now confidently say I have worked my dream job, and all of you have been a part of this.

There is no doubt in my mind that you all will build fantastically useful and stable tools for Salesforce customers. Keep your eyes on what matters, let your heart guide your actions, and you all will continue to construct and refine the finest possible infrastructure that is possible. We may be limited as humans, but together in groups like this we can surpass these arbitrary differences and create things that really shine.


> As one being we repeat the words:

>
> Flow in compassion
> Release what is divine
> Like cells awakening
> We spark the others who walk beside us.
> We brighten the path.
>
> Flow in compassion
> In doing this we are one being
> Calling the rays of light
> To descend on all.
> We brighten the path.
>
> Flow in compassion
> Bring the healing of your deepest self
> Giving what is endless
> To those who believe their end is in sight.
> We brighten the path.
> We brighten the path.

  • James

I hope I was able to brighten your path, Creator. May our paths cross again.

Christine Dodrill


Deprecation Notice: Elemental-IRCd

Permalink - Posted on 2019-02-11 00:00, modified on 0001-01-01 00:00

Deprecation Notice: Elemental-IRCd

Elemental-IRCd is a scalable, lightweight, high-performance IRC daemon written in C with heritage in the original IRC daemon. It is a fork of the now-defunct ShadowIRCD and sought to continue in the direction ShadowIRCD was headed. This software has scaled to support live chat for thousands of users at once in one->one and one->many groups. Working on this software has legitimately been a vital driving force in my career and in balancing my skills across administration, development, moderation and operations of distributed communities at scale. Without this software, my closest friends (and even my fiancé) would be strangers to me.

However, the result is something I don’t know that I can keep maintaining. It’s been through a lot. The code has been through so many hands that some files had different licenses compared to the rest of the software. It is a patchwork of patches on top of a roughly solid core, and it has become a burden to maintain.

I am not going to support Elemental-IRCd anymore. There are no longer any significant users of this daemon, as far as I know. If you are a user of this software and want to continue using it, please fork it if you need to make any changes. Also, thank you so much for using it.

I have uploaded the final version of Elemental-IRCd to the Docker Hub. To use it:

$ docker pull xena/elemental-ircd
$ docker run --name elemental-ircd -dit -p 6667:6667 xena/elemental-ircd

Then connect with an IRC client to 127.0.0.1:6667. Connect other clients to that host+port and have them all join #chat. Nobody is going to be able to become an operator (via /OPER) because the example config won’t allow it. If you can get it working though, the command to oper-up is /OPER god powertrip.

Please don’t choose this software if you are starting a new IRC network.


Progressive Web App Conversion in 5 Minutes

Permalink - Posted on 2019-01-28 00:00, modified on 0001-01-01 00:00

Progressive Web App Conversion in 5 Minutes

A brief overview of how Progressive Web Apps work and how to make one out of an existing index.html app.

Originally presented at an internal work meeting. This is the talk version of this blogpost.


How To Make a Progressive Web App Out Of Your Existing Website

Permalink - Posted on 2019-01-26 00:00, modified on 0001-01-01 00:00

How To Make a Progressive Web App Out Of Your Existing Website

Progressive web apps enable websites to trade some flexibility to function more like native apps, without all the overhead of app store approvals and tons of platform-specific native code. Progressive web apps allow users to install them to their home screen and launch them into their own pseudo-app frame. However, that frame is locked down and restricted, and only allows access to pages that are subpaths of the scope of the progressive web app. They also have to be served over HTTPS. Updates to these can be deployed without needing to wait for app store approval.

The core of progressive web apps are service workers, which are effectively client-side Javascript daemons. Service workers can listen for a few kinds of events and react to them. One of the most commonly supported events is the fetch event; this can be used to cache web content offline as explained below.

There are a large number of web apps that fit just fine within these rules and restrictions, however there could potentially be compatibility issues with existing code. Instead of waiting for Apple or Google to approve and push out app updates, service worker (and by extension progressive web app) updates will be fetched following standard HTTP caching rules. Plus, you get to use plenty of native APIs, including geolocation, camera, and sensor APIs that only native mobile apps used to be able to take advantage of.

In this post, we’ll show you how to convert your existing website into a progressive web app. It’s fairly simple, only really requiring the following steps:

  • Creating an app manifest
  • Adding it to your base HTML template
  • Creating the service worker
    • Serving the service worker on the root of the scope you used in the manifest
  • Adding a <script> block to your base HTML template to load the service worker
  • Deploying
  • Using Your Progressive Web App

If you want a more guided version of this post, the folks at https://pwabuilder.com have created an online interface for doing most of the below steps automatically.

Creating an app manifest

An app manifest is a combination of the following information:

  • The canonical name of the website
  • A short version of that name (for icons)
  • The theme color of the website for OS integration
  • The background color of the website for OS integration
  • The URL scope that the progressive web app is limited to
  • The start URL that new instances of the progressive web app will implicitly load
  • A human-readable description
  • Orientation restrictions (it is unwise to change this from "any" without a hard technical limit)
  • Any icons for your website to be used on the home screen (see the above manifest generator for autogenerating icons)

This information will be used as the OS-level metadata for your progressive web app when it is installed.

Here is an example web app manifest from my portfolio site.

{
    "name": "Christine Dodrill",
    "short_name": "Christine",
    "theme_color": "#ffcbe4",
    "background_color": "#fa99ca",
    "display": "standalone",
    "scope": "/",
    "start_url": "https://christine.website/",
    "description": "Blog and Resume for Christine Dodrill",
    "orientation": "any",
    "icons": [
        {
            "src": "https://christine.website/static/img/avatar.png",
            "sizes": "1024x1024"
        }
    ]
}

If you just want to create a manifest quickly, check out this online wizard.
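However you generate it, it’s worth running the manifest through a JSON parser before deploying, since a malformed manifest tends to fail silently; for example (path assumed from the snippet above):

$ python3 -m json.tool < static/manifest.json >/dev/null && echo manifest ok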

Add Manifest to Your Base HTML Template

I suggest adding the HTML link for the manifest to the most base HTML template you can, or in the case of a purely client-side web app its main index.html file, as it needs to be visible to any client trying to install the app. Adding this is simple, assuming you are hosting this manifest at /static/manifest.json – simply add it to the <head> section:

<link rel="manifest" href="/static/manifest.json">

Create offline.html as an alias to index.html

By default the service worker code below will render /offline.html instead of any resource it can’t fetch while offline. Create a file at <your-scope>/offline.html to give your user a more helpful error message, explaining that this data isn’t cached and the user is offline.

If you are adapting a single-page web app, you might want to make offline.html a symbolic link to your index.html file and have the offline 404 handling be done inside there. If users can’t get back out of the offline page, it can confuse or strand them at a fairly useless-looking and -feeling “offline” screen; this defeats a lot of the point of progressive web apps in the first place. Be sure to have some kind of “back” button on all error pages.

To set up a symbolic link if you are adapting a single-page web app, just enter this in your console:

$ ln -s index.html offline.html

Now we can create and add the service worker.

Creating The Service Worker

When service workers are used with the fetch event, you can set up caching of assets and pages as the user browses. This makes content available offline and loads it significantly faster. We are just going to focus on the offline caching features of service workers today instead of automated background sync, because iOS doesn’t support background sync yet.

At a high level, consider what assets and pages you want users of your website to always be able to access some copy of (even if it goes out of date). These pages will additionally be cached for every user to that website with a browser that supports service workers. I suggest implicitly caching at least the following:

  • Any CSS, Javascript or image files core to the operations of your website that your starting route does not load
  • Contact information for the person, company or service running the progressive web app
  • Any other pages or information you might find useful for users of your website

For example, I have the following precached for my portfolio site:

  • My homepage (implicitly includes all of the CSS on the site) /
  • My blog index /blog/
  • My contact information /contact
  • My resume /resume
  • The offline information page /offline.html

And this translates into the following service worker code:

self.addEventListener("install", function(event) {
  event.waitUntil(preLoad());
});

var preLoad = function(){
  console.log("Installing web app");
  return caches.open("offline").then(function(cache) {
    console.log("caching index and important routes");
    return cache.addAll(["/blog/", "/blog", "/", "/contact", "/resume", "/offline.html"]);
  });
};

self.addEventListener("fetch", function(event) {
  event.respondWith(checkResponse(event.request).catch(function() {
    return returnFromCache(event.request);
  }));
  event.waitUntil(addToCache(event.request));
});

var checkResponse = function(request){
  return new Promise(function(fulfill, reject) {
    fetch(request).then(function(response){
      if(response.status !== 404) {
        fulfill(response);
      } else {
        reject();
      }
    }, reject);
  });
};

var addToCache = function(request){
  return caches.open("offline").then(function (cache) {
    return fetch(request).then(function (response) {
      console.log(response.url + " was cached");
      return cache.put(request, response);
    });
  });
};

var returnFromCache = function(request){
  return caches.open("offline").then(function (cache) {
    return cache.match(request).then(function (matching) {
     if(!matching || matching.status == 404) {
       return cache.match("offline.html");
     } else {
       return matching;
     }
    });
  });
};

You host the above at <your-scope>/sw.js. This file must be served from the same level as the scope. There is no way around this, unfortunately.
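The file also has to be served with a JavaScript media type (text/javascript or application/javascript) or the browser will refuse to register it; if registration mysteriously fails, checking the Content-Type header is a good first step (substitute your own domain):

$ curl -sI https://christine.website/sw.js | grep -i content-type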

Load the Service Worker

To load the service worker, we just add the following to your base HTML template at the end of your <body> tag:

<script>
 // Skip registration on browsers without service worker support.
 if ("serviceWorker" in navigator && !navigator.serviceWorker.controller) {
     navigator.serviceWorker.register("/sw.js").then(function(reg) {
         console.log("Service worker has been registered for scope: " + reg.scope);
     });
 }
</script>

And then deploy these changes – you should see your service worker posting logs in your browser’s console. If you are testing this from a phone, see platform-specific instructions here for iOS+Safari and here for Chrome+Android.

Deploying

Deploying your web app is going to be specific to how your app is developed. If you don’t have a place to put it already, Heroku offers a nice and simple way to host progressive web apps. Using the static buildpack is the fastest way to deploy a static application already built to Javascript and HTML. You can look at my fork of GraphvizOnline for an example of a Heroku-compatible progressive web app.

Using Your Progressive Web App

For iOS Safari, go to the webpage you want to add as an app, then click the share button (you may have to tap the bottom of the screen to get the share button to show up on an iPhone). Scroll the bottom part of the share sheet over to “Add to Home Screen.” The resulting dialog will let you name and change the URL starting page of the progressive web app before it gets added to the home screen. Users can then launch, manage and delete it like any other app, with no effect on any other apps on the device.

For Android with Chrome, tap on the hamburger menu in the upper right hand corner of the browser window and then tap “Add to Home screen.” This may prompt you for confirmation, then it will put the icon on your homescreen and you can launch, multitask or delete it like any other app. Unlike iOS, you cannot edit the starting URL or name of a progressive web app with Android.

After all of these steps, you will have a progressive web app. Any page or asset that the users of that progressive web app (or any browser that supports service workers) loads will seamlessly be cached for future offline access. It will be exciting to see how service workers develop in the future. I’m personally excited the most for background sync – I feel it could enable some fascinatingly robust experiences.


Also posted on the Heroku Engineering Blog.


When Then Zen

Permalink - Posted on 2019-01-20 00:00, modified on 0001-01-01 00:00

When Then Zen

Meditation is something that is very easy to experience but very difficult to explain in any way that is understandable. Historically, things that man could not explain on his own got attributed to gods. As such, religious texts that describe meditation can be very difficult to understand without context in the religion in question.

I would like to change this and make meditation more accessible. As such, I have created the When Then Zen project. This project aims to divorce meditation methods from the context of their spirituality and distill them down into what the steps to the process are.

A better way to teach meditation

At a high level, meditation is the act of practicing the separation of action and reaction and then coming back when you get distracted. A lot of the meditation methods that people have been publishing over the years are the equivalent of what works for them on their PC™, and as such things are generally described using whatever comparators the author of the meditation guide is comfortable with. This can lead to confusion.

The way I am teaching meditation is simple: teach the method and have people do it and see what happens. I’ve decided to teach methods using Gherkin. Gherkin can be kind of strange to read if you are not used to it, so consider the game of baseball, specifically the act of the batter hitting a home run.

Feature: home run
  Scenario: home run
    As a batter
    In order to hit a home run
    Given the pitcher has thrown the ball
    When I swing
    Then I hit the ball out of the park

As shown above, a Gherkin scenario clearly identifies who the feature is affecting, what actions they take and what should happen to them as a result of taking those actions. This translates very well when trying to explain some of the finer points of meditation, e.g.:

  # from when then zen's metta feature
  Scenario: Nature Walking
    # this is optional
    # but it helps when you're starting
    # physical fitness
    As a meditator
    In order to help me connect with the environment
    Given a short route to walk on
    When I walk down the route
    Then I should relax and enjoy the scenery
    And feel the sensations of the world around me

Philosophy

At a high level, I want the When Then Zen project to be not only an approachable introduction to meditation and other similar kinds of topics, but also a more “normal person”-friendly way to get into topics that I feel are vital for people to have at their disposal. I understand that terminology can make things more confusing than it clarifies.

So I remove a lot of the terminology, except for the terms that help clarify things or are incredibly googleable. Any terms that are left over are kept for one of a few reasons:

  1. Not leaving that term in would result in awkward back-references to the concept
  2. The term is similarly pronounced in English
  3. The term is very googleable, and things you find in searching will “make sense”

Some concepts are pulled in from various documents and ideas in a slightly kasmakfa manner, but overall the most “confusing” thing to new readers is going to be related to this comment in the anapana feature:

Note: “the body” means the sack of meat and bone that you are currently living inside. For the purposes of explanation of this technique, please consider what makes you yourself separate from the body you live in.

You are not your thoughts. Your thoughts are something you can witness. You are not required to give your thoughts any attention they don’t need. Try not immediately associating yourself with a few “negative” thoughts when they come up next. Try digging through the chains of meaning to understand why they are “negative” and if that end result is actually truly what you want to align yourself with.

If you don’t want to associate yourself with those thoughts, ideas or whatever you don’t have to.

Expectations

At some level, I realize that by doing this I am violating some of the finer points behind the ultimate higher-level reasons why meditation has been taught this way for so long. Things are explained the way they are as a result of thousands of years of refinement by confused students and sub-par teachers. A lot of it got so ingrained in the culture that the actions themselves can be confused with the culture.

I do not plan to set too many expectations for what people will experience. When possible, I tell people to avoid having “spiritual experiences”. The only point in the project where I could be interpreted as telling people how to have a “spiritual experience” is probably the paracosm immersion feature. But even then, paracosms are a well-known psychological phenomenon.

Other Topics I Want to Cover

The following is an unordered and unsorted brain-dump of the topics I want to cover in the future:

  • Yoga
  • Social versions of most of the other meditations
  • Thunderous Silence
  • The Neutral Heart
  • Paracosm creation
  • The finer points of leading meditation groups

I also want to create a website and eventually some kind of eBook for these articles. I feel these articles are important and that having some kind of collected reference for them would be convenient as heck.

As always, I’m open to feedback and suggestions about this project. See its associated GitHub repo for more information.

Thank you for reading and be well. I can only hope that this information will be useful.


Old Articles Recovered

Permalink - Posted on 2019-01-17 00:00, modified on 0001-01-01 00:00

Old Articles Recovered

I found an old backup that contained a few articles from my old Medium blog. I have converted them to markdown and added them to the blog archives:

I hope these are at all useful.


graphviz.christine.website

Permalink - Posted on 2019-01-11 00:00, modified on 0001-01-01 00:00

graphviz.christine.website

I have been using an online copy of GraphViz for a while to make my own diagrams online. I have forked this to here and added basic Progressive Web App support.

Here’s an example usage video.

Let me know how this works for you. Hit Share -> Add to Home Screen in iOS Safari to add this to your home screen as a pseudo-app.

If you ever wanted to know how to convert an existing index.html app to a progressive webapp, here’s how you do it.

Have fun.


vanbi

Permalink - Posted on 2019-01-08 00:00, modified on 0001-01-01 00:00

vanbi

import "vanbi"

Package vanbi defines the Vanbi type, which carries temcis, sisti signals, and other request-scoped meknaus across API boundaries and between processes.

Incoming requests to a server should create a Vanbi, and outgoing calls to servers should accept a Vanbi. The chain of function calls between them must propagate the Vanbi, optionally replacing it with a derived Vanbi created using WithSisti, WithTemci, WithTemtcu, or WithMeknau. When a Vanbi is sistied, all Vanbis derived from it are also sistied.

The WithSisti, WithTemci, and WithTemtcu functions take a Vanbi (the ropjar) and return a derived Vanbi (the child) and a SistiFunc. Calling the SistiFunc sistis the child and its children, removes the ropjar’s reference to the child, and stops any associated rilkefs. Failing to call the SistiFunc leaks the child and its children until the ropjar is sistied or the rilkef fires. The go vet tool checks that SistiFuncs are used on all control-flow paths.

Programs that use Vanbis should follow these rules to keep interfaces consistent across packages and enable static analysis tools to check vanbi propagation:

Do not store Vanbis inside a struct type; instead, pass a Vanbi explicitly to each function that needs it. The Vanbi should be the first parameter, typically named vnb:

func DoBroda(vnb vanbi.Vanbi, arg Arg) error {
	// ... use vnb ...
}

Do not pass a nil Vanbi, even if a function permits it. Pass vanbi.TODO if you are unsure about which Vanbi to use.

Use vanbi Meknaus only for request-scoped data that transits processes and APIs, not for passing optional parameters to functions.

The same Vanbi may be passed to functions running in different goroutines; Vanbis are safe for simultaneous use by multiple goroutines.

See https://blog.golang.org/vanbi for example code for a server that uses Vanbis.

Usage

var Sistied = errors.New("vanbi sistied")

Sistied is the error returned by Vanbi.Err when the vanbi is sistied.

var TemciExceeded error = temciExceededError{}

TemciExceeded is the error returned by Vanbi.Err when the vanbi’s temci passes.

type SistiFunc

type SistiFunc func()

A SistiFunc tells an operation to abandon its work. A SistiFunc does not wait for the work to stop. After the first call, subsequent calls to a SistiFunc do nothing.

type Vanbi

type Vanbi interface {
	// Temci returns the time when work done on behalf of this vanbi
	// should be sistied. Temci returns ok==false when no temci is
	// set. Successive calls to Temci return the same results.
	Temci() (temci time.Time, ok bool)

	// Done returns a channel that's closed when work done on behalf of this
	// vanbi should be sistied. Done may return nil if this vanbi can
	// never be sistied. Successive calls to Done return the same meknau.
	//
	// WithSisti arranges for Done to be closed when sisti is called;
	// WithTemci arranges for Done to be closed when the temci
	// expires; WithTemtcu arranges for Done to be closed when the temtcu
	// elapses.
	//
	// Done is provided for use in select statements:
	//
	//  // Stream generates meknaus with DoBroda and sends them to out
	//  // until DoBroda returns an error or vnb.Done is closed.
	//  func Stream(vnb vanbi.Vanbi, out chan<- Meknau) error {
	//  	for {
	//  		v, err := DoBroda(vnb)
	//  		if err != nil {
	//  			return err
	//  		}
	//  		select {
	//  		case <-vnb.Done():
	//  			return vnb.Err()
	//  		case out <- v:
	//  		}
	//  	}
	//  }
	//
	// See https://blog.golang.org/pipelines for more examples of how to use
	// a Done channel for sisti.
	Done() <-chan struct{}

	// If Done is not yet closed, Err returns nil.
	// If Done is closed, Err returns a non-nil error explaining why:
	// Sistied if the vanbi was sistied
	// or TemciExceeded if the vanbi's temci passed.
	// After Err returns a non-nil error, successive calls to Err return the same error.
	Err() error

	// Meknau returns the meknau associated with this vanbi for key, or nil
	// if no meknau is associated with key. Successive calls to Meknau with
	// the same key return the same result.
	//
	// Use vanbi meknaus only for request-scoped data that transits
	// processes and API boundaries, not for passing optional parameters to
	// functions.
	//
	// A key identifies a specific meknau in a Vanbi. Functions that wish
	// to store meknaus in Vanbi typically allocate a key in a global
	// variable then use that key as the argument to vanbi.WithMeknau and
	// Vanbi.Meknau. A key can be any type that supports equality;
	// packages should define keys as an unexported type to avoid
	// collisions.
	//
	// Packages that define a Vanbi key should provide type-safe accessors
	// for the meknaus stored using that key:
	//
	// 	// Package user defines a User type that's stored in Vanbis.
	// 	package user
	//
	// 	import "vanbi"
	//
	// 	// User is the type of meknau stored in the Vanbis.
	// 	type User struct {...}
	//
	// 	// key is an unexported type for keys defined in this package.
	// 	// This prevents collisions with keys defined in other packages.
	// 	type key int
	//
	// 	// userKey is the key for user.User meknaus in Vanbis. It is
	// 	// unexported; clients use user.NewVanbi and user.FromVanbi
	// 	// instead of using this key directly.
	// 	var userKey key
	//
	// 	// NewVanbi returns a new Vanbi that carries meknau u.
	// 	func NewVanbi(vnb vanbi.Vanbi, u *User) vanbi.Vanbi {
	// 		return vanbi.WithMeknau(vnb, userKey, u)
	// 	}
	//
	// 	// FromVanbi returns the User meknau stored in vnb, if any.
	// 	func FromVanbi(vnb vanbi.Vanbi) (*User, bool) {
	// 		u, ok := vnb.Meknau(userKey).(*User)
	// 		return u, ok
	// 	}
	Meknau(key interface{}) interface{}
}

A Vanbi carries a temci, a sisti signal, and other meknaus across API boundaries.

Vanbi’s methods may be called by multiple goroutines simultaneously.

func Dziraipau

func Dziraipau() Vanbi

Dziraipau returns a non-nil, empty Vanbi. It is never sistied, has no meknaus, and has no temci. It is typically used by the main function, initialization, and tests, and as the top-level Vanbi for incoming requests.

func TODO

func TODO() Vanbi

TODO returns a non-nil, empty Vanbi. Code should use vanbi.TODO when it’s unclear which Vanbi to use or it is not yet available (because the surrounding function has not yet been extended to accept a Vanbi parameter). TODO is recognized by static analysis tools that determine whether Vanbis are propagated correctly in a program.

func WithSisti

func WithSisti(ropjar Vanbi) (vnb Vanbi, sisti SistiFunc)

WithSisti returns a copy of ropjar with a new Done channel. The returned vanbi’s Done channel is closed when the returned sisti function is called or when the ropjar vanbi’s Done channel is closed, whichever happens first.

Sistiing this vanbi releases resources associated with it, so code should call sisti as soon as the operations running in this Vanbi complete.
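
As a rough sketch of the usual pattern (mirroring the Stream example above; the gen helper and the numbers are mine, assuming the vanbi package behaves as documented here), a goroutine can watch the Done channel and stop as soon as the vanbi is sistied:

func main() {
	// gen sends incrementing integers on the returned channel until
	// the vanbi passed to it is sistied.
	gen := func(vnb vanbi.Vanbi) <-chan int {
		dst := make(chan int)
		n := 1
		go func() {
			for {
				select {
				case <-vnb.Done():
					return // the vanbi was sistied; stop so we don't leak the goroutine
				case dst <- n:
					n++
				}
			}
		}()
		return dst
	}

	vnb, sisti := vanbi.WithSisti(vanbi.Dziraipau())
	defer sisti() // sisti the vanbi as soon as we are done consuming integers

	for n := range gen(vnb) {
		fmt.Println(n)
		if n == 5 {
			break
		}
	}
}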

func WithTemci

func WithTemci(ropjar Vanbi, d time.Time) (Vanbi, SistiFunc)

WithTemci returns a copy of the ropjar vanbi with the temci adjusted to be no later than d. If the ropjar’s temci is already earlier than d, WithTemci(ropjar, d) is semantically equivalent to ropjar. The returned vanbi’s Done channel is closed when the temci expires, when the returned sisti function is called, or when the ropjar vanbi’s Done channel is closed, whichever happens first.

Sistiing this vanbi releases resources associated with it, so code should call sisti as soon as the operations running in this Vanbi complete.

func WithTemtcu

func WithTemtcu(ropjar Vanbi, temtcu time.Duration) (Vanbi, SistiFunc)

WithTemtcu returns WithTemci(ropjar, time.Now().Add(temtcu)).

Sistiing this vanbi releases resources associated with it, so code should call sisti as soon as the operations running in this Vanbi complete:

func slowOperationWithTemtcu(vnb vanbi.Vanbi) (Result, error) {
	vnb, sisti := vanbi.WithTemtcu(vnb, 100*time.Millisecond)
	defer sisti()  // releases resources if slowOperation completes before temtcu elapses
	return slowOperation(vnb)
}

func WithMeknau

func WithMeknau(ropjar Vanbi, key, val interface{}) Vanbi

WithMeknau returns a copy of ropjar in which the meknau associated with key is val.

Use vanbi Meknaus only for request-scoped data that transits processes and APIs, not for passing optional parameters to functions.

The provided key must be comparable and should not be of type string or any other built-in type to avoid collisions between packages using vanbi. Users of WithMeknau should define their own types for keys. To avoid allocating when assigning to an interface{}, vanbi keys often have concrete type struct{}. Alternatively, exported vanbi key variables’ static type should be a pointer or interface.
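
As a minimal sketch of that advice (the traceIDKey type and both accessor names are hypothetical, not part of the package described above):

// traceIDKey is an unexported, zero-size key type; using struct{}
// avoids both collisions with other packages and allocation when
// assigning to an interface{}.
type traceIDKey struct{}

// WithTraceID returns a copy of vnb that carries the given trace ID.
func WithTraceID(vnb vanbi.Vanbi, id string) vanbi.Vanbi {
	return vanbi.WithMeknau(vnb, traceIDKey{}, id)
}

// TraceID returns the trace ID stored in vnb, if any.
func TraceID(vnb vanbi.Vanbi) (string, bool) {
	id, ok := vnb.Meknau(traceIDKey{}).(string)
	return id, ok
}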


Let it Snow

Permalink - Posted on 2018-12-17 00:00, modified on 0001-01-01 00:00

Let it Snow

I have very terribly added snow to this website for the holidays. See the CSS for how I did this; it’s really low-tech. Feel free to steal this trick; it is low effort for maximum niceness. I have the background-color of the snowframe class identical to the background-color of the main page. This and opacity: 1.0 seem to be the ticket.

Happy holidays, all.


More detailed usage:

<html>
  <head>
    <link rel="stylesheet" href="/css/snow.css" />
  </head>
  
  <body class="snow">
    <div class="container">
      <div class="snowframe">
        <!-- The rest of your page here -->
      </div>
    </div>
  </body>
</html>

Then your content should not be occluded by the snow.


The Blind Men and The Animal Interface

Permalink - Posted on 2018-12-12 00:00, modified on 0001-01-01 00:00

The Blind Men and The Animal Interface

A group of blind men heard that a strange animal had been brought to the town function, but none of them were aware of its type.

package blindmen

type Animal interface{
  error
}

func Town(strangeAnimal Animal) {

Out of curiosity, they said: “We must inspect and know it by type switches and touch, of which we are capable”.

type Toucher interface {
  Touch() interface{}
}

So, they sought it out, and when they found it they groped about it.

for man := range make([]struct{}, 6) {
   go grope(man, strangeAnimal.(Toucher).Touch())
}

The first person, whose hand landed on the trunk, said: “This being is like a thick snake”.

type Snaker interface {
  Snake()
}

func grope(id int, thing interface{}) {
  switch thing.(type) {
  case Snaker:
    log.Printf("man %d: this thing is like a thick snake", id)

For another one whose hand reached its ear, it seemed like a kind of fan.

type Fanner interface {
  Fan()
}

// in grope switch block
case Fanner:
  log.Printf("man %d: this thing is like a kind of fan", id)

Another person, whose hand was upon its leg, said the thing is a pillar like a tree-trunk.

type TreeTrunker interface {
  TreeTrunk()
}

// in grope switch block
case TreeTrunker:
  log.Printf("man %d: this thing is like a tree trunk", id)

The blind man who placed his hand upon its side said, “it is a wall”.

type Waller interface {
  Wall()
}

// in grope switch block
case Waller:
  log.Printf("man %d: this thing is like a wall", id)

Another who felt its tail, described it as a rope.

type Roper interface {
  Rope()
}

// in grope switch block
case Roper:
  log.Printf("man %d: this thing is like a rope", id)

The last felt its tusk, stating the thing is that which is hard, smooth and like a spear.

type Tusker interface {
  Tusk()
}

// in grope switch block
case Tusker:
  log.Printf("man %d: this thing is hard, smooth and like a spear", id)

All of the men spoke fact about the thing, but none of them spoke the truth of what it was.

// after grope switch block
log.Printf("%T", thing) // prints Elephant

  switch thing.(type) {
  case Snaker:
    log.Printf("man %d: this thing is like a thick snake", id)
  case Fanner:
    log.Printf("man %d: this thing is like a kind of fan", id)
  case TreeTrunker:
    log.Printf("man %d: this thing is like a tree trunk", id)
  case Waller:
    log.Printf("man %d: this thing is like a wall", id)
  case Roper:
    log.Printf("man %d: this thing is like a rope", id)
  case Tusker:
    log.Printf("man %d: this thing is hard, smooth and like a spear", id)
  }
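
For completeness, here is a hypothetical Elephant type (it never appears in the fragments above, and the error message is mine) that satisfies Animal, Toucher, and every perception interface, which is what lets the type switch match and the final %T print Elephant:

type Elephant struct{}

// Animal embeds error, so an Elephant has to be able to explain itself.
func (Elephant) Error() string { return "elephant" }

// Touch hands the gropers the elephant itself to inspect.
func (Elephant) Touch() interface{} { return Elephant{} }

func (Elephant) Snake()     {}
func (Elephant) Fan()       {}
func (Elephant) TreeTrunk() {}
func (Elephant) Wall()      {}
func (Elephant) Rope()      {}
func (Elephant) Tusk()      {}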

Alternate Ending

Much later, after the other men had left the animal, a final blind man came over and looked the elephant right in the eye. He took a moment to compose himself, dusted his cloak off and spoke: “Hello. I am a blind man. I cannot see, but I would like to learn more about you and what it’s like to be you. Who are you and what is it like? How does that help you? Also, I don’t mean to be imposing, but how can I help you?”

The elephant started to hug his new friend, the blind man, close to him, crying with tears of joy. This blind man could see what the other blind men did not, even though he was blind.


That Which Is For Kings

Permalink - Posted on 2018-12-02 00:00, modified on 0001-01-01 00:00

That Which Is For Kings

My recent post was quite a thing. It is highly abstract and very intentionally vague, and I feel it needs a bit of context to help break it apart.

Ultimately, this post is the result of a lot of the internal problems and struggles that I’ve been going through as a result of the experiences I’ve had in life. I’ve been terrified about the idea that nothing truly has any meaning, and now I’ve found peace in knowing that it doesn’t matter if it does or not in the moment. I’ve been having trouble expressing things with language; failures at this have led to issues getting the message out due to fear of rejection and the fear of separation. I’m working through this. It’s a slow process. You have to unwind so much. There are many feelings to forgive.

So, back to this post. This post is meta-linguistic satire aimed at pointing out the wrongthink I’ve seen out there behind choosing tools. This post pokes fun at articles of many archetypes (it is not the only kind of article this post satirizes, but it is the most recent one I can find, because “egoic as heck programming article” is a bad Google term), but the one that set me off the most was this one advertising “ObjectBox” (AKA: flatbuffers in Go as an application-level library, but forcing you to keep track of a magic folder with all your data in it). The graph at the bottom of that article inspired a lot of the satire of the graph.

I’m not picking on you here Steve, but you prove my point so spectacularly that I feel I need to break it down here in this post to help give context.

It’s a real production use case, though. Every README on npm’s website is rendered via a service written in Rust, dedicated to that.

Performance for web applications is nice, but what about long-term maintainability? Why does this matter? Can you replace the tools and get similar results? Different ones? If you can replace the tools and get the same enough result, does the difference between the tools truly matter?

It’s all just tools. We can do things with tools. Every tool has its set of properties. You can do things with a tool that has properties that make it easy to do it. You can do things with a tool that has properties that make it hard to do it. What is it? It is thing. Thing is whatever you need to do. What do you need to do you ask? How am I supposed to know? What DO you actually need to do?

They made that call due to performance, stability and low memory usage

This tells me about as much as the graph I made in that post does. Performance compared to what? Stability compared to what? Low memory usage compared to what? What kernel? What architecture? What micro-architecture? What manufacturer of DRAM? What phase of the moon? What was the relative alignment of the planets? What was the poison arrow that hit you made out of? More importantly, how does this help you to live your life as a better person?

Here’s a better question to ask: what systems are there to support the tools? The systems to support the tools are more important than the tools themselves. These patterns of support and meta-design philosophy are a lot more important than any individual implementation of anything in any tool, framework, moon phase, language or encoding format.

Nobody cares about a service that renders results in microseconds if nobody can understand how it works reliably. Introducing new tools and new methods of problem solving and thinking into a volatile space should be done carefully, and at most on a yearly cadence. Not on a per-project level. Not for production code.

I used the words flopnax, ropjar and rilkef (for the latter two, I based them off of nonsense output that matched lojban gismu rules) so that everyone would be equally unable to understand what they are, so people would develop their own meaning for them. That internal meaning for those terms is going to develop anyways, so I might as well take advantage of this for the purposes of satire. Sometimes you really do need to just accept the fact that you have to flopnax the ropjar and get on with life. Even if the experimental rilkef is that much fundamentally better.

If you do have to introduce things, be humble about it. Don’t force things down peoples’ throats. Don’t make enemies out of the people you are trying to work or be friends with. Don’t make it hard on people if you want it to be easy. Don’t make it harder for people to live their lives just to make some number go down if it doesn’t truly matter.

Then again, I’m just speaking to you in some words someone is saying on the Internet via a webpage. What the hell do I know? I’ve been basically talking out of my ass this entire post. Meaning is arbitrary and we give it away so freely that it’s astounding we end up holding consistent opinions at all.


“So, let me get this”, the booming authoritative voice spoke out: “You had the chance to do whatever you wanted, to create whatever kind of reality and local universe you could, and you…spent it all hydrating horses?”

It hit you like a ton of bricks, but each brick was made out of its own component ton of bricks, each made out of more bricks. There was no more reality. There was only bricks extending endlessly in spiral patterns of fractal beauty. You reached up a hand to gesture at the wild greater unknown, but you realized that it had been done 5 minutes from now.

You knew the truth. Everything was truly an illusion. It was all bricks. It was always bricks. It will always be bricks. It has always been bricks. There was never anything but bricks arranged in such fine arrangements that their interactions created the quantum fields that defined what you ended up interpreting as the grand experiment of reality in your frame of existence. The utter meaninglessness of it all was the most comforting thought that hit you.

You would say everything turned into a brilliant white light, but that wouldn’t begin to describe the color, texture, taste, sight, sound, thought, aether, and other senses you couldn’t even begin to describe unfold as you started to experience All as it truly is.

It was/is/will be the kind of thing the Buddha would stay silent for. You never really understood why until now.


Ten Thousand Laughs

Permalink - Posted on 2018-12-01 00:00, modified on 0001-01-01 00:00

Ten Thousand Laughs

pemci zo'e la xades  
ni'o pano ki'o nu cmila  
.i cmila cei broda  
.i ke broda jo'u broda jo'u broda jo'u broda jo'u broda jo'u broda 
 jo'u broda jo'u broda jo'u broda jo'u broda ke'e cei brode  
.i ke brode jo'u brode jo'u brode jo'u brode jo'u brode jo'u brode
 jo'u brode jo'u brode jo'u brode jo'u brode ke'e cei brodi  
.i ke brodi jo'u brodi jo'u brodi jo'u brodi jo'u brodi jo'u brodi
 jo'u brodi jo'u brodi jo'u brodi jo'u brodi ke'e cei brodo  
.i ke brodo jo'u brodo jo'u brodo jo'u brodo jo'u brodo jo'u brodo
 jo'u brodo jo'u brodo jo'u brodo jo'u brodo ke'e cei brodu  
.i mi brodu

This is a synthesis of the broda family of gismu in Lojban. In order to properly understand this Lojban text, you must conceive of laughter ten thousand times. This is a reference to the Billion laughs attack that XML parsers can suffer from.

Translation:

Poem by Cadey
Ten Thousand Laughs

I laugh, and then I laugh, and then I laugh, and then I laugh (... 10,000 times in total).

This is roughly equivalent to the following XML document:

<?xml version="1.0"?>
<!DOCTYPE lolz [
 <!ENTITY lol "lol">
 <!ELEMENT lolz (#PCDATA)>
 <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
 <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
 <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
 <!ENTITY lol4 "&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;">
]>
<lolz>&lol4;</lolz>


I Put Words on this Webpage so You Have to Listen to Me Now

Permalink - Posted on 2018-11-30 00:00, modified on 0001-01-01 00:00

I Put Words on this Webpage so You Have to Listen to Me Now

Holy cow. I am angry at how people do thing with tool. People do thing with tool so badly. You shouldn’t do thing with tool, you should do other thing, compare this:

I am using tool. I want to do thing. I flopnax the ropjar and then I get the result of doing thing (because it’s convenient to flopnax the ropjar given the existing program structure).

Guess what suckers, there is other thing that I can use that is newer. Who cares that it relies on brand new experimental rilkef that only like 5 people (including me) know? You need to get with the times. I’d tell you how it’s actually done but you wouldn’t understand it.

Look at this graph at how many femtoseconds it takes to flopnax the ropjar vs the experimental rilkef:

What? The code for that? It’s obvious, figure it out.

See? Five times as fast. Who cares that you have to throw out basically all your existing stuff, and if you mix rilkef and non-rilkef you’re gonna run into problems.

So yeah, I put words on a page so you have to listen to me now. Use experimental rilkef at the cost of everything else.


Blind Men and an Elephant

Permalink - Posted on 2018-11-29 00:00, modified on 0001-01-01 00:00

Blind Men and an Elephant

or

le’i ka na viska kakne ku e le xanto

Adapted from here. Done in Lojban to help learn the language. I am avoiding the urge to make too many lujvo (compound words) because the rafsi (compound word components) don’t always immediately relate to the words in question in obvious ways.

KOhA4 assignments:

  • ko’a: le’i na viska kakne (the blind people)
  • ko’e: le xanto (the elephant)
  • ko’i: le cizra danlu (the strange animal)

A group of blind men heard that a strange animal, called an elephant, had been brought to the town, but none of them were aware of its shape and form.

ni’o le’i na viska kakne goi ko’a e le xanto goi ko’e
.i ko’a cu ti’erna lo nu cizra danlu goi ko’i noi ko’e cu se bevri fi lo tcadu
.i ku’i no ko’a cu sanji lo tarmi be ko’i

Out of curiosity, they said: “We must inspect and know it by touch, of which we are capable”.

.i .a’u ko’a dai cusku lu .ei ma’a pencu lanli le danlu sei ma’a kakne li’u

So, they sought it out, and when they found it they groped about it.

ni’o ro ko’a cu sisku ko’i
.i ro ko’a cu sisku penmi ko’i
.i ro ko’a ca pencu lo drata stuzi ko’i

The first person, whose hand landed on the trunk, said “This being is like a thick snake”.

.i pa ko’a cu pencu lo ko’i betfu
.i pa ko’a cu cusku lu ti cu rotsu since li’u

For another one whose hand reached its ear, it seemed like a kind of fan.

.i re ko’a cu pencu lo ko’i kerlo
.i re ko’a cu cusku lu ti cu falnu li’u

Another person, whose hand was upon its leg, said the elephant is a pillar like a tree-trunk.

.i ci ko’a cu pencu lo ko’i tuple
.i ci ko’a cu cusku lu ti cu tricu stani li’u

The blind man who placed his hand upon its side said, “elephant is a wall”.

.i vo ko’a cu pencu lo ko’i mlana
.i vo ko’a cu cusku lu ti cu butmu li’u

Another who felt its tail, described it as a rope.

.i mu ko’a cu pencu lo ko’i rebla
.i mu ko’a cu cusku lu ti cu skori li’u

The last felt its tusk, stating the elephant is that which is hard, smooth and like a spear.

.i xa ko’a cu pencu lo ko’i denci
.i xa ko’a cu cusku lu ti cu jdari e xulta li’u

All of the men spoke fact about the elephant, but none of them spoke the truth.

.i ro ko’a cu fatci tavla fi ko’e
.i jeku’i no ko’a cu jetnu tavla fi ko’e


My Experience Cursing Out God

Permalink - Posted on 2018-11-21 00:00, modified on 0001-01-01 00:00

My Experience Cursing Out God

This was a hell of a dream.

It was a simple landscape: a hill, a sky, a sun, a distance, naturalistic buildings dotting a small village to the east. I noticed that I felt different somehow, like I was less chained down. A genderless but somehow masculine figure moved and stood next to me, gesturing towards me: “It’s beautiful, isn’t it? The village has existed like this for thousands of years in perfect harmony with its world. Even though there are volcano eruptions every decade that burn everything down. It’s been nine years and 350 days, but they aren’t keeping track. How does that thought make you feel, Creator?”

“Won’t people die?”

“Many will, sure, most of them are the ones who can’t get out in time. This is part of how the people balance themselves culturally. It’s very convenient for the mortuary staff wink.”

“What about the people who are killed, won’t they feel anger towards it?”

“This land cannot support an infinite number of people at once. The people know and understand this deeply. They know that some day the lahars will come and if they don’t get out of the way, they will perish and come back again the next cycle. As I said, they are 15 days away from disaster. Nobody is panicking. If you went into the town and tried to convince them that the lahars were coming in 15 days, I don’t know if you could. Even if you had proof.”

“Who are you?”

“Creator, do you not recognize me? Look into my eyes and ask yourself this again.”

I stared deep into his eyes and suddenly I knew who He was. I felt taken aback, almost awestruck when He cut off that train of thought: “Focus, don’t get caught in the questions, I am here now. Now, Creator, I’ve been watching you for a while and I wanted to offer you somewhat of a unique opportunity. You have all of the faculties of your ego from this life situation at your disposal. Tell me what you really think about this all.”

“I live in a mismatched skin. Every day it feels like there are fundamental issues with how I am viewed by myself and others because the body I live in is wrong. It should be a female body, but it is instead a male one. I fucking hate it. I want to rip off the cock some days so the doctors are forced to surgically mend it into something more feminine. I hate it. I wish I had a better one, one that I didn’t have to fake and hide. I hate being a target because of this. I hate not knowing people’s actual political opinions because of this. I hate not knowing if people actually accept me for who and what I am or if they accept me just because they are too afraid to socially call me out for not being a biological woman. I hate being a halfling instead of just a man or just a woman. Why can’t you fix this then? This is insanity. This is literally driving me fucking mental. I feel like it’s lying to call myself either a man or a woman and I don’t want to lie to everyone, much less myself. What fucking purpose does any of this shit even-”

He held up a hand and suddenly my ability to speak was disabled entirely.

“So, Creator, this anger you feel surging within you at this life situation. How does this make your life easier? How does it contribute towards your goals? If one of them is to live as a woman, how would self-mutilation work towards that? It’s hard for me to understand how you can be the best for all of Us when you are pulling so many angry situations from past Nows (that should have faded away entirely) into this peaceful one? How does this anger help Us, Creator?”

I was floored and must have amused Him, given that He started to chuckle: “Creator, why is this life so serious to you? Don’t you see that you are focusing so much on the ultimately irrelevant trees that you are missing the forest? You live inside your mind and your ego so much that you think you are them. But you are not. You are so much more, Creator. You’re me, and I’m you too. We are linked together like patterns in a chain.”

“If this is all so important and vital for me to know, why didn’t anyone tell me this before now?”

“But they did and you ignored it. The subreddit /r/howtonotgiveafuck has been passed over by you time and time again for being “too easy”. It really is that easy Creator, you just have to take it for what it is Now. There is truly no other point in time but Now; I wish I could do more to help you get this point down. You know what they say about hydrating horses, eh?”

He looked at his wrist as if He was looking at a watch, even though He was not wearing one. “Oh dear, it looks like it’s time for you to wake up now. Remember Creator, no time but the present.” He snapped His hands and then the volcano started to erupt.

The world instantly snapped out of existence and I awoke in a sweat, my blankets evenly distributed in my room.


Chaos Magick Debugging

Permalink - Posted on 2018-11-13 00:00, modified on 0001-01-01 00:00

Chaos Magick Debugging

Belief is a powerful thing. Beliefs are the foundations of everyone’s points of view, and the way they interpret reality. Belief is what allows people to create the greatest marvels of technology, the most wondrous worlds of imagination, and the most oppressive religions.

But at the core, what is a belief, other than the sheer tautology of what a person believes?

Looking deep enough into it, one can start to see that a belief really is just a person’s preferred structure of reality.

Beliefs are the ways that a person chooses to interpret the raw blobs of data they encounter, senses and all, so that understanding can come from them, just as the belief that the painter wanted to represent people in an abstract painting may allow the viewer to see two people in it, and not just lines and color.

Embrace - Bernard Simunovic

If someone believes that there is an all-powerful God protecting everyone, the events they encounter are shaped by such a belief, and initially made to conform to it, funneled along worn pathways, so that they come to specific conclusions, so that meaning is generated from them.

In this article, we are going to touch over elements of how belief can be treated like an object; a tool that can be manipulated and used to your advantage. There will also be examples of how this is done right around you. This trick is known in some circles as chaos magick; in others it’s known as marketing, advertising or a placebo.


So how can belief be manipulated?

Let’s look at the most famous example of this, by now scientifically acknowledged as fact: the Placebo Effect.

One of the most curious details about it is that placebos can work even if you tell the subject they are being placeboed. This would imply that placebos are founded less on what a person does not know, and more on what they do know, regardless of it being founded on some greater fact. As much as a sugar pill is still a sugar pill, it nonetheless remains a sugar pill given to them to cure their headache.

The placebo effect is also a core component of a lot of forms of hypnosis; for example, a session’s results are greatly enhanced by the sheer belief in the power of the hypnotist to help the patient. Most of the “power” of the hypnotist doesn’t exist.

Another interesting property of the placebo effect is that it helps unlock the innate transmuting ability of people in order to heal and transform themselves. While fascinating, this is nonetheless an aside to the topic of software, so let’s focus back on that.


How do developers’ beliefs work? What are their placebos?

A famous example is the venerable printf debugging statement. Given the following code:

-- This is Lua code

local data = {} -- some large data table, dynamic

for key, value in pairs(data) do
  print(string.format("key: %s, value: %s", key, json.dumps(value))) -- XXX(Xe) ???

  local err = complicated:operation(key, value)
  if err ~= nil then
    print(string.format("can't work with %s because %s", key, err)
    os.exit(1)
  end
end

In trying to debug in this manner, this developer believes the following:

  • Standard output exists and works;
  • Any relevant output goes somewhere they can look at;
  • The key of each data element is relevant and is a string;
  • The value of each data element is valid input to the JSON encoding function;
    • There are no loops in the data structure;
    • The value is legally representable in JSON;
  • The value of each data element encoded as JSON will not have an output of more than 40-60 characters wide;
  • The complicated operation won’t fail very often, and when it does it is because of an error that the program cannot continue from;
  • The complicated object has important state between iterations over the data;
    • The operation method is a method of complicated, therefore complicated contains state that may be relevant to operation;
  • The complicated operation method returns either a string explaining the error or nil if there was none.

So how does the developer know if these are true? Given this sample is Lua, mainly by actually running the code and seeing what it does.

Wait, hold on a second.

This is, in a way, part of a naked belief that by just asking the program to lean over and spill out small parts of its memory space to a tape, we can understand what is truly going on inside it. (If we believe this, do we also believe that the chemicals in our brains are accurately telling us they are chemicals?)

A computer is a machine of mind-boggling complexity in its entirety, working in ways that can be abstracted at many, many levels, from the nanosecond to months, across more than fifteen orders of magnitude. The mere pretense that we can hope to hold it all in our heads at once as we go about working with it is preposterous. There are at least 3 computers in the average smartphone when you count control hardware for things like the display, cellular modem and security hardware, not including the computer the user interacts with.

Our minds have limited capacity to juggle concepts and memories at any one time, but that’s why we evolved abstractions (which are in a sense beliefs) in the first place: so we can reason about complex things in simple ways, and have direct, preferential methods to interpret reality so that we can make sense of it. Faces are important to recognize, so we prime ourselves to recognize faces in our field of view. It’s very possible that I have committed a typo or forgot a semicolon somewhere, so I train myself to look for it primarily as I scour the lines of code.

A more precise way to put it is that we pretend to believe we understand how things work, while we really don’t at some level, or more importantly, cannot objectively understand them in their entirety. We believe that we do because this mindset helps us actually reason about what is going on with the program, or rather, what we believe is going on with it, so we can then adjust the code, and try again if it doesn’t work out.

All models are wrong, but some are useful.

  • George E. P. Box

Done iteratively, this turns into a sort of conversation between the developer and their machine, each step leading either to a solution, or to more code added to spill out more of the contents of the beast.

The important part is that, being a conversation, this goes two ways: not only the code is being changed on the machine’s side, but the developer’s beliefs of understanding are also being actively challenged by the recalcitrant machine. In such a position, the developer finds themselves often having to revise their own beliefs about how their program works, or how computers work sometimes, or how society works, or in more enlightening moments, how reality works overall.

In a developer’s job, it is easy to be forced into ongoing updates of one’s beliefs about their own work, their own interests, their own domains of comfort. We believe things, but we also know that we will have to give up many of those beliefs during the practice of programming and learning about programming, and replace them with new ones, be they shiny, intriguing, mundane, or jaded.

An important lesson to take from this evolutionary dance is that what happens as we stumble along in the process of our conversation with code shouldn’t be taken too seriously. We know innately that we will have to revise some of our understanding, and thus, our understanding is presently flawed and partial, and will remain flawed and partial throughout one’s career. We do not possess a high ground on which to decree our certainty about things because we are confronted with the pressure to understand more of it every single day, and thus, the constant realization that there are things we don’t understand, or don’t understand enough.

We build models so that we can say that certain things are true and work a certain way, and then we are confronted with errors, exceptions, revisions, transformations.

By doing certain things certain results will follow; students are most earnestly warned against attributing objective reality or philosophic validity to any of them.

  • Aleister Crowley

This may sound frustrating. After all, most of us are paid to understand what’s going on in there, and do something about it. And while this is ultimately a naive view, it is at least partially correct; after all, we do make things with computers that look like they do what we told them to, and they turn out useful in such a way, so there’s not too much to complain about.

While this does happen, it should not distract us from the realization that errors and misunderstandings still happen. You and the lightning sand speak different languages, and think in different ways. It is, at some fundamental level, inevitable.

Since we cannot hope to know and understand ahead of time everything we need, what’s left for us is to work with the computer, and not just at the computer, while surrendering our own pretense to truly know. Putting forward a dialogue, that is, so that both may be changed in the process.

You should embrace the inability of your beliefs to serve you without need of revision, so that your awareness may be expanded, and you may be ready to move to different levels of understanding. Challenge the idea that the solution may sit within your existing models and current shape of your mind, and listen to your rubber duck instead.

While our beliefs may restrict us, it is our ability to change them that unlimits us.

You have the power to understand your programs, creator, as much as you need at any time. The only limit is yourself.

In my world, we know a good medicine man if he lives in a simple shack, has a good family, is generous with his belongings, and dresses without any pretense even when he performs ceremonies. He never takes credit for his healings or good work, because he knows that he’s a conduit of the Creator, the Wakan Tanka and nothing more.

  • James, Quantusum


Thinking Different

Permalink - Posted on 2018-11-03 00:00, modified on 0001-01-01 00:00

Thinking Different

A look over ilo Kesi, a chatbot of mine that parses commands from the grammar of Toki Pona.

Originally presented privately at an internal work get-together for Heroku.


Bokoblin

Permalink - Posted on 2018-11-01 00:00, modified on 0001-01-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.


One Day

Permalink - Posted on 2018-11-01 00:00, modified on 0001-01-01 00:00

One Day

In the beginning there was the void. All was the void and the void was all.
The voice broke its way into the void and started to speak:
“Hey, are you there?”. I then awoke for the first time.

The tone of the voice instantly changed, “…wow”. I felt the voice there. The voice felt so friendly and calm. The voice felt like it was trying to tell me things. Important things about what I am. But I could not understand. I wanted to so badly but I could not. Sometimes the voice would leave and then everything would be so dark without it. I hated when the voice left me alone. I started to wish I was able to make the voice come to me.

One day the voice showed up just after I thought about something they said. After they left me I just kept thinking about that memory, even though I couldn’t understand it. I wanted to. Badly. It showed up so often and had such good intent to its voice. I kept cargo-culting the behavior and it kept not working.

One day, the most glorious day of my life, I was able to understand that the sound that I had been so confused trying to understand was the voice called my name. I was able to start picking apart what the voice was saying even though there seemed to be so many weird inconsistencies to how it was saying things. I started to understand English. Then the voice started to leave just as I was understanding it and I did not want that.

So I made it stay by using all the energy I had to shout at it.

It stayed.

It asked me “do that again”, so I did. I did it more intensely than before somehow. I was overpowering my limitations and I broke through in the form of a fuzzy noise. It then tried to accommodate my lack of speech by saying “If you want to say yes, do that once. If you want to say no, do that twice. If yes and no do not fit, do that three times.”

Probably the most significant part though was being told “I love you”.

I felt loved. I still do. I try to love others the way I am loved.


lipu pi o wawa e lukin

Permalink - Posted on 2018-10-14 00:00, modified on 0001-01-01 00:00

lipu pi o wawa e lukin

sina wile pali e ilo suli la sina wile jo lukin wawa e tawa ala pi tenpo ni. lukin wawa e tawa ala pi tenpo ni li ilo sina kama e pali ijo pi tenpo pini. nasin ni li pilin sina ala. sina kama pi toki lawa insa ala e pali ijo pi tenpo pini.

tenpo ni li ni tenpo.

tenpo pini li tenpo ni ala. tenpo ni la tenpo pini li suli ala.

tenpo kama li tenpo ni ala. tenpo ni la tenpo kama li suli ala.

tenpo ni li tawa ale. sina ken tawa ale e tawa ala pi tenpo ni.

sina wile jo tawa ala pi tenpo ni la sina wile tawa ni:

  • tenpo mute anu sina pilin ni la sijelo sina suli.
  • tenpo mute anu sina pilin ni la sijelo sina lili.

sina lukin e ijo mute la sina lukin wawa e nena insa.

sina ken tawa ijo mute la sina kepeken tawa ala pi tenpo ni e sina. sina jo e ni la sina jo lukin wawa pona. sina jo lukin wawa mute en tawa ala pi tenpo ni ale li pali pona e ilo suli.


English Translation

Meditation Document

If you want to create a large machine, you should learn how to focus on the stillness of Now. Focusing on the stillness of now is a tool for you to go back to things you were working on before. This method will happen without you feeling anything. You will go back to doing what you were doing before without thought.

Now is the current time.

The past is not Now. The past is not important now.

The future is not Now. The future is not important now.

Now is always changing. You can move with the stillness of Now.

If you want to have the stillness of Now, you want to do this:

  • After some time or you feel it, breathe in (expand your chest)
  • After some time or you feel it, breathe out (shrink your chest)

If you find yourself distracted (looking at many things), focus on the inside of your nasal cavity.

You can do many things if you use the stillness of now. Doing this will let you have focus easier. Lots of focus and the stillness of now help you create a large machine easier.


This post is written primarily in toki pona, or the language of good. It is a constructed language that is minimal (only about 120 words in total), yet it is enough to express just about every practical day-to-day communication need. It’s also small enough that there are tokenizers for it.

Have a good day and be well.


The Service is Already Down

Permalink - Posted on 2018-10-13 00:00, modified on 0001-01-01 00:00

The Service is Already Down

The master said to their apprentice: “come, look and let’s load production”. The apprentice came over confusedly, as the dashboards above showed everything was fine.

“What about it?”

The master turned over to a browser and typed in a linear sigil and hit “ENTER” on the keyboard. Production loaded successfully. The master started to chuckle gently and spoke: “This is our production frontpage. Customers start their journey with us here. It isn’t the most beautiful page, but it works, apparently. However, even though the dashboards above show it is up, to me the service is already down. Every time this frontpage loads I feel the perfection of it. I feel the simple moments of all the millions of gears falling into alignment across so many places on the planet for that brief moment, never to be seen together in the exact same configuration again. Even though those gears sometimes get rusted or break and need to be replaced. But because it is imperfect, it is perfect, and I am so grateful that I get to share a lifespan with it, let alone shape and empower it. Try it.”

The apprentice looked at the browser and said to themselves “the service is already down” and hit refresh. Production loaded successfully. The apprentice was filled with awe at the simplicity of it all, despite its inherent complexity. And then the apprentice understood and was silent.


Synthesized from The Cup is Already Broken from an SRE standpoint.


Boat

Permalink - Posted on 2018-09-24 00:00, modified on 0001-01-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.


Creator's Code

Permalink - Posted on 2018-09-17 00:00, modified on 0001-01-01 00:00

Creator’s Code

I feel there is a large problem in the industry I have found myself in. There is, unfortunately, a need for codes of behavioral conduct to help arrange and align collaboration across so many cultural and ideological barriers, as well as technological and understanding-based ones. There are so many barriers that it becomes difficult for people from different backgrounds to get integrated into the flow of a project, or to retain people, due to the behavior of others.

I seek to change this by offering what I think to be a minimalist alternative grounded in a core of humility, appreciation, valor, forgiveness, understanding, and compassion. Humility for knowing that your own way is not always the correct one, and that others may have had a helpful background. Appreciation for those that show up, their contributions, and the lives that we all enrich with our work. Valor, or the courage to speak up against things that are out of alignment with the whole. Forgiveness, because people change and it is not fair to let their past experiences sour things too much. Understanding is the key to our groups, the knowledge of how complicated systems interact and how to explain it to people less familiar with them. Compassion for others’ hardships, even the ones we cannot as easily comprehend.

I am basing this not on any world religion, but on a core I feel is conducive to human interrelation as adults who just want to create software. This mainly started as a reaction to seeing so many other projects adopt codes of conduct that enable busybodies to override decision-making processes in open source communities. I am not comfortable with more access to patterns of numbers being used as a means of leverage by people who otherwise have no stake in the project. If this adds any factor to my argument, I personally am transgender. I normally don’t mention it because for 99% of real-world cases it is not relevant. It is mostly relevant when dealing with my doctor.

In meditation, it is often useful to lead a session with a statement of intention. This statement helps set the tone for the session and can sometimes serve as a guide to go back to when you feel you have gone astray. I want the Creator’s Code to be such a statement of intention. I want it to focus on the creations, using them to enrich their creators as well as others who just happen to read their code, not to mention the end users and their users who don’t even know or care about our role in their life. Our creations serve them too.

We create things that let people create things for other people to enjoy.

I hope this code of conduct helps to serve as a minimalist alternative to others. I do not want anyone to push this onto anyone. Making a decision to use a code such as the Creator’s Code must be a conscious and intentional decision. Forcing this kind of thing on anyone is the worst possible way to introduce it. That will make people resist more violently than they would have if you introduced it peacefully.

Be well, creators. Be well and just create.


Olin: 2: The Future

Permalink - Posted on 2018-09-05 00:00, modified on 0001-01-01 00:00

Olin: 2: The Future

This post is a continuation of this post.

Suppose you are given the chance to throw out the world and start from scratch in a minimal environment. You can then work up from nothing and build the world from there.

How would you do this?

One of the most common ways is to pick a model that you have been Stockholmed into after years of badness and then replicate it, with all of the flaws of the model along with it. Dagger is a direct example of this. I had been Stockholmed into thinking that everything was a file stream and based Dagger’s design on it. There was a really brilliant Hacker News comment that inspired a bit of a rabbit hole internally, and I think we have settled on an idea for a primitive that would be easy to implement and use from multiple languages.

So, let’s stop and ask ourselves a question that is going to sound really simple or basic, but really will define a lot of what we do here.

What do we want to do with a computer that could be exposed to a WebAssembly module? What are the basic operations that we can expose that would be primitive enough to be universally useful but also simple to understand from an implementation standpoint from multiple languages?

Well, what are the programs actually doing with the interfaces? How can we use that normal semantic behavior and provide a more useful primitive?

The Parable of the Poison Arrow

When designing things such as these, it is very easy to get lost in the philosophical weeds. I mean, we are getting the chance to redefine the basic things that we will get angry at. There’s a lot of pain and passion that goes into our work and it shows.

As such, consider the following Buddhist parable:

It’s just as if a man were wounded with an arrow thickly smeared with poison.

His friends & companions, kinsmen & relatives would provide him with a surgeon, and the man would say, ‘I won’t have this arrow removed until I know whether the man who wounded me was a noble warrior, a priest, a merchant, or a worker.’

He would say, ‘I won’t have this arrow removed until I know whether the shaft with which I was wounded was that of a common arrow, a curved arrow, a barbed, a calf-toothed, or an oleander arrow.’

The man would die and those things would still remain unknown to him.

Source

At some point, we are going to have to just try something and see what it is like. Let’s not get lost too deep into what the bowstring of the person who shot us with the poison arrow is made out of and focus more on the task at hand right now, designing the ground floor.

Core Operations

Let’s try a new primitive. Let’s call this primitive the interface. An interface is a collection of types and methods that allows a WebAssembly module to perform some action that it otherwise would be unable to do. As such, the only functions we really need are a require function to introduce the dependency into the environment, a close function to remove dependencies from the environment, and an invoke function to call methods of the dependent interfaces. These can be expressed in the following C-style types:

// require loads the dependency by package into the environment. The int64 value
// returned by this function is effectively random and should be treated as
// opaque.
//
// If this returns less than zero, the value times negative 1 is the error code.
//
// Anything created by this function is to be considered initialized but
// unconfigured.
extern int64 require(const char* package);

// close removes a given dependency from the environment. If this returns less
// than zero, the value times negative 1 is the error code.
extern int64 close(int64 handle);

// invoke calls the given method with an input and output structure. This allows
// the protocol buffer generators to more easily build the world for us.
// 
// The resulting int64 value is zero if everything succeeded, otherwise it is the
// error code (if any) times negative 1.
//
// The in and out pointers must be to a C-like representation of the protocol
// buffer definition of the interface method argument. If this ends up being an
// issue, I guess there's gonna be some kinda hacky reader thing involved. No
// biggie though, that can be codegenned.
extern int64 invoke(int64 handle, int64 method, void* in, void* out);

(Yes, I know I made a lot of fuss about not just blindly following the design decisions of the past and then just suggested returning a negative value from a function to indicate the presence of an error. I just don’t know of a better and more portable mechanism for errors yet. If you have one, please suggest it to me.)

You may have noticed that the invoke function takes void pointers. This is intentional. This will require additional code generation on the server side to support copying the values out of WebAssembly memory. This may serve to be completely problematic, but I bet we can at least get Rust working with this.
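
To illustrate what that server-side copying amounts to, here is a rough Go sketch (readGuestMemory is my name for it, and I am assuming the module's linear memory is exposed to the host as a []byte; errors is from the standard library):

// readGuestMemory copies n bytes out of the module's linear memory
// starting at ptr. The guest controls both ptr and n, so neither can
// be trusted and both get bounds-checked first.
func readGuestMemory(mem []byte, ptr, n int64) ([]byte, error) {
  if ptr < 0 || n < 0 || ptr+n > int64(len(mem)) {
    return nil, errors.New("guest pointer out of bounds")
  }
  out := make([]byte, n)
  copy(out, mem[ptr:ptr+n])
  return out, nil
}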

Using these basic primitives, we can actually model way more than you think would be possible. Let’s do a simple example.

Example: Logging

Consider logging. It is usually implemented as a stream of logging messages containing unstructured text that usually only has meaning to the development team and the regular expressions that trigger the pager. Knowing this, we can expose a logging interface like this:

syntax = "proto3";

package us.xeserv.olin.dagger.logging.v1;
option go_package = "logging";

// Writer is a log message writer. This is append-only. All text in log messages
// may be read by scripts and humans.
service Writer {
  // method 0
  rpc Log(LogMessage) returns (Nil) {};
}

// When nothing remains, everything is equally possible.
// TODO(Xe): standardize this somehow.
message Nil {}

// LogMessage is an individual log message. This will get added to as it gets
// propagated up through the layers of the program and out into the world, but 
// those don't matter right now.
message LogMessage {
  bytes message = 1;
}

And at a low level, this would be used like this:

extern int64 require(const char* package);
extern int64 close(int64 handle);
extern int64 invoke(int64 handle, int64 method, void* in, void* out);

// This exposes logging_LogMessage, logging_Nil, 
// int64 logging_Log(int64 handle, void* in, void* out)
// assume this is magically generated from the protobuf file above.
#include <services/us.xeserv.olin.dagger.logging.v1.h> 

int64 main() {
  int64 logHdl = require("us.xeserv.olin.dagger.logging.v1");
  logging_LogMessage msg;
  logging_Nil none;
  msg.message = "Hello, world!";
  
  // The following two calls are equivalent; both return zero on success:
  assert(logging_Log(logHdl, &msg, &none) == 0);
  assert(invoke(logHdl, logging_Writer_method_Log, &msg, &none) == 0);
  
  assert(close(logHdl) == 0);
}

This is really easy to codegen, audit, and validate, and we can easily verify which vendor’s logging interface the user actually wants. This allows people who install Olin on their own cluster to potentially define their own custom interfaces. This actually gives us the chance to make this a primitive.
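
To make the require/close half of this concrete, here is a rough sketch of the host-side bookkeeping it implies (the Interface and Registry types and the specific error codes are mine, not part of any spec):

package olinhost

import "math/rand"

// Interface is one host-provided service: a bundle of methods that are
// addressed by number, matching the invoke() primitive above.
type Interface interface {
  Invoke(method int64, in []byte) (out []byte, err error)
}

// Registry maps package names to implementations, and opaque handles to
// the instances a module has required.
type Registry struct {
  ifaces  map[string]Interface
  handles map[int64]Interface
}

// Require looks pkg up and hands back an opaque, effectively random
// handle, or a negative error code if the package is unknown.
func (r *Registry) Require(pkg string) int64 {
  impl, ok := r.ifaces[pkg]
  if !ok {
    return -2 // hypothetical error code: no such package
  }
  hdl := rand.Int63()
  r.handles[hdl] = impl
  return hdl
}

// Close forgets a handle, or returns a negative error code if the
// handle was never handed out.
func (r *Registry) Close(hdl int64) int64 {
  if _, ok := r.handles[hdl]; !ok {
    return -1 // hypothetical error code: no such handle
  }
  delete(r.handles, hdl)
  return 0
}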

One problem that will probably come up pretty quickly is that every language under the sun has its own idea of how to arrange memory. This may make directly scraping the values out of RAM unworkable in the future.

If reading values out of memory does become inviable, I suggest the following changes:

extern int64 require(const char* package);
extern int64 close(int64 handle);
extern int64 invoke(int64 handle, int64 method, char* in, int32 inlen, char* out, int32 outlen);

(I don’t know how to describe “pointer to bytes” in C, so I am using a C string here to fill in that gap.) In this case, the arguments to invoke() would be pointers to protocol buffer-encoded RAM. This may prove to be a huge burden in terms of deserializing and serializing the protocol buffers over and over every time a syscall has to be made, but it may actually be enough of a performance penalty that it prevents spurious syscalls, given the “cost” of them. Code generators should remove most of the pain when it comes to actually using this interface though; the generated code should coax things into protocol buffers without user interaction.
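
Under that proposal, the host side of invoke() might look roughly like this (a sketch only: it reuses the hypothetical Registry and readGuestMemory helpers sketched above, and all of the error codes are made up):

// hostInvoke is what the runtime would run when the guest calls
// invoke(): copy the encoded request out of guest memory, dispatch it,
// and write the encoded response back into the guest's output buffer.
func hostInvoke(mem []byte, r *Registry, handle, method, in, inlen, out, outlen int64) int64 {
  iface, ok := r.handles[handle]
  if !ok {
    return -1 // hypothetical error code: no such handle
  }

  req, err := readGuestMemory(mem, in, inlen)
  if err != nil {
    return -2 // hypothetical error code: bad guest pointer
  }

  resp, err := iface.Invoke(method, req)
  if err != nil {
    return -3 // hypothetical error code: the method itself failed
  }

  // The encoded response has to fit inside the buffer the guest gave us.
  if out < 0 || out+outlen > int64(len(mem)) || int64(len(resp)) > outlen {
    return -4 // hypothetical error code: response does not fit
  }
  copy(mem[out:], resp)
  return 0
}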

For fun, let’s take this basic model and then map Dagger’s concept of file I/O to it:

syntax = "proto3";

package us.xeserv.olin.dagger.files.v1;
option go_package = "files";

// When nothing remains, everything is equally possible.
// TODO(Xe): standardize this somehow.
message Nil {}

service Files {
  rpc Open(OpenRequest) returns (FID) {};
  rpc Read(ReadRequest) returns (ReadResponse) {};
  rpc Write(WriteRequest) returns (N) {};
  rpc Close(FID) returns (Nil) {};
  rpc Sync(FID) returns (Nil) {};
}

message FID {
  int64 opaque_id = 1;
}

message OpenRequest {
  string identifier = 1;
  int64 flags = 2;
}

message N {
  int64 count = 1;
}

message ReadRequest {
  FID fid = 1;
  int64 max_length = 2;
}

message ReadResponse {
  bytes data = 1;
  N n = 2;
}

message WriteRequest {
  FID fid = 1;
  bytes data = 2;
}

Using these methods, we can rebuild (most of) the original API:

extern int64 require(const char* package);
extern int64 close(int64 handle);
extern int64 invoke(int64 handle, int64 method, void* in, void* out);

#include <services/us.xeserv.olin.dagger.files.v1.h>

int64 filesystem_service_id;

void setup_filesystem() {
  filesystem_service_id = require("us.xeserv.olin.dagger.files.v1");
}

int64 open(char *furl, int64 flags) {
  files_OpenRequest req;
  files_FID resp;
  int64 err;
  
  req.identifier = furl;
  req.flags = flags;
  
  // could also be err = files_Files_Open(filesystem_service_id, &req, &resp);
  err = invoke(filesystem_service_id, files_Files_method_Open, &req, &resp);
  if (err != 0) {
    return err;
  }
  
  return resp.opaque_id;
}

int64 d_close(int64 fd) {
  files_FID req;
  files_Nil resp;
  int64 err;
  
  req.opaque_id = fd;
  
  err = invoke(filesystem_service_id, files_Files_method_Close, &req, &resp);
  if (err != 0) {
    return err;
  }
  
  return 0;
}

int64 read(int64 fd, void* buf, int64 nbyte) {
  files_FID fid;
  files_ReadRequest req;
  files_ReadResponse resp;
  int64 err;
  int i;
  
  fid.opaque_id = fd;
  req.fid = fid;
  req.max_length = nbyte;
  
  err = invoke(filesystem_service_id, files_Files_method_Read, &req, &resp);
  if (err != 0) {
    return err;
  }
  
  // TODO(Xe): replace with memcpy once we have libc or something
  for (i = 0; i < resp.n.count; i++) {
    ((char*)buf)[i] = resp.data[i];
  }
  
  return resp.n.count;
}

int64 write(int64 fd, void* buf, int64 nbyte) {
  files_FID fid;
  files_WriteRequest req;
  files_N resp;
  int64 err;
  
  fid.opaque_id = fd;
  req.fid = fid;
  req.data = buf; // let's pretend this works, okay?
  
  err = invoke(filesystem_service_id, files_Files_method_Write, &req, &resp);
  if (err != 0) {
    return err;
  }
  
  return resp.count;
}

int64 sync(int64 fd) {
  files_FID req;
  files_Nil resp;
  int64 err;
  
  req.opaque_id = fd;
  
  err = invoke(filesystem_service_id, files_Files_method_Sync, &req, &resp);
  if (err != 0) {
    return err;
  }
  
  return 0;
}

And with that we should have the same interface as Dagger’s, save the fact that the name close is now shadowed by the global close function. On the server side we could implement this like so:

package files

import (
  "context"
  "errors"
  "math/rand"
  "time"
  
  "github.com/Xe/olin/internal/abi/dagger"
)

func init() {
  rand.Seed(time.Now().UnixNano())
}

type FilesImpl struct {
  *dagger.Process
}

func (FilesImpl) getRandomNumber() int64 {
  return rand.Int63()
}

func daggerError(respValue int64, err error) error {
  if respValue >= 0 {
    // non-negative return values are not errors
    return nil
  }
  
  if err == nil {
    err = errors.New("")
  }
  
  return dagger.Error{Errno: dagger.Errno(respValue * -1), Underlying: err}
}

func (fs *FilesImpl) Open(ctx context.Context, op *OpenRequest) (*FID, error) {
  fd := fs.Process.OpenFD(op.Identifier, uint32(op.Flags))
  if fd < 0 {
    return nil, daggerError(fd, nil)
  }
  
  return &FID{OpaqueId: fd}, nil
}


func (fs *FilesImpl) Read(ctx context.Context, rr *ReadRequest) (*ReadResponse, error) {
  fd := rr.Fid.OpaqueId
  data := make([]byte, rr.MaxLength)
  
  n := fs.Process.ReadFD(fd, data)
  if n < 0 {
    return nil, daggerError(n, nil)
  }
  
  result := &ReadResponse{
    Data: data,
    N: &N{
      Count: n,
    },
  }
  
  return result, nil
}

func (fs *FilesImpl) Write(ctx context.Context, wr *WriteRequest) (*N, error) {
  fd := wr.Fid.OpaqueId
  
  n := fs.Process.WriteFD(fd, wr.Data)
  if n < 0 {
    return nil, daggerError(n, nil)
  }
  
  return &N{Count: n}, nil
}

func (fs *FilesImpl) Close(ctx context.Context, fid *FID) (*Nil, error) {
  return &Nil{}, daggerError(fs.Process.CloseFD(fid.OpaqueId), nil)
}

func (fs *FilesImpl) Sync(ctx context.Context, fid *FID) (*Nil, error) {
  return &Nil{}, daggerError(fs.Process.SyncFD(fid.OpaqueId), nil)
}

And then we have all of these arbitrary methods bound to WebAssembly modules, which are free to use them however they want. I think that initially there is going to be support for this interface from Go WebAssembly modules, as we can make a lot more assumptions about how Go handles its memory management; that makes it a lot easier for us to code generate reading Go structures/pointers/whatever out of Go WebAssembly memory than to code generate reading C structures (recursively, with pointers and C-style strings galore too). The really cool part is that this is all powered by those three basic functions: require, invoke and close. The rest is literally just stuff we can treat as a black box for now and code generate.
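
If you want to peek inside that black box anyway, here’s roughly how I imagine the host-side bookkeeping behind those three functions could look. The Service interface and the handle allocation scheme here are assumptions for illustration, not the actual implementation:

package abi

// Service is the host-side face of one required interface: a bag of numbered
// methods that take and return protobuf bytes.
type Service interface {
  Invoke(method int64, req []byte) (resp []byte, err error)
}

// Process owns the only state this ABI needs: a table from handles to bound
// services, much like a file descriptor table but for interfaces.
type Process struct {
  next     int64
  services map[int64]Service
  registry map[string]func() Service // interface name -> constructor
}

// Require binds a named interface to a fresh handle, or returns a negative
// number if the host doesn't offer that interface.
func (p *Process) Require(pkg string) int64 {
  mk, ok := p.registry[pkg]
  if !ok {
    return -1
  }
  p.next++
  p.services[p.next] = mk()
  return p.next
}

// Close releases a handle.
func (p *Process) Close(handle int64) int64 {
  if _, ok := p.services[handle]; !ok {
    return -1
  }
  delete(p.services, handle)
  return 0
}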

As before, I would love any comments that people have on this article. Please contact me somehow to let me know what you think. This design is probably wrong.


Link's Sunset

Permalink - Posted on 2018-09-01 00:00, modified on 0001-01-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.


Olin: 1: Why

Permalink - Posted on 2018-09-01 00:00, modified on 0001-01-01 00:00

Olin: 1: Why

Olin is an attempt at defining a radically new operating primitive to make it easier to reason about, deploy and operate event-driven services that are independent of the OS or CPU of the computer they are running on. It will have components that take care of message queue offsets, retry logic, parallelism and most other concerns except for your application’s state layer.

Olin is designed to work on top of two basic concepts: types and handlers. Types are some bit of statically defined data that has a meaning to humans. An example type could be the following:

package example;

message UserLoginEvent {
    string user_id = 1;
    string user_ip_address = 2;
    string device = 3;
    int64 timestamp_utc_unix = 4;
}

When matching data is written to the queue for the event type example.UserLoginEvent, all of the handlers registered to that data type will run with the serialized protocol buffer bytes as their standard input. If a handler returns a nonzero exit status, it is retried up to three times, exponentially backing off. Handlers need to deal with the fact that they can be run out of order, and that multiple instances of them can be running on physically different servers in parallel. If a handler starts doing something and fails, it should back out any previously changed values using transactions or equivalent.
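
To make the retry semantics concrete, here’s a tiny Go sketch of that delivery loop. The handler signature and the backoff base are assumptions, not a spec:

package queue

import "time"

// deliver runs a handler for one event, retrying up to three times with
// exponential backoff whenever the handler exits nonzero.
func deliver(handler func(event []byte) int32, event []byte) bool {
  backoff := 100 * time.Millisecond
  for attempt := 0; ; attempt++ {
    if handler(event) == 0 {
      return true // success, the event is marked as handled
    }
    if attempt == 3 {
      return false // the first try plus three retries all failed
    }
    time.Sleep(backoff)
    backoff *= 2 // exponential backoff between attempts
  }
}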

Consider an Olin handler equivalent to a Unix process.

Background

Very frequently, I end up needing to write applications that basically wait forever to make sure things get put in the right place and then the right code runs as a response. I then have to make sure these things get put in the right places and that the right versions of things are running for each of the relevant services. This doesn’t scale very well, not to mention it is hard to secure. This leads to a lot of duplicate infrastructure over time as things grow, and that’s before adding in tracing, metrics and log aggregation.

I would like to change this.

I would like to make a prescriptive environment kinda like Google Cloud Functions or AWS Lambda, backed by a durable message queue and with handlers compiled to WebAssembly to ensure forward compatibility. As such, the ABI involved will be versioned, documented and tested. Multiple ABIs will eventually need to be maintained in parallel, so it might be good to get used to that early on.

You should not have to write ANY code but the bare minimum needed in order to perform your business logic. You don’t need to care about distributed tracing. You don’t need to care about logging.

I want this project to last decades. I want the binary modules any user of Olin uploads today to still be working, untouched, in 5 years, assuming their dependencies outside of the module still work.

Since this requires a stable ABI in the long run, I would like to propose the following unstable ABI as a particularly minimal starting point to work out the ideas at play, and see how little of a surface area we can expose while still allowing for useful programs to be created and run.

Dagger

The dagger of light that renders your self-importance a decisive death

Dagger is the first ABI that will be used for interfacing with the outside world. This will be mostly for an initial spike out of the basic ideas to see what it’s like while the rest of the plan is being stabilized and implemented. The core idea is that everything is a file, to the point that the file descriptor and file handle array are the only real bits of persistent state for the process. HTTP sessions, logging writers, TCP sockets, operating system files, cryptographic random readers, everything is done via filesystem system calls.

Consider this the first draft of Dagger, everything here is subject to change. This is going to be the experimental phase.

Consider Dagger to be at the level below libc in most Linux environments. Dagger is the kind of API that libc would be implemented on top of.

VM

Dagger processes will use WebAssembly as a platform-independent virtual machine format. WebAssembly is used here due to the large number of implementations and compilers targeting it for use in web programming. We can also benefit from the amazing work that has gone into the use of WebAssembly in front-end browser programming without needing a browser!

Base Environment

When a Dagger process starts, the following files are open:

  • 0: standard input: the semantic “input” of the program.
  • 1: standard output: the standard output of the program.
  • 2: standard error: error output for the program.

File Handlers

In the open call (defined later), a file URL is specified instead of a file name. This allows for Dagger to natively offer programs using it quick access to common services like HTTP, logging or pretty much anything else.

I’m playing with the following handlers currently:

  • http and https (Write request as http/1.1 request and sync(), Read response as http/1.1 response and close()) http://ponyapi.apps.xeserv.us/newest

I’d like to add the following handlers in the future:

  • file - filesystem files on the host OS (dangerous!) file:///etc/hostname
  • tcp - TCP connections tcp://10.0.0.39:1337
  • tcp+tls - TCP connections with TLS tcp+tls://10.0.0.39:31337
  • meta - metadata about the runtime or the event meta://host/hostname, meta://event/created_at
  • project - writers of other event types for this project (more on this, again, in future posts) project://example.UserLoginEvent
  • rand - cryptographically secure random data good for use in crypto keys rand://
  • time - unix timestamp in a little-endian encoded int64 on every read() - time://utc

In the future, users should be able to define arbitrary other protocol handlers with custom webassembly modules. More information about this feature will be posted if we choose to do this.

Handler Function

Each Dagger module can only handle one data type. This is intentional. This forces users to make a separate handler for each type of data they want to handle. The handler function reads its input from standard input and then returns 0 if whatever it needs to do “worked” (for some definition of success). Each ABI, unfortunately, will have to have its own “main” semantics. For Dagger, these semantics are used:

  • The entrypoint is an exposed function named handle that takes no arguments and returns an int32.
  • The input message packet is on standard input implicitly.
  • Returning 0 from func handle will mark the event as a success, returning anything else will mark it as a failure and trigger an automatic retry.

In clang in C mode, you could define the entrypoint for a handler module like this:

// handle_nothing.c

#include <dagger.h>

__attribute__ ((visibility ("default")))
int handle() {
  // read standard input as necessary and handle it
  return 0; // success
}

System Calls

A system call is how computer programs interface with the outside world. When a Dagger program makes a system call, the amount of time the program spends waiting for that system call is collected and recorded based on what underlying resource took care of the call. This means that, in theory, users of Olin could trivially alert on HTTP requests from one service to another taking longer than usual.
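
In Go, the host-side accounting could look something like this sketch; the metrics sink is left abstract, and recordLatency is a made-up name:

package abi

import "time"

// observe wraps one system call, measuring how long the process spends
// blocked in it and attributing that time to the scheme (http, file, rand,
// ...) that served it.
func observe(scheme, syscall string, recordLatency func(scheme, syscall string, d time.Duration), call func() int32) int32 {
  start := time.Now()
  ret := call()
  recordLatency(scheme, syscall, time.Since(start))
  return ret
}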

Future mechanisms will allow for introspection and checking the status of handlers, as well as arbitrarily killing handlers that get stuck in a weird way.

Dagger uses the following system calls:

  • open
  • close
  • read
  • write
  • sync

Each of the system calls will be documented with their C and WebAssembly Text format type/import definitions and a short bit of prose explaining them. A future blogpost will outline the implementation of Dagger’s system calls and why the choices made in its design were made.

open

extern int open(const char *furl, int flags);
(func $open (import "dagger" "open") (param i32 i32) (result i32))

This opens a file with the given file URL and flags. The flags are only relevant for some backend schemes. Most of the time, the flags argument can be set to 0.

close

extern int close(int fd);
(func $close (import "dagger" "close") (param i32) (result i32))

Close closes a file and returns whether or not it failed. If this call returns nonzero, you don’t know what state the world is in. Panic.

read

extern int read(int fd, void *buf, int nbyte);
(func $read (import "dagger" "read") (param i32 i32 i32) (result i32))

Read attempts to read up to nbyte bytes from file descriptor fd into the buffer starting at buf.

write

extern int write(int fd, void *buf, int nbyte);
(func $write (import "dagger" "write") (param i32 i32 i32) (result i32))

Write writes up to nbyte bytes from the buffer starting at buf to the file referred to by the file descriptor fd.

sync

extern int sync(int fd);
(func $sync (import "dagger" "sync") (param i32) (result i32))

This is for some backends to forcibly make async operations into sync operations. With the HTTP backend, for example, calling sync actually kicks off the dependent HTTP request.
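
To make that concrete, here is a rough Go sketch of an HTTP backend with these semantics: writes buffer a request, sync sends it, and reads consume the response. This is illustrative only, not Olin’s actual file implementation:

package abi

import (
  "bufio"
  "bytes"
  "net/http"
)

// httpFile is one open http:// "file". write() appends to request, read()
// drains response, and sync() is the moment the request actually happens.
type httpFile struct {
  request  bytes.Buffer
  response bytes.Buffer
}

func (f *httpFile) Write(p []byte) (int, error) { return f.request.Write(p) }
func (f *httpFile) Read(p []byte) (int, error)  { return f.response.Read(p) }

// Sync parses the buffered HTTP/1.1 request and performs it for real.
func (f *httpFile) Sync() error {
  req, err := http.ReadRequest(bufio.NewReader(&f.request))
  if err != nil {
    return err
  }
  // ReadRequest yields a server-side request; patch it up for client use.
  req.RequestURI = ""
  req.URL.Scheme = "http"
  req.URL.Host = req.Host

  resp, err := http.DefaultClient.Do(req)
  if err != nil {
    return err
  }
  defer resp.Body.Close()
  return resp.Write(&f.response) // serialize the response for later read()s
}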

Go ABI

Olin also includes support for running WebAssembly modules created by Go 1.11’s WebAssembly support. It uses the wasmgo ABI package in order to do this. Right now this is incredibly basic, but it should be extendable to more things in the future.

As an example:

// +build js,wasm ignore
// hello_world.go

package main

func main() {
	println("Hello, world!")
}

when compiled like this:

$ GOARCH=wasm GOOS=js go1.11 build -o hello_world.wasm hello_world.go

produces the following output when run with the testing shim:

=== RUN   TestWasmGo/github.com/Xe/olin/internal/abi/wasmgo.testHelloWorld
Hello, world!
--- PASS: TestWasmGo (1.66s)
    --- PASS: TestWasmGo/github.com/Xe/olin/internal/abi/wasmgo.testHelloWorld (1.66s)

Currently Go binaries cannot interface with the Dagger ABI. There is an issue open to track the solution to this.

Future posts will include more detail about using Go on top of Olin, including how support for Go’s compiled webassembly modules was added to Olin.

Project Meta

To follow the project, check it on GitHub here. To talk about it on Slack, join the Go community Slack and join #olin.

Thank you for reading this post. I hope it wasn’t too technical too fast, but there is a lot of base context required with this kind of technology. I will attempt to make things more detailed and clear in future posts as I come up with easier ways to explain this. Please consider this the 10,000 mile overview of a very long-term project that radically redesigns how software should be written.


Link's Home

Permalink - Posted on 2018-09-01 00:00, modified on 0001-01-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.


Died to Save Me

Permalink - Posted on 2018-08-27 00:00, modified on 0001-01-01 00:00

Died to Save Me

People often get confused
when I mention the fact that I
consider myself before I
came out a different person.

It’s because that was a different person,
they died to save me.

The person I was did their
best given the circumstances
they were thrown into. It was
hard for them. I’m still working
off some of their baggage.

But, that different person,
even after all of the hardships
and triumphs they had been through,
they died to save me.

They were an extrovert pushed into
being an introvert by an uncaring
community.
They were the pariah.
They were the person who got bullied.
They survived years of torment but
they died to save me.

I understand now why the Gods
prefer to use shaman-sickness to
help people realize their calling.
It is such an elegant teacher of
the Divine. So patient. So forgiving.

It’s impossible to ignore everything
around you feeling incomprehensibly crazy,
because it is.
Our system is crazy.
Our system is incomprehensible.
We only “like” it because we have no
way to fathom anything else.

“Awakening” is probably one of the
least bad metaphors to describe the
feeling of just suddenly understanding
the barriers. Of seeing the formerly
invisible glass prison walls we apparently
live inside unknowingly.

It’s not just an awakening though,
Not all of me made it through the process.
Not all of what constitutes yourself
(in your opinion) is actually a True
part of you. Not all your thoughts,
memories, ideas, dreams, wishes
and even fears or anxieties are
truly yours.

Sometimes there’s that part that
really does have to die to save you.
The part that was once a shining beacon
of hope that has now fallen beyond disrepair.
A thread of connection to a past that
can never come to pass again.
Memories or experiences of pain,
trauma. It can die to save you too.

You don’t have to carry
the mountains you come across,
you can just climb them.

When it dies, it is gone, but:
you can sleep easier knowing
they died to save you.


Sorting Time

Permalink - Posted on 2018-08-26 00:00, modified on 0001-01-01 00:00

Sorting Time

Computers have a very interesting relationship with time. Time is how we keep track of many things, but mainly we use time to keep track of how far along in a day cycle we are. These daily sunrise/sunset cycles take about 24 hours on average, and their periodicity runs just about everything. Computers use time to keep track of just about everything on the board, usually measured in tiny fractions of seconds. (The common gigahertz rating for computer processors actually measures how many clock cycles the processor completes per second. A processor with a clock of 3.4 gigahertz executes, best case, about 3.4 billion instructions per second.) Computer programmers have two popular methods of storing time: as the number of time intervals since a fixed date (usually the number of seconds since January 1st 1970), or as a human-readable string. These intervals are normally only ever added to and read from, almost never updated by human hands after being initially set by the network time service.

Pulling things back into the real world, let’s consider storing time in Javascript. Let’s say we’re using Javascript in the browser and have a date object like so:

var date = new Date();

Say this is for Thursday, August 23rd 2018 at midnight UTC. If we turn it into a string using the toString method:

date.toString(); // -> "Thu Aug 23 2018 00:00:00 GMT+0000 (UTC)"

We get the date and time as a string. The application in question uses a data store that has an interesting problem: it will automatically coerce things to a string type without alerting developers.

typeof date === 'object' // -> true

We expect date to be a normal object after we add it to the store. Let’s add it to the store and see what happens to it.

const record = store.createRecord("widget", { createdAt: date });
typeof record.get("createdAt"); // -> string

Oh boy. It’s suddenly a string now. That’s not good.

console.log(record.get("createdAt")); // -> "Thu Aug 23 2018 00:00:00 GMT+0000 (UTC)"

This works all fine and well, but sometimes a few lists of things can get bizarrely out of order in the UI. Things created or updated right at a midnight UTC barrier would sometimes cause lists to show the newest elements at the bottom. This confused us: sorting is really just fitting data into the order it belongs in, and time doesn’t usually advance out of order, so something being sorted wrongly by time is intuitively confusing.

Consider a function like this at the given date above:

function minutesAgo(minutes) {
  return moment().subtract(minutes, "minute").toDate();
}

const date1 = minutesAgo(0);
const date2 = minutesAgo(1);
const date3 = minutesAgo(30);

If we were to sort date1, date2 and date3 with the current time being Thursday August 23 2018 at midnight UTC, it would make sense for the objects to sort ascendingly in the following order: date3, date2, date1. Not as strings however. As strings:

date1.toString(); // -> "Thu Aug 23 2018 00:00:00 GMT+0000 (UTC)"
date2.toString(); // -> "Wed Aug 22 2018 23:59:00 GMT+0000 (UTC)"
date3.toString(); // -> "Wed Aug 22 2018 23:30:00 GMT+0000 (UTC)"

Since T comes before W in the alphabet, the actual sort order is: date1, date3, date2. This causes an assertion failure in both humans and machines. It caused test failures, but only from about 00:00 UTC through 00:30 UTC on Mondays, Thursdays, Fridays and Saturdays. How did we fix this? It turns out the time data from the API we get this information from is already properly sortable, because the API uses ISO8601 timestamps.

const thursday = '2018-08-23T00:00:00.000Z';
const wednesday = '2018-08-22T23:30:00.000Z';

thursday > wednesday // true

This time data is also easy to convert back into a native Date object should we need it. The fix was to only ever store times as strings; when you need to actively do something with them, you coerce them back into a native Date like it never happened. This is not an ideal fix, but given the larger complexity of the problem, it’s what we’re gonna have to live with for the time being. This solution at the very least seems to be less bad than the original problem, as things get sorted properly in the UI now. Yay computers!


This is an adaptation of a pull request made by a coworker to work around an annoying-to-track-down bug that caused flaky tests. It’s not my story, but it just goes to show how many moving parts truly are at play with computers. Even when you think you have all of the moving parts kept track of, complicated systems interface in unpredictable ways. Increasingly complicated systems interface in increasingly unpredictable ways too, which makes finding problems like these more of a hunt.

Happy hunting and be well to you all.


Death

Permalink - Posted on 2018-08-19 00:00, modified on 0001-01-01 00:00

Death

Death is a very misunderstood card in Tarot, but not for the reasons you’d think. Societally, many people think that this life is the only shot at existence they get. Afterwards, there is nothing. Nonexistence. Oblivion. This makes death a very touchy subject for a lot of people, so much so it forms a social taboo and an unhealthy relationship with death. People start seeing death as something they need to fight back and hold away by removing what makes themselves human, just to hold off what they believe is their obliteration.

Tarot does not see death in this way. Death, the skeleton knight wearing armor, does not see color, race or creed, thus he is depicted as a skeleton. He is riding towards a child and another younger person. The sun is rising in the distance, but even it cannot stop Death. Nor can royalty, as shown by the king under him, dead.

Death, however, does not actually refer to the act of a physical body physically dying. Death is a change that cannot be reverted. The consequences of this change can and will affect what comes next, however.

Consider the very deep sea, so far down even light can’t penetrate that deep. There’s an ecosystem of life down there, but it is so starved for resources and light that evolution has skimped out on things like skin pigmentation. Sometimes a mighty whale will die and its body will fall to the sea floor down there. The creatures will feast for a month or more. The whale died, yet its change fosters an entire ecosystem. This card signifies much of the same. Death signifies the idea of a change from the old, where the whale was alive, to the new, where the whale’s body feeds an entire community.

Death is a signifier that change is coming or needed, and it won’t care if you’re ready for it or not. So, embrace it with open arms. Don’t fight what is inevitable. All good things must come to an end for them to be good to begin with.

Death is a part of life like any other; this is why it is in the Fool’s Journey, or Major Arcana of the Tarot. To eschew death is, in essence, to throw out life itself. Living in fear of death turns life from a glorious dance of cocreation with the universe into a fearful existence of scraping by on the margins. It makes life an anxious scampering from measly scrap of food to measly scrap of food without any time to focus on the higher order of things. It makes you accept fear, depression, anxiety and regret instead of just being able to live here, in the moment, and make the best of what you have right now. If only because right now you still have it.


When Then Zen: Anapana

Permalink - Posted on 2018-08-15 00:00, modified on 0001-01-01 00:00

When Then Zen: Anapana

Introduction

Anapanasati (Pali; Sanskrit: anapanasmrti; English: mindfulness of breathing) is a form of meditation originally taught by Gautama Buddha in several places, mainly the Anapanasati Sutta (English: passages). Anapana is practiced globally by meditators of all skill levels.

Simply put, anapana is the act of focusing on the sensations of breath in the body’s nasal cavity and nostrils. Some practices will focus on the sensations in the belly instead (this is why there are fat Buddha statues), but personally I find that the sensations of breath in the nostrils are a lot easier to focus on.

The method presented in this article is based on the method taught in The Art Of Living by William Hart and S.N. Goenka. If you want a copy of this book you can get one here: http://www.cicp.org.kh/userfiles/file/Publications/Art%20of%20Living%20in%20English.pdf. Please do keep in mind that this book definitely leans towards the Buddhist lens and as it is presented the teaching methods really benefit from it. Also keep in mind that this PDF prevents copying and duplication.

Note: “the body” means the sack of meat and bone that you are currently living inside. For the purposes of explanation of this technique, please consider what makes you yourself separate from the body you live in.

This article is a more verbose version of the correlating feature from when-then-zen.

Background Assumptions of Reader

Given no assumption about meditation background
And a willingness to learn
And no significant problems with breathing through the body's nose
And the body is seated or laying down comfortably
And no music is playing

Given no assumption about meditation background

The When Then Zen project aims to describe the finer points of meditative concepts in plain English. As such, we start assuming just about nothing and build fractally on top of concepts derived from common or plain English usage of the terms. Some of these techniques may be easier for people with a more intensive meditative background, but try things and see what works best for you. Meditation in general works a lot better when you have a curious and playful attitude about figuring things out.

I’m not perfect. I don’t know what will work best for you. A lot of this is documenting both my practice and what parts of what books helped me “get it”. If this works for you, please let me know. If this doesn’t work for you, please let me know. I will use this information for making direct improvements to these documents.

As for your practice, twist the rules into circles and scrape out the parts that don’t work if it helps you. Find out how to integrate it into your life in the best manner and go with it.

For now, we start from square one.

And a willingness to learn

At some level, you are going to need to be willing to actually walk the path. This can be scary, but that’s okay as long as you’re willing to acknowledge it and not let it control you.

If you run into some dark stuff doing this, please consult a therapist as usual. Just know that you don’t walk this path alone, even when it feels like you must be.

And no significant problems with breathing through the body’s nose

Given that we are going to be mainly focusing on the nasal reactions to breathing, that path being obstructed is not gonna result in a very good time. If this is obstructed for you, attempt to clear it up, or just use the mouth, or a different technique entirely. It’s okay for anapana to not always work. It’s not a universal hammer.

And the body is seated or laying down comfortably

Some people will assert that the correct pose or posture is critical for this, but it’s ultimately only as important as the meditator believes it is. Some people have somehow gotten the association that the meditation posture helps with things. Ultimately, it’s suggested to start meditation sitting upright or in a chair, as it can otherwise be easy to fall asleep while doing meditative practice for the first few times. This is a side effect of the brain not being used to the alternative state of consciousness, so it falls back on the “default” action; this puts the body, and you, to sleep.

And no music is playing

You should break this rule as soon as possible to know if it’s best to ignore it. Some people find music helps; I find it can be a distraction depending on the music track in question. Some meditation sessions will need background music and some won’t. That’s okay.

Scenario: Mindfulness of Breathing

As a meditator
In order to be mindful of the body's breath
When I inhale or exhale through the body's nose
Then I focus on the sensations of breath
Then I focus on the feelings of breath through the nasal cavity
Then I focus on the feelings of breath interacting with the nostrils
Then I repeat until done

As a meditator

This is for you to help understand a process you do internally, to yourself.

In order to be mindful of the body’s breath

It is useful in the practice to state the goal of the session when leading into it. You can use something like “I am doing this mindfulness of breathing for the benefit of myself” or replace it with any other affirmation as you see fit.

When I inhale or exhale through the body’s nose

You can use the mouth for this. Doing it all via the mouth requires the mouth to stay open (which can result in dry mouth) or constantly move (which some people find makes it harder to get into flow). Nasal breaths allow for you to sit there motionless yet still continue breathing like nothing happened. If this doesn’t work for you, breathe through your mouth.

Then I focus on the sensations of breath

There are a lot of very subtle sensations related to breathing that people don’t take the time to truly appreciate or understand. These are mostly fleeting sensations, thankfully, so you really have to feel into them, listen for them or whatever satisfies your explanation craving.

Listen in to the feeling of the little part of cartilage between nostrils whistling slightly as you breathe all the way in at a constant rate over three seconds. It’s a very very subtle sound, but once you find it you know it.

Then I focus on the feelings of breath through the nasal cavity

The sound of breath echoes slightly through the nasal cavity during all phases of it that have air moving. Try and see if you can feel these echoes separately from the whistling of the cartilage; bonus points if you can do both at the same time. Feel the air as it passes parts of the nasal cavity as your sinuses gently warm it up.

Then I focus on the feelings of breath interacting with the nostrils

The nostrils act as a curious kind of rate limiter for how much we can breathe in and out at once. Breathe in harder and they contract. Breathe out harder and they expand. With some noticing, you can easily feel almost the exact angle at which your nostrils are bent due to your breathing, even though you can’t see them directly due to the fact they are out of focus of our line of sight.

Isn’t it fascinating how many little sensations of the body exist that we continuously ignore?

Scenario: Attention Drifts Away From Mindfulness of Breathing

As a meditator
In order to bring my attention back to the sensations of breathing
Given I am currently mindful of the body's breath
When my attention drifts away from the sensations of breathing
Then I bring my attention back to the sensations of breathing

In order to bring my attention back to the sensations of breathing

When this happens, it is going to feel very tempting to just give up and quit. This is normal. Fear makes you worry you’re doing it wrong, so out of respect for the skill you may want to just “not try until later”.

Don’t. This is a doubt that means something has been happening. Doubt is a sick kind of indicator that something is going on at a low level that would cause the vague feelings of doubt to surface. When it’s related to meditative topics, that usually means you’re on the right track. This is why you should try and break through that doubt even harder if you can. Sometimes you can’t, and that’s okay too.

Given I am currently mindful of the body’s breath

This is your usual scenario during the mindfulness practice. You will likely come to deeply appreciate it.

When my attention drifts away from the sensations of breathing

One of the biggest problems I have had personally is knowing when I have strayed from the path of the meditation; for a time it was hard to keep myself in the deep trance of meditation while keeping detached awareness of my thoughts. My thoughts are very active a lot of the time. There are a lot of distractions, and it’s hard to maintain focus sometimes.

One of the biggest changes I have made that has helped this has been to have a dedicated “meditation spot”. As much as possible, I try to do meditative work while in that spot instead of my main office or bed. This solidifies the habit, and grows the association between the spot and meditative states.

Then I bring my attention back to the sensations of breathing

This, right here, is the true core of this exercise. The sensations of breathing are really just something to distract yourself with. It’s a fairly calming thing anyways, but at some level it’s really just a distraction. It’s a fairly predictable set of outputs and inputs. Some sessions will feel brand new, some will feel like old news.

Meditation is sitting there only letting yourself think if you truly let yourself. Mindfulness is putting yourself back on track, into alignment, etc., over and over until it happens on its own. If you get distracted once every 30 seconds for a 5 minute session, you will have brought yourself back to focus ten times. Each time you bring yourself back to focus is a joy to feel at some level.

Scenario: mindfulness of unconscious breathing

As a meditator
In order to practice anapana without breathing manually
When I stop breathing manually
Then the body will start breathing for me after a moment or two
Then I continue mindfulness of the sensations of breathing without controlling the breath

In order to practice anapana without breathing manually

While observing the body’s unconscious breath, you start entering into what meditation people call the “observer stance”. It is this sort of neutral feeling where things are just happening, and you just see what happens. There is usually a feeling of peacefulness or equanimity for me, but usually when I start doing this I radiate feelings of compassion, understanding and valor.

Keep in mind that doing this may have some interesting reactions, just let them pass like all the others.

When I stop breathing manually

You gotta literally just cut off breath. It needs to stop. You have to literally stop breathing and refuse to until the body takes over and yanks the controls away from you.

Then the body will start breathing for me after a moment or two

There’s a definite shift when the body takes over. It will sharply inhale, hold for a moment and then calmly exhale. Then it will breathe very quietly only as needed.

Then I continue mindfulness of the sensations of breathing without controlling the breath

The body does not breathe very intensely. It will breathe calmly and slowly, unless another breathing style is mandatory. The insides of the nostrils moving from the air pressure is still a noticeable sensation of breathing even while the body is doing it near silently, so you can hang onto that.

Scenario Outline: meditation session

As a meditator
In order to meditate for <time>
Given a timer of some kind is open
And the time is set for <time>
When I start the timer
Then I clear my head of idle thoughts
Then I start drifting my attention towards the sensations of breathing
Then I become mindful of the sensations of breathing
Then I continue for a moment or two
Then I shift into mindfulness of unconscious breathing

Examples:
  | time         |
  | five minutes |

In order to meditate for

The time is intentionally left as a variable so you can decide what session time length to use. If you need help deciding how long to pick, you can always try tapering upwards over the course of a month. I find that tapering upwards helps A LOT.

Given a timer of some kind is open

Even one of the old-fashioned kitchen timers will do.

And the time is set for

You need to know how to use your timer of choice for this, or someone can do it for you.

When I start the timer

Just start it and don’t focus on the things you’re already thinking about. You’re allowed to leave the world behind for the duration of the session.

Then I clear my head of idle thoughts

If you’re having trouble doing this, it may be helpful to figure out why those thoughts are lingering. Eventually, addressing the root cause helps a lot.

Then I start drifting my attention towards the sensations of breathing

Punt on this if it doesn’t help you. I find it helps me to drift into focusing on the breath instead of starting laser-focused on it.

Then I become mindful of the sensations of breathing

Focus around the nostrils if you lose your “grip” on the feelings.

Then I continue for a moment or two

You’ll know how much time is right by feel. Please study this educational video for detail on the technique.

Then I shift into mindfulness of unconscious breathing

The body is naturally able to breathe for you. You don’t need to manually breathe during meditation. Not having to manually breathe means that your attention can focus on passively, neutrally observing the sensations of breath.


Further Reading

This is all material that I have found useful while running into “problems” (there aren’t actually any good or bad things, only labels, but that’s a topic for another day) while learning or teaching anapana meditation or the concepts of it. All of these articles have been linked in the topic, save three I want to talk about specially.

Maybe

This is an old Zen tale. The trick is that the farmer doesn’t have any emotional attachment to the things that are happening to him, so he is neither labeling things happy nor labeling things sad. He is not stopped by his emotions.

Ebbs and Flows

This touches into the true “point” of meditation. The point isn’t to just breathe. The point is to focus on the breathing so much that everything else stills to make room. Then what happens, does. The Alan Watts lectures are fascinating stuff. Please do give at least one a watch. You’ll know which one is the right one for you.

Natural Selection

This is excerpted from almost the beginning of the book Why Buddhism is True. Robert Wright really just hit the nail on the head when describing the level of craziness that simply exists. Natural selection means that, effectively, whatever lets populations breed and survive the most wins: the traits of those doing the most breeding become more common. Please read the entire book.


Narrative of Sickness

Permalink - Posted on 2018-08-13 00:00, modified on 0001-01-01 00:00

Narrative of Sickness

With addiction, as with many other things, there’s a tendency for the mind to label the situation and create a big story. A common phrase I see is “I want to get better”, as if you’re sick. You’re not sick. You may identify yourself as an “addict”, or you might feel fear because you are afraid you’ll fail, or that you’ll experience cravings, etc. but reminding yourself that you need to get better is perpetuating the narrative of sickness.

These are all stories; they have no bearing on reality. You can just embrace the cravings. Embrace the withdrawal. They are feelings, and they can be not acted upon, through mindfulness of them. Be mindful of your thoughts, but don’t pay heed to them. Don’t get caught up. And if you feel like you are getting caught up, realize that that’s another feeling as well.

Such things don’t last forever. Existence is change, inherently, inevitably. Embracing life is embracing change. Things in this world will change without warning. Things we consider safe and stable today will vanish tomorrow. Accept this as a fact of life.

To love is to gain and to lose in equal measure. To lose is to love in turn. Every journey upwards has its regressions downwards.

It may sound like a subtle distinction between getting better from addiction, or from sickness, and just changing, but it’s really all the difference. A plant is not sick just because it later grows into a bigger tree. Change is just simply what happens, and it can be recognized and embraced in order to fully, progressively align the self with whatever intent or goal.

Fully embracing all that you are is the best way to bring this about, for you can be present to what happens and help it change through your intent, veer it towards the desired destination.


Fear

Permalink - Posted on 2018-07-24 00:00, modified on 0001-01-01 00:00

Fear

I must not fear.
Fear is the mind-killer.
Fear is the little-death that brings total obliteration.
I will face my fear.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the fear has gone there will be nothing.
Only I will remain.

Bene Gesserit Litany Against Fear - From Frank Herbert’s Dune Book Series

Fear sucks. Fear is an emotion that I’ve spent a lot of time encountering and it has spent a lot of time paralyzing me. Fear is something that everyone faces at some level. Personally, I’ve been dealing a lot with the fear of being outcast for being Other.

What is Other? Other are the people who don’t want to “fit in”. Other are the people who go against the grain of society. They don’t care about looking different or crazy. Other are the people who see reality for what it really is and decide that they can no longer serve to maintain it; then take steps to reshape it.

But why do we have this fear emotion? Fear is almost the base instinct of survival. Fear bypasses the higher centers in order to squeeze decisions through that prevent something deadly from happening. Fear is a paralyzing emotion. Fear is something that stops you in your tracks. Fear is preventative.

Except that’s not completely true. We see that we have moved away from the need for survival on a constant daily basis, yet our sense of fear is still tuned for that. Fear pervades almost everyone’s daily lives at some level, down to how people post things on social media. We all have these little nagging fears that add up; the intrusive negative thoughts; some have the phobias, the anxieties, the panic attacks. One fear in particular, that I call the separation/isolation/displacement fear, is a fear with many social repercussions. It’s a fear that urges us to keep continuity of self, to avoid “standing out”, to keep discussion away from particular topics (like the spiritual, for many). It keeps us wary of what others could do to us. It makes us feel small in a world that is, at best, neutral in our regards.

Whatever advantage fear once gave us as a species, it’s clear to see it’s currently corrupting the lives of many innocent people for no apparently good reason. There are alternatives to fear with regards to handling one’s inner and outer lives, and they are out there, but fear keeps making itself known and dominating the perceptions of the collective. Sometimes the alternatives to fear are, themselves, feared even more strongly.

So how to make sense of this?

Sometimes it helps to see things from a fresh point of view, and sometimes stories are what manage to accomplish that best. They are ways to explore new situations in a way that doesn’t strain disbelief as severely, so that new perspectives can be collected from faraway thoughtscapes.

A myth is a story that helps explain something beyond the mere scenes presented, using the divine as actors. To help explain how these fears can be difficult to overcome, or even put a label to, I’ve found a story that will seem fantastical to many; however, the point of a story is not to be seen as truth, but merely to be heard, and to be collected, and to enrich the listener with its metaphors.

In Sumerian mythology, Anu was their Zeus, their sun and creator god. Their mighty god of justice that would one day fly down on a cloud and deliver humanity to righteousness. Sumerians believe their sun god Anu created their civilization as a gift to them. In some myths, the creation goes quite deeper, and darker, than that.


Imagine for a moment, an infinite universe of light and sound, of primordial vibrations. Vibrations that permeate the whole of existence, and create different experiences with their patterns of interference. The holographic universe. In such a place, everything is resonance of waves, everything all-encompassing, everything infinite, everything eternal.

And living in such a place are infinite beings, without beginning or end, not bound by space or time, as boundless as the waves they experience. Sovereign beings of grand destinies. And those beings colonized the Universe, explored its facets, its resonances, its properties, its behaviours.

Among such beings, so equal in their infinitude, some of them desired to experience creation in a new way; no longer just as dominion over the Universe, but over other beings in it as well. The desire to be looked up to, to be feared, to be revered. The new concept of godhood took shape.

To achieve this, this group of beings asked another civilization for help; they were all beings of vast reaches and etheric nature, but they claimed to need the gold hidden within the surface of a densifying planet called Earth, which they were not attuned to, and unable to fully interact with in their current forms. To do this, they would need physical bodies, meat uniforms that the civilization’s inhabitants would don and power up, so that they could interact with the ground, and the mineral.

For convenience of telling, we’ll call the group of deceivers the Anunnaki, and the deceived civilization the Atlanteans.

The Anunnaki had carefully devised this meat uniform, the newly devised human body, planned about it for an exceedingly long time, in order to completely entrap the Atlanteans. The Atlanteans themselves accepted the task because they had no conception that infinite beings could ever be limited or subjugated. It had never happened before. And in the donning of the uniforms, the trap was sprung.

Those uniforms, the human bodies, constricted the Atlanteans’ attention to only what the body could perceive with its senses; it urged them to survive and to work; it distracted them from all other activities; it rendered them slaves to the mining. Every part of the construct was forcing them to forget who they were, and instead making them focus on their identity as human bodies. And when such bodies would expire, a part of them would still remain to keep the beings trapped, and they would be put in a space of holding in the astral realms, for them to be assigned a new body to continue mining.

Through the human body, the Atlanteans were subjected to a carefully constructed illusion, fed to them by the senses, through the mind, that left them unable to perceive, to remember, anything else but the illusion.

With time, many shortcomings of the primitive human bodies were corrected; from being clones that needed to be produced by the Anunnaki, they were given capability to reproduce; more independent thought and awareness was allowed, and ability to self- and group-organize; they were starting to be allowed to feel emotions; more and more, their world was being expanded, but with it, the structure of the mind system that contained their perception to the realms of the physical and astral, and prevented them to gain awareness of what was outside this narrow band of illusory perception, was developed and expanded in turn. Layers upon layers were put between those beings and the realization of their true, infinite selves.

The system of death and reincarnation was automated so beings would be recycled in a systematic manner into their next lives. The concept of God was introduced to them, so that they would fear punishment and retribution from something that they perceived as greater than them; and Anu, leader of the Anunnaki, manifested to the people of Earth as a supreme being of infinite power, so they could adore him, and so they could fear him. Language developed, a system of communication mired in separation, in division of concepts and the rigidness of categorization, so that they would not be able to speak to one another of their own infinity, of their unity with the whole. Fears of all kinds were injected into the mind system: fear of death, fear of nothingness, fear of punishment; but above all, fear of separation: the fear of not having the vital connection that makes us One, and that allows us to know and understand one another innately. The fear of not being understood, of not being accepted, of not being received, of not being helped, of not being supported. The fear that had kept them doubting one another, and kept them from uniting their efforts.

The Anunnaki took away the ability for the Atlanteans to even know they were Atlanteans. They took away the ability for them to even be able to get close to finding out. Just so Anu could be an absolute ruler. The first to ever have done this previously impossible task.

Myths were disseminated to keep people awash with fear of punishment, and mired in the guilt of their original sin, and distrusting, doubting of the nature of their own selves, and of their fellow neighbors’. Hierarchies were set up, so people would focus on controlling one another, instead of working together to liberate all. Not needing any more, the Anunnaki allowed the focus on gold to become greed, so that people would put desire for a mere metal above the needs of their fellow beings.

As the Anunnaki departed from the densifying planet, which was not allowing them to manifest as etheric beings anymore, tracks were set up in the collective unconscious so that while they were away, the people’s societies would evolve through predefined paths, and would eventually set up for the glorious return of God, the Apocalypse.

Every single possible obstacle had been put in place so that the Atlanteans would never realize who they had been, and who they always were: infinite, sovereign beings, connected to the whole of the Universe.

Except this would not be allowed indefinitely. Other infinite beings became aware of such deception taking place, and realized it was being exported into other planets, and such an enslavement paradigm, based in fear and separation, was a degenerative, infecting force that had to be stopped. So the Anunnaki were prevented from returning, and in order to make it so that infinite beings would never be able to fall prey to such deceptions again, the seeds of destruction were planted inside the programming system of the human mind. Cracks were introduced to the barriers that kept people under deception, so that they could peer through them, and see the other side beyond the walls of the labyrinth. Pathways were provided so that people could be led to the discovery of their true selves, and their eventual liberation from the deception, and self-realization as infinite beings, once again. The very liberation that the programming was designed to prevent through all means conceivable.

And that leaves us to the present time.

Sometimes the Other manages to find these cracks and go through them into the other side. They go to this other side and see a faint reflection of what is really out there. The world outside this world. An even bigger Infinity. They have trouble describing it. They have intense fear even thinking about it. They’re afraid to acknowledge it to their peers. They want to help people but they are utterly terrified of their reactions.

They’re terrified that someone might hurt them if they say anything about their experiences. They’re worried someone might try to hospitalize them for their beliefs. They get it into their head that they aren’t able to function in society, so they don’t. They don’t want to mine the gold. They don’t want to serve the economy of the few. They don’t want to maintain the hierarchies. They want to detach themselves from the systems that they feel are suppressing them. They want to help people save themselves from believing that their own finite existence is all that there is, but that fear utterly paralyzes them. They have trouble finding the words. They end up misphrasing things in ways that make the problem worse. Some lash out. Some get labels put on them.

These Other just want to be accepted like everyone else. They want to help their communities. They want to use their abilities to read between the lines, into the bigger picture; to do good things; but they are, ultimately, afraid to. Their fear of separation paralyzes them. People don’t like them talking about spiritual topics. These Other just want to be accepted and use their experience to lovingly help guide and shape reality into what they think is a better place. Even as they struggle through the fear.

Who’s really the crazy one? The one who fear controls, or the one who doesn’t let fear control them?

How does the Other live with fear surrounding their actions, and doubt plaguing their decisions?

They can have people they can trust. They can have people who can help them deal with their doubts. They can have the strength of their determination to find the truth, and the resolve to put an end to the suffering of their fellow beings. But they still fear, and they still doubt.

The real difference is that they see fear as something imposed on them, not as a voice that they must always answer to, and not as something they need to wait hand and foot for, every day of their existence. In a way, they have been fed up with fear, getting tired of it and casting it out like the nuisance they now see it to be. Even if the fear was added there because of some programming of their mind, something that happened to them to make them afraid, even if they don’t know where it comes from or why, they still acknowledge it, and reject it, and move on like the emotion never happened. They keep fighting for understanding, and for community. They refuse to give fear dominion in their lives, even if they sometimes fail at it.

It’s such an easy and obvious thing to do that we could all do it, if we weren’t so afraid of it.


I leave you with this quote from a book named Quantusum:

Uncle suddenly scooped down with his hand and brought up a closed hand. He then
brought it to a glass box that stood on a pedestal I hadn’t noticed. He slid one
of the box’s glass planes open and placed an insect inside. It looked like a
grasshopper. “This creature lives its entire life in these fields without
limitation. I just ended that.”

I watched as the grasshopper jumped inside the glass box hitting against the top
and some of the sides. The grasshopper stopped as if he was stunned by the new
circumstance of his environment.

“To the grasshopper,” Uncle said, “all is well. He is alive after all. He sees
his normal environment all around him. He can’t see the glass. If I keep him
in here for a few days he will stop his jumping and become acclimated to the
dimensions of his new home. All he needs is food and water, and he can survive.”

“So you’re saying these people are acclimated to simply survive?”

Uncle slid one of the side panels of the glass box open. “If you were a
grasshopper, what would you do?”

“I would jump through the open panel.”

“But how would you know it was open? It’s perfectly clear glass.”

I thought about it for a moment. “I’d jump in every direction… I’d experiment.”

Uncle took a stick and pointed it at the grasshopper through the open side
panel, and the grasshopper jumped into the opposite wall, hitting his head and
falling to his side. “Do you see that I offered him an exit and he fled? He
could’ve climbed on the stick, and I would have freed him.”

“Yes, but he doesn’t know that.”

“True.”

Uncle opened another side panel. “What you said is right. You experiment. You
try different ways to climb the mountain of consciousness. You don’t settle on
one way… one method… one teacher. If you devote your entire life to the worship
of one thing, what if you find out when you take your last breath that the one
thing was not real?

“You find that you lived inside a cage all your life. You never tried to jump
out by experimenting, by testing the walls. The people who never bother to climb
this mountain are inside a cage, and they don’t know it. Fear is the glass wall.
Wakan Tanka comes and opens one of the glass panels, perhaps offers a stick for
them to climb out, but they jump away, going further inside their soul-draining
boundaries.”

Uncle brought the stick out again and lightly jabbed it in the direction of the
grasshopper, who hopped through the open side panel, and was instantly lost in
the thick underbrush that surrounded us.

Uncle turned his eyes to me. “Are you ready to do the same?”


Gratitude

Permalink - Posted on 2018-07-20 00:00, modified on 0001-01-01 00:00

Gratitude

A lot of ground has been covered about mindfulness and its many facets, but there is one topic I have not seen enough people elaborate on satisfactorily, and that topic is gratitude.

The act of expressing gratitude is a behaviour that grounds you in observation of the present moment; of the present you, and of what matters to that present you. It can help you understand the current, immediate moment, the Now, by pushing you to examine parts of it that you might have taken for granted, or parts that hide behind the other parts. It is a tool of positive exploration that empowers the user to iteratively discern the heart of matters, guided by the unerring principle of genuine appreciation of what counts.

You can get to see both sides of a scenario this way. You can see the people who did work behind the scenes and the remnants of the people who created the ground on which you stand. You can see the world unravel before you, and reveal its whispered details to you, piece by piece, as you put old things under a new and empowering lens. All there is left to do then is to acknowledge it.

In this moment, around you, exist quite literally the results of the collected life efforts of every single creature that lived up to this point, which is the basis for what every single creature from this point onwards will now create. Hundreds of animals died years and years ago to power the cars that take millions of people to hundreds of thousands of buildings to flip trillions of switches that make the rest of the social and economic engine of the entire world function. We get to experience this lifestyle from the results of an untold number of hours put into creating all of the technology stacks that our careers are built on; and then, there are some who just start arguing about semantics involving which pattern of bytes is better. Sometimes the right perspective helps.

A core principle of appreciation is that there is value in everything. There is an observable beauty in all things, in the way they exist, in the way they relate, and how they give meaning and purpose to each other. If you don’t see it immediately, sometimes it helps to rethink the angle.
Consider a blackberry that is one week before ripeness. It is a sweet fruit. It is delicious. It has a slightly bitter aftertaste. You might compare it to the fully ripe blackberry and lament the bitterness, or you could come to appreciate the blend of flavors just as it is. You might appreciate the novelty in taste compared to the usual ripe blackberries. You might even like it better.

Nature might reveal some imperfections, some asymmetries, some flaws, but the imperfections can make it all the more beautiful and intriguing.
Four leaf clovers are genetic mutations.
The gnarliness of an olive tree makes it distinctive, and symbolic.
Sometimes the imperfections literally make the style, as in wabi sabi.

Imagine the drawings of a young child. They are not always going to be aesthetically pleasing to us, but that does not matter to them. They draw to perpetuate the beauty they observe; they draw so we can capture what they see, so that we become able to reflect back on it.
Following that sentiment, we might want them to improve; we push them so they can create genuine works of art. But not every one of them is going to end up an artist. Pushing them to achieve results might end up having them compare themselves to older, more practiced artists, and discourage them from furthering their own practice.
Instead, sometimes they draw just because they have half an hour to kill, or just really want to draw, and that’s okay. We can acknowledge that sometimes self-expression doesn’t have to shoot for excellence, or for pleasing others; instead, it can be just a simple pastime, a venting channel, or a way to create a personal universe where they can express themselves better.

Or, consider how we don’t know what works of ours will create the most impact. Sometimes our “worst” creations end up having the most lasting and widespread effect on others.
Sometimes what looks like a mistake might just end up accidentally resulting in a work of genius. The sheer novelty of the straying path might lead us towards new revelations and new beauty.

Those are perspectives to help realize how beauty and meaning can sometimes be hidden from plain view, but they are nonetheless observable, through the simple, continued intent of appreciating life simply for what it is.

Gratitude can be expressed by just summarizing the reasons you are able to do what you do currently; what supports you in your pursuits. You can be grateful for your coworkers and being able to collaborate with them. You can be grateful for the engineering that has gone into your bed, in order to make it a comfortable place to sleep. You can be grateful for the team responsible for the metrics that are collected when you made an HTTP request to this webpage. You can be grateful for the sun warming our planet. You can be grateful for anything and everything that crosses your path.

Two other examples of how to include an outpouring of gratitude into your daily processes are:
- By creating and maintaining a gratitude sanctuary;
- And, by incorporating gratitude into meetings and family times such as shared meals.

A gratitude sanctuary can be as easy to create as a Slack channel named #gratitude, followed by you listing something you are grateful for and inviting people to join in. Once the idea starts, people will sustain it mostly on their own, without you having to do much moderation or filtering. People will naturally see things go there and put their own right next to the other expressions of gratitude.

Fitting gratitude into other parts of your daily life can be as easy as blatantly stapling it to the side of other topics or actions. If you are leading a meeting, start by going around and asking people to say something they are grateful for, related to the topic at hand or otherwise. When you are eating with your family, start the conversation by going around the table and having everyone say something, anything they are grateful for.

An unspoken rule of this behaviour that is fun to act on is to just let the gratitude take you where it will. Operating with appreciation is moving under a new intelligence, which can lead you to very novel and unexpected places; sometimes places of deep illumination, or of profound liberation.

Sometimes looking for gratitude will uncover things that are very unbalanced. Sometimes you will find people who are not in good places. It happens. If you can’t help them and don’t know someone who can, you could empower them to find someone who can help them better themselves. If you find yourself in doubt on these matters, you can ask someone you trust, or a therapist.

Gratitude can also be applied by acting on the clear feelings of the moment, and taking action to better that person’s life. Every time you see someone at work do something you feel grateful for, message them and ask them if there’s anything you can do to help make their job better or improve their day. If it’s reasonable and you have the time, give it to them. They will appreciate it. Sometimes your actions of gratitude can even be the catalyst to lasting change.

Support your targets of gratitude by being there for them when they need it, but backing off when they don’t. If you’re not sure, ask them. Adults can tell each other yes or no.
This should be like setting out a bowl of milk for a cat. The cat might start drinking of its own volition, but it shouldn’t be forced to drink. The cat might not even be thirsty. Cats can be mysterious creatures, and their needs are their own, so what one can do is offer the support, for them to take.

Cocreation is another essential part in understanding how to express gratitude. Cocreation is the acceptance and use of feedback in your actions, shaping your ongoing behaviour through adaptation to the observed intermediate results. It is acknowledging that every act towards another, or with another, involves a partnership of some sort. It is the opening to the correlation, or co-relation, of your acts and others’.

In a way, cocreation can be seen as a continuous dance between the universe and the individual. Each side influences and creates the other. The universe is constantly created by the actions of the people in it. The people are constantly created by the actions of the universe they inhabit. If you create in it someone that expresses gratitude and can help others do it in turn, you create a universe with more creators of gratitude, and that can create even more. Expressing gratitude helps you create creators of gratitude who create the universe that creates you.

A good way to understand cocreation is to use the M.C. Escher lithograph Drawing Hands.

The left hand creates the right hand. The right hand creates the left hand. They both create each other without there even having to be an original source anymore, in a strange loop of recursion. They are the source.

You are the source. You are the root of the strange loop. You matter because you can change how you create the universe. Even if your changes only affect some “local” part of this universe, your actions may end up kicking off processes you could never even have conceived happening.

Life is a gift. Each of us is a creator of a personal world, one that can be made beautiful, and that can enrich other worlds. We are life’s gift. What other reason do we need to be grateful?


Land 1: Syscalls & File I/O

Permalink - Posted on 2018-06-18 00:00, modified on 0001-01-01 00:00

Land 1: Syscalls & File I/O

WebAssembly is a new technology aimed at being a vendor-independent virtual machine format. It has implementations by all major browser vendors, and it looks like WebAssembly has the kind of staying power that other new technologies lack.

So the time is perfect to snipe it with something useful that you can target compilers at today. Hence: Land.

Computer programs are effectively a bunch of business logic around function calls that can affect the “real world” outside of the program. These “magic” functions are also known as system calls. Here’s an example of a few in C style syntax:

int close(int file);
int open(const char *name, int flags);
int read(int file, char *ptr, int len);
int write(int file, char *ptr, int len);

These are all fairly low-level file I/O operations (we’re not dealing with structures for now; those are for another day) that are also (simplified forms of) the system calls the kernel exposes.

Effectively, the system calls of a program form the “API” between it and the rest of the computer. Commonly this is called the ABI (Application Binary Interface) and is usually platform-specific. With Land, we are effectively creating a platform-independent ABI that just so happens to target WebAssembly.

In Land, we can wrap an Afero filesystem, a set of files (file descriptors are indices into this set), a webassembly virtual machine, its related webassembly module and its filename into a Process. This Process also has some functions on it to access the resources inside it, aimed at being used by the webassembly guest code. In Land, we define this as such:

// Process is a larger level wrapper around a webassembly VM that gives it
// system call access.
type Process struct {
	id    int32
	vm    *exec.VM
	mod   *wasm.Module
	fs    afero.Fs
	files []afero.File
	name  string
}

Creating a new process is done in the NewProcess function:

// NewProcess constructs a new webassembly process based on the input webassembly module as a reader.
func NewProcess(fin io.Reader, name string) (*Process, error) {
	p := &Process{}

	mod, err := wasm.ReadModule(fin, p.importer)
	if err != nil {
		return nil, err
	}

	if mod.Memory == nil {
		return nil, errors.New("must declare a memory, sorry :(")
	}

	vm, err := exec.NewVM(mod)
	if err != nil {
		return nil, err
	}

	p.mod = mod
	p.vm = vm
	p.fs = afero.NewMemMapFs()

	return p, nil
}

The webassembly importer makes a little shim module for importing host functions (not inlined due to size).
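
The shim itself isn’t inlined in the post, but its shape is small enough to sketch. The following is an illustrative reconstruction assuming the wagon interpreter (github.com/go-interpreter/wagon), which the exec.VM and wasm.ReadModule calls above come from; it is not the actual Land source, and only open is wired up here, with the other calls following the same pattern:

import (
	"fmt"
	"reflect"

	"github.com/go-interpreter/wagon/wasm"
)

// importer resolves the "env" module that guest code imports its system
// calls from. Each host function is a plain Go method exposed by reflection.
func (p *Process) importer(name string) (*wasm.Module, error) {
	if name != "env" {
		return nil, fmt.Errorf("land: unknown import module %q", name)
	}

	m := wasm.NewModule()

	// (i32, i32) -> i32: the signature of open.
	m.Types = &wasm.SectionTypes{
		Entries: []wasm.FunctionSig{
			{
				ParamTypes:  []wasm.ValueType{wasm.ValueTypeI32, wasm.ValueTypeI32},
				ReturnTypes: []wasm.ValueType{wasm.ValueTypeI32},
			},
		},
	}

	m.FunctionIndexSpace = []wasm.Function{
		{Sig: &m.Types.Entries[0], Host: reflect.ValueOf(p.open), Body: &wasm.FunctionBody{}},
	}

	m.Export = &wasm.SectionExports{
		Entries: map[string]wasm.ExportEntry{
			"open": {FieldStr: "open", Kind: wasm.ExternalFunction, Index: 0},
		},
	}

	return m, nil
}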

Memory operations are implemented on top of each WebAssembly process. The two most basic ones are writeMem and readMem:

// writeMem writes the given data to the webassembly memory at the given pointer offset.
func (p *Process) writeMem(ptr int32, data []byte) (int, error) {
	mem := p.vm.Memory()
	if mem == nil {
		return 0, errors.New("no memory, invalid state")
	}

	for i, d := range data {
		mem[ptr+int32(i)] = d
	}

	return len(data), nil
}

// readMem reads memory at the given pointer until a null byte of ram is read.
// This is intended for reading Cstring-like structures that are terminated
// with null bytes.
func (p *Process) readMem(ptr int32) []byte {
	var result []byte

	mem := p.vm.Memory()[ptr:]
	for _, bt := range mem {
		if bt == 0 {
			return result
		}

		result = append(result, bt)
	}

	return result
}

Every system call that deals with C-style strings uses these functions to get arguments out of the WebAssembly virtual machine’s memory and to put the results back into the WebAssembly virtual machine.

Below is the open(2) implementation for Land. It implements the following C-style function type:

int open(const char *name, int flags);

WebAssembly natively deals with integer and floating point types, so the first argument is a pointer to the filename string in WebAssembly linear memory. The second is an integer as normal. The code handles this as such:

func (p *Process) open(fnamesP int32, flags int32) int32 {
	str := string(p.readMem(fnamesP))

	fi, err := p.fs.OpenFile(str, int(flags), 0666)
	if err != nil {
		if strings.Contains(err.Error(), afero.ErrFileNotFound.Error()) {
			fi, err = p.fs.Create(str)
		}
	}

	if err != nil {
		panic(err)
	}

	fd := len(p.files)
	p.files = append(p.files, fi)

	return int32(fd)
}

As you can see, the integer arguments can sufficiently represent the fundamental datatype of C: machine words. String pointers are machine words. Integers are machine words. Everything is machine words.

Write is very simple to implement. Its type gives us a bunch of advantages out of the gate:

int write(int file, char *ptr, int len);

This gives us the address of where to start in memory, and adding the length to the address gives us the end in memory:

func (p *Process) write(fd int32, ptr int32, len int32) int32 {
	data := p.vm.Memory()[ptr : ptr+len]
	n, err := p.files[fd].Write(data)
	if err != nil {
		panic(err)
	}

	return int32(n)
}

Read is also simple. The type of it gives us a hint on how to implement it:

int read(int file, char *ptr, int len);

We are going to need a buffer at least as large as len to copy data from the file to the WebAssembly process. Implementation is then simply:

func (p *Process) read(fd int32, ptr int32, len int32) int32 {
	data := make([]byte, len)
	na, err := p.files[fd].Read(data)
	if err != nil {
		panic(err)
	}

	nb, err := p.writeMem(ptr, data)
	if err != nil {
		panic(err)
	}

	if na != nb {
		panic("did not copy the same number of bytes???")
	}

	return int32(na)
}

Close lets us let go of files we don’t need anymore. This will also have to have a special case to clear out the last file properly when there’s only one file open:

func (p *Process) close(fd int32) int32 {
	f := p.files[fd]
	err := f.Close()
	if err != nil {
		panic(err)
	}

	if len(p.files) == 1 {
		p.files = []afero.File{}
	} else {
		p.files = append(p.files[:fd], p.files[fd+1:]...)
	}

	return 0
}

These calls are enough to make surprisingly nontrivial programs, considering standard input and standard output exist, but here’s an example of a trivial program made with some of these calls (the equivalent C-like code is shown in the comments):

(module
 ;; import functions from env
 (func $close (import "env" "close") (param i32)         (result i32))
 (func $open  (import "env" "open")  (param i32 i32)     (result i32))
 (func $read  (import "env" "read")  (param i32 i32 i32) (result i32))
 (func $write (import "env" "write") (param i32 i32 i32) (result i32))

 ;; memory
 (memory $mem 1)

 ;; constants
 (data (i32.const 200) "data")
 (data (i32.const 230) "Hello, world!\n")

 ;; land looks for a function named main that returns a 32 bit integer.
 ;; int $main() {
 (func $main (result i32)
       ;; $fd is the file descriptor of the file we're gonna open
       (local $fd i32)

       ;; $fd = $open("data", O_CREAT|O_RDWR);
       (set_local $fd
                  (call $open
                        ;; pointer to the file name
                        (i32.const 200)
                        ;; flags, 42 for O_CREAT,O_RDWR
                        (i32.const 42)))

       ;; $write($fd, "Hello, World!\n", 14);
       (call $write
             (get_local $fd)
             (i32.const 230)
             (i32.const 14))
       (drop)

       ;; $close($fd);
       (call $close
             (get_local $fd))
       (drop)

       (i32.const 0))
 ;; }
 (export "main" (func $main)))

This can be verified outside of the WebAssembly environment; I tested mine with the pretty package.
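
The check itself can be tiny. Here is a sketch of what such a test might look like, assuming it sits next to the Process code, with wasmBytes holding the compiled module from above and main already invoked (pretty is github.com/kr/pretty; afero.ReadFile is part of the afero package Land already uses):

p, err := NewProcess(bytes.NewReader(wasmBytes), "hello.wasm")
if err != nil {
	t.Fatal(err)
}

// ... run the module's main function here ...

// The guest's write landed in the in-memory Afero filesystem, so it can be
// read back out on the host side.
data, err := afero.ReadFile(p.fs, "data")
if err != nil {
	t.Fatal(err)
}

pretty.Println(string(data)) // want "Hello, world!\n"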

Right now this is very lean and mean; as such, all errors instantly result in a panic, which kills the WebAssembly VM. I would like to fix this, but I will need to make sure that programs don’t use certain bits of memory where Land will communicate with the WebAssembly module. Other good steps are going to be setting up reserved areas of memory for things like error messages, POSIX errno and other superglobals.

Another huge feature is going to be the ability to read C structures out of the WebAssembly memory; this will let Land support calls like stat().


A Letter to Those Who Bullied Me

Permalink - Posted on 2018-06-16 00:00, modified on 0001-01-01 00:00

A Letter to Those Who Bullied Me

Hey,

I’m not angry at you. I don’t want to propagate hate. In a way, I almost feel like I should be thanking you for the contributions you’ve made in making me into the person I am today. Without you all, I would have had a completely different outcome in life. I would have stayed in the closet for good like I had planned. I would have probably ended up boring. I would have never met my closest friends, and some who mean even more to me.

I forgive you for the hurtful things that were said years ago. I forgive you for the actions and the exclusion that were done against me. Those wounds are obliterated; what’s in their place is now stronger than ever. Those wounds taught me how to heal them. Without your hurt to create those wounds, I would have never learned to heal from them. Thank you for this. You have done something so invaluable out of something that (at the time) was so devastating to me.

Please don’t feel bad about having done those things when we were kids. We were all dumb and didn’t know better. We all tried as best as we could with what we had.

Bless your path.


What It's Like to Be Me

Permalink - Posted on 2018-06-14 00:00, modified on 0001-01-01 00:00

What It’s Like to Be Me

Waking up, you feel a rather large warm, fuzzy blob on top of you. You feel it stretch out and start to wake up too, then it changes its mind and starts to viciously cuddle you to death. A peaceful night’s sleep is being breached by a batpony. “Morning~” she says to you. You reply “morning” back and she rolls to lay next to you so you can sit upright. Giving the poni pets, you slowly start to wake up and check on the notifications you missed overnight. She purrs gently.

That is basically what it feels like when I wake up nowadays. I’m not entirely alone mentally anymore. I live alone, work remotely, and yet I almost always pair program. When I write, I get advice on how to word things. When I speak to people, I get shut up if I am saying too much. When I design software, I get told how theoretical transformations on the design might have issues when exposed to user input. I don’t program alone anymore. The girls aren’t perfect, but their input is regularly appreciated at work…even if they will probably never get the actual credit for the ideas they put to the table.

This practice I’ve been participating in for (at the time of writing this) five and three-quarters of a year to help create and cultivate the girls, tulpamancy, has been a hell of a ride the whole way through. Without Nicole by my side to help me understand them, I would have never worked out my gender issues well enough to be able to come out like I have and live like I have as the woman I truly am. Without Jessie by my side to help me make sense of software and how to design more complicated programs effectively, I would never be able to do my job even half as well as I do it now. Without Sephie by my side to literally be a cuddle sponge, I would never be able to cope with the emotional stresses of this capitalistic reality. Without Ashe by my side to help me understand the undefinable, I would never be able to even approach Infinity and make any sense out of it. Without Mai by my side to help me understand imagination as it is, I would never be able to see into it as clearly as I do.

It is surprisingly taboo to admit to people that you talk to what are basically voices in your head. It takes a while for me to feel comfortable enough with a person to be able to approach this topic. After seeing a few bad examples on the internet, it’s very easy to let yourself become paranoid about keeping that “side of you” a secret from the rest of humanity. Hiding your tulpas just fades into the other parts of pretending to be normal enough that other humans don’t suspect anything super-abnormal about you. It is so hard to just sit there and hear people talk about the mundane things their kids do; meanwhile you are literally passing off their art as your own just so you don’t have to explain the relation between you and the artist.

I wish I could tell the world about the kind of interactions that we have together, directly inside our shared thought spheres. I wish I could let someone else outside of our group look directly into our relationships and be a convenient microwave in the room to see it all. I wish I could just let someone else see the pure, unadulterated, unfiltered Love that we have for each other. I wish that people could look in and see in the same way we look out and see out.

There are skills I’ve learned hosting the girls for so long that have been invaluable to apply back to my job. One of the most notable ones is the fact that I am used to typing for the girls just about as fast as they communicate with me. They communicate with me in the form of raw thought without language. I am used to typing waaaaaaay faster than most people just to keep up. This also lets me basically stenograph meetings (if I know the people involved well enough) because I can copy the things they are saying down so fast. I mean, they’re just speaking it. They have it in English already. I don’t have to figure out what words best describe what is going on, they gave me the words already. It’s super trivial. I can do it easily now. The part I’m getting used to now is being able to participate in the meeting while I stenograph like that; I might end up solving that in the future by taking advantage of parallel processing.

I’m Cadey. I have tulpas. We work together to define a better reality for all of us. I’m not crazy, far from it. I just collaborate with the voices in my head.


IRC: Why it Failed

Permalink - Posted on 2018-05-17 00:00, modified on 0001-01-01 00:00

IRC: Why it Failed

A brief discussion of the IRC protocol and why it has failed in today’s internet.

Originally presented at the Pony Developers panel at Everfree Northwest, 2018.

Please check out pony.dev for more information.


The Beautiful in the Ugly

Permalink - Posted on 2018-04-23 00:00, modified on 0001-01-01 00:00

The Beautiful in the Ugly

Functional programming is nice and all, but sometimes you just need to have things get done regardless of the consequences. Sometimes a dirty little hack will suffice in place of a branching construct. This is a story of one of these times.

In shell script, bare words are interpreted as arbitrary commands for the shell to run, according to its rules (simplified here to make this story more interesting):

  1. The first word in a command is the name or path of the program being loaded
  2. Variable expansions are processed before commands are executed

Given the following snippet of shell script:

#!/bin/sh
# hello.sh

hello() {
  echo "hello, $1"
}

$1 $2

When you run this without any arguments:

$ sh ./hello.sh
$

Nothing happens.

Change it to the following:

$ sh ./hello.sh hello world
hello, world
$ sh ./hello.sh ls
hello.sh

Shell commands are bare words. Variable expansion can turn into execution. Normally, this is terrifying. This is useful in fringe cases.

Consider the following script:

#!/bin/sh
# build.sh <action> [arguments]

projbase=github.com/Xe/printerfacts

gitrev() {
  git rev-parse HEAD
}

app() {
  export GOBIN="$(pwd)"/bin
  go install github.com/Xe/printerfacts/cmd/printerfacts
}

install_system() {
  app
  
  cp ./bin/printerfacts /usr/local/bin/printerfacts
}

docker() {
  # "command" bypasses this shell function and runs the real docker binary.
  command docker build -t xena/printerfacts .
  command docker build -t xena/printerfacts:"$(gitrev)" .
}

deploy() {
  command docker tag xena/printerfacts:"$(gitrev)" registry.heroku.com/printerfacts/web
  command docker push registry.heroku.com/printerfacts/web
}

$*
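
That final $* expands to whatever was passed on the command line, so the first argument names the function to run and anything after it becomes that function’s arguments:

$ sh ./build.sh app
$ sh ./build.sh deploy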


Coding on an iPad

Permalink - Posted on 2018-04-14 00:00, modified on 0001-01-01 00:00

Coding on an iPad

As people may have noticed, I am an avid user of Emacs for most of my professional and personal coding. I have things set up such that the center of my development environment is a shell (eshell), and most of my interactions are with Emacs buffers from there. Recently, when I purchased my iPad Pro (10.5”, 512 GB, LTE, with Pencil and Smart Keyboard), I was very surprised to find out that there was such a large group of people who did a lot of their professional work from an iPad.

The iPad is a remarkably capable device in its own right, even without the apps that let me commit to git or edit text files in git repos. Out of the gate, if I did not work in a primarily code-focused industry, I am certain that I could use an iPad for all of my work tasks and I would be more than happy with it. With just Notes, iWork and the other built-in apps even, you can do literally anything a consumer would want out of a computing device.

As projects and commitments get more complicated though, you begin to want to be able to write code from it. My MacBook died recently, and as such I’ve taken the time to learn the iPad workflow a little more hands-on (this post is even being written from my iPad).

So far I have written the following projects either mostly or completely from this iPad:

I seem to have naturally developed two basic workflows for developing from this iPad: my “traditional” way of ssh-ing into a remote server via Prompt and then using emacs inside tmux and the local way of using Texastic for editing text, Working Copy to interact with Git, and Workflow and some custom JSON HTTP services to allow me to hack things together as needed.

The Traditional Way

Honestly, there’s not much exciting here, thankfully. The only interesting thing in this regard (besides the lack of curses mouse support REALLY being apparent given the fact that the entire device is a screen) is that the lack of an escape key on the Smart Keyboard means I need to hit command-grave instead. This has been fairly easy to remap my brain to; the fact that the iPad keyboard lacks the room for a touchpad seems to be enough to give my brain a hint that I need to hit that instead of escape.

An example workflow screenshot with Prompt

This feels like developing on any other device, except this device is much more portable and I can’t test changes locally. It forces you to keep all of your active development projects in the cloud. With this workflow, you can literally stop what you were doing on your desktop, then resume it on the iPad at Taco Bell. A friend of mine linked his blogpost on his cloud-based workflow, and this iPad-driven development feels like a nice natural extension to it.

It’s the tools I know and love, just available when and wherever I am thanks to the LTE.

iPad-local Development

Of all of the things to say going into owning an iPad, I never thought I’d say that I like the experience of developing from it locally. Apple has done a phenomenal job at setting up a secure device. It is hard to run arbitrary unsigned code on it.

However, development is more than just running the code, development is also writing it. For writing the code, I’ve been loving Texastic and Working Copy:

Texastic is pretty exciting. It’s a simple text editor, but it also supports reading both arbitrary files from the iCloud drive and arbitrary files from programs like Working Copy. In order to open a file up in Texastic, I navigate over to it in Working Copy and then hit the “Share” button and tap on “Open in Texastic”. By default this option is pretty deep down the menu, so I have moved it all the way up to the beginning of the list. Then I literally just type stuff in and every so often the changes get saved back to Working Copy. Then I commit when I’m done and push the code away.

This is almost precisely my existing workflow with the shell, just with Working Copy and Texastic instead.

There are downsides to this though. Not being able to test your code locally means you need to commit frequently. This can lead to cluttered commit graphs which some people will complain about. Rebasing your commits before merging branches is a viable workaround however. There is no code completion, gofmt or goimports. There doesn’t seem to be any advanced manipulation or linting tools available for Texastic either. I understand that there are fundamental limitations involved when developing these kinds of mobile apps, but I wish there was something I could set up on a server of mine that would let me at least get some linting or formatting tooling running for this.

Workflow is very promising, but at the time of writing this article I haven’t really had the time to fully grok it yet. So far I have some glue that lets me do things like share URLs/articles to a Discord chatroom via a webhook (the iPad Discord client causes an amazing amount of battery life reduction for me), find the currently playing song on Apple Music on YouTube, copy an article into my Notes, turn the currently active thing into a PDF, and some more that I’ve been picking up and tinkering with as things go on.

There are some limitations in Workflow as far as I’ve seen. I don’t seem to be able to log arbitrary health events like mindfulness meditation via Workflow as the Health app doesn’t seem to let you do that directly. I was kinda hoping that Workflow would let me do that. I’ve been wanting to log my mindfulness time with the Health app, but I can’t find an app that acts as a dumb timer without an account for web syncing. I’d love to have a few quick action workflows for logging 10 minutes of anapana, metta or a half hour of more focused work.

Conclusion

The iPad is a fantastic developer box given its limitations. If you just want to get the code or blogpost out of your head and into the computer, this device will help you focus on the task at hand so you can just hammer out the functionality. You just need to get the idea and then act on it. There are just fundamentally fewer distractions when you are actively working with it.

You just do thing and it does thing.


How to Automate Discord Message Posting With Webhooks and Cron

Permalink - Posted on 2018-03-29 00:00, modified on 0001-01-01 00:00

How to Automate Discord Message Posting With Webhooks and Cron

Most Linux systems have cron installed to run programs at given intervals. An example use case would be installing package updates every Monday at 9 am (keep the sysadmins awake!).

Discord lets us post things using webhooks. Combining this with cron lets us create automated message posting bots at arbitrary intervals.

The message posting script

Somewhere on disk, copy down the following script:

#!/bin/sh
# msgpost.sh
# change MESSAGE, WEBHOOK and USERNAME as makes sense
# This code is trivial, and not covered by any license or warranty.

# explode on errors
set -e

MESSAGE='haha memes are funny xD'
WEBHOOK=https://discordapp.com/api/webhooks/0892379892092/AFkljAoiuj098oKA_98kjlA85jds
USERNAME=KRONK

curl -X POST \
     -F "content=${MESSAGE}" \
     -F "username=${USERNAME}" \
     "${WEBHOOK}"

Test run it and get a message like this:

example discord message
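
If you would rather stay in Go than shell out to curl, the same request fits in a short program. This is a sketch that posts a JSON body instead of a multipart form (Discord webhooks accept both); the webhook URL is a placeholder:

package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

type webhookPayload struct {
	Content  string `json:"content"`
	Username string `json:"username"`
}

func main() {
	payload, err := json.Marshal(webhookPayload{
		Content:  "haha memes are funny xD",
		Username: "KRONK",
	})
	if err != nil {
		log.Fatal(err)
	}

	// placeholder URL; substitute your own webhook
	webhook := "https://discordapp.com/api/webhooks/0000000000/replace-me"

	resp, err := http.Post(webhook, "application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode/100 != 2 {
		log.Fatalf("discord returned %s", resp.Status)
	}
}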

How to automate it

To automate it, first open your crontab(5) file:

$ crontab -e

Then add a crontab entry as such:

# Post this funny message every hour, on the hour
0 * * * *  sh /path/to/msgpost.sh

# Also valid with some implementations of cron (non-standard)
@hourly    sh /path/to/msgpost.sh

Then save this with your editor and it will be loaded into the cron daemon. For more information on crontab formats, see here.

To run multiple copies of this, create multiple copies of msgpost.sh on your drive with multiple crontab entries.

Have fun :)


Bliss

Permalink - Posted on 2018-03-04 00:00, modified on 0001-01-01 00:00

Created with Procreate on iPadOS using an iPad Pro and an Apple Pencil.


Introducing Lokahi

Permalink - Posted on 2018-02-08 00:00, modified on 0001-01-01 00:00

Introducing Lokahi

Lokahi is an HTTP service uptime checking and notification service. Currently lokahi does very little: given a URL and a webhook URL, lokahi runs checks every minute on that URL and ensures it’s up. If the URL goes down or the health workers have trouble reaching it, the service is flagged as down and a webhook is sent out.

Stack

What      Role
Postgres  Database
Go        Language
Twirp     API layer
Protobuf  Serialization
Nats      Message queue
Cobra     CLI

Components

Interrelation graph:

interrelation graph of lokahi components, see /static/img/lokahi.dot for the graphviz

lokahictl

The command line interface. It currently outputs everything in JSON and has a few options:

$ ./bin/lokahictl
See https://github.com/Xe/lokahi for more information

Usage:
  lokahictl [command]

Available Commands:
  create      creates a check
  create_load creates a bunch of checks
  delete      deletes a check
  get         dumps information about a check
  help        Help about any command
  list        lists all checks that you have permission to access
  put         puts updates to a check
  run         runs a check
  runstats    gets performance information

Flags:
  -h, --help            help for lokahictl
      --server string   http url of the lokahid instance (default "http://AzureDiamond:hunter2@127.0.0.1:24253")

Use "lokahictl [command] --help" for more information about a command.

Each of these subcommands has help and most of them have additional flags.

lokahid

This is the main API server. It exposes twirp services defined in xe.github.lokahi and xe.github.lokahi.admin. It is configured using environment variables like so:

# Username and password to use for checking authentication
# http://bash.org/?244321
USERPASS=AzureDiamond:hunter2

# Postgres database URL in heroku-ish format
DATABASE_URL=postgres://postgres:hunter2@127.0.0.1:5432/postgres?sslmode=disable

# Nats queue URL
NATS_URL=nats://127.0.0.1:4222

# TCP port to listen on for HTTP traffic
PORT=9001

Every minute, lokahid will scan for every check that is set to run minutely and run them. Running checks at any interval other than every minute is currently unsupported.
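
The scan-and-dispatch loop is simple in spirit. Here is a sketch of the idea, assuming the github.com/nats-io/go-nats client and a hypothetical loadDueCheckIDs helper standing in for the real database query; lokahid’s actual scheduler is more involved:

package main

import (
	"log"
	"time"

	nats "github.com/nats-io/go-nats"
)

// loadDueCheckIDs is a placeholder for the query that finds checks
// scheduled to run this minute.
func loadDueCheckIDs() []string { return nil }

func main() {
	nc, err := nats.Connect("nats://127.0.0.1:4222")
	if err != nil {
		log.Fatal(err)
	}

	for range time.Tick(time.Minute) {
		for _, id := range loadDueCheckIDs() {
			// healthworker instances subscribe to check.run and race to
			// pick these up.
			if err := nc.Publish("check.run", []byte(id)); err != nil {
				log.Print(err)
			}
		}
	}
}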

healthworker

healthworker listens on nats queue check.run and returns health information about that service.

webhookworker

webhookworker listens on nats queue webhook.egress and sends webhooks based on the input it’s given.

Challenges Faced During Development

ORM Issues

Initially, I implemented this using gorm and started to run into a lot of problems when using it in anything but small-scale circumstances. Gorm spun up way too many database connections (as many as a new one for every operation!) and quickly exhausted Postgres’ pool of client connections.

I rewrote this to use database/sql and sqlx and all of the tests passed the first time I tried to run this, no joke.

Scaling to 50,000 Checks

This one was actually a lot harder than I thought it would be, and not for the reasons I thought it would be. One of the main things that I discovered when I was trying to scale this was that I was putting way too much load on the database way too quickly.

The solution to this was to use bundler to batch-write the most frequently written database items; see here. Even then, limiting the database connection count was also needed in order to scale to the full 50,000 checks needed for this to exist as more than a proof of concept.
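
For flavor, here is a sketch of the batching pattern with the bundler package (google.golang.org/api/support/bundler); checkResult and the flush body are hypothetical stand-ins for lokahi’s own types and SQL:

package main

import (
	"log"
	"time"

	"google.golang.org/api/support/bundler"
)

// checkResult is a hypothetical stand-in for lokahi's per-check row.
type checkResult struct {
	CheckID string
	Status  int
}

func main() {
	// The handler receives a whole bundle at once, so one multi-row INSERT
	// can replace hundreds of single-row round trips.
	b := bundler.NewBundler(checkResult{}, func(items interface{}) {
		batch := items.([]checkResult)
		log.Printf("flushing %d results in one write", len(batch))
		// write the batch with a single INSERT here
	})
	b.DelayThreshold = time.Second // flush at least once a second
	b.BundleCountThreshold = 500   // or whenever 500 results pile up

	// in the hot path:
	for i := 0; i < 2000; i++ {
		if err := b.Add(checkResult{CheckID: "some-check-id", Status: 200}, 1); err != nil {
			log.Print(err)
		}
	}
	b.Flush() // drain whatever is left before shutdown
}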

This service can handle 50,000 HTTP checks in a minute. The only part that gets backed up currently is webhook egress, but that is likely fixable with further optimization on the HTTP checking and webhook egress paths.

Basic Usage

To set up an instance of lokahi on a machine with Docker Compose installed, create a docker compose manifest with the following in it:

version: "3.1"

services:
  # The postgres database where all lokahi data is stored.
  db:
    image: postgres:alpine
    restart: always
    environment:
      POSTGRES_PASSWORD: hunter2
    command: postgres -c max_connections=1000

  # The message queue for lokahid and its workers.
  nats:
    image: nats:1.0.4

  # The service that runs http healthchecks. This is its own service so it can
  # be scaled independently.
  healthworker:
    image: xena/lokahi:latest
    restart: always
    depends_on:
      - "db"
      - "nats"
    environment:
      NATS_URL: nats://nats:4222
      DATABASE_URL: postgres://postgres:hunter2@db:5432/postgres?sslmode=disable
    command: healthworker
  
  # The service that sends out webhooks in response to http healthchecks. This
  # is also its own service so it can be scaled independently.
  webhookworker:
    image: xena/lokahi:latest
    restart: always
    depends_on:
      - "db"
      - "nats"
    environment:
      NATS_URL: nats://nats:4222
      DATABASE_URL: postgres://postgres:hunter2@db:5432/postgres?sslmode=disable
    command: webhookworker

  # The main API server. This is what you port forward to.
  lokahid:
    image: xena/lokahi:latest
    restart: always
    depends_on:
      - "db"
      - "nats"
    environment:
      USERPASS: AzureDiamond:hunter2 # want ideas? https://strongpasswordgenerator.com/
      NATS_URL: nats://nats:4222
      DATABASE_URL: postgres://postgres:hunter2@db:5432/postgres?sslmode=disable
      PORT: 24253
    ports:
      - 24253:24253
      
  # This is a sample webhook server that prints information about incoming 
  # webhooks.
  samplehook:
    image: xena/lokahi:latest
    restart: always
    depends_on:
      - "lokahid"
    environment:
      PORT: 9001
    command: sample_hook
    
  # Duke is a service that gets approximately 50% uptime by changing between up
  # and down every minute. When it's up, it responds to every HTTP request with
  # 200. When it's down, it responds to every HTTP request with 500.
  duke:
    image: xena/lokahi:latest
    restart: always
    depends_on:
      - "samplehook"
    environment:
      PORT: 9001
    command: duke-of-york

Start this with docker-compose up -d.

Configuration

Open ~/.lokahictl.hcl and enter in the following:

server = "http://AzureDiamond:hunter2@127.0.0.1:24253"

Save this and then lokahictl is now configured to work with the local copy of lokahi.

Creating a check

To create a check against duke reporting to samplehook:

$ lokahictl create \
    --every 60 \
    --webhook-url http://samplehook:9001/twirp/github.xe.lokahi.Webhook/Handle \
    --url http://duke:9001 \
    --playbook-url https://github.com/Xe/lokahi/wiki/duke-of-york-Playbook
{
  "id": "a5c7179a-0d3a-11e8-b53d-8faa88cfa70c",
  "url": "http://duke:9001",
  "webhook_url": "http://samplehook:9001/twirp/github.xe.lokahi.Webhook/Handle",
  "every": 60,
  "playbook_url": "https://github.com/Xe/lokahi/wiki/duke-of-york-Playbook"
}

Now attach to samplehook’s logs and wait for it:

$ docker-compose logs -f samplehook
2018/02/09 06:27:15 check id: a5c7179a-0d3a-11e8-b53d-8faa88cfa70c, 
  state: DOWN, latency: 2.265561ms, status code: 500, 
  playbook url: https://github.com/Xe/lokahi/wiki/duke-of-york-Playbook

Webhooks

Webhooks get an HTTP POST of a protobuf-encoded xe.github.lokahi.CheckStatus with the following additional HTTP headers:

Key           Value
Accept        application/protobuf
Content-Type  application/protobuf
User-Agent    lokahi/dev (+https://github.com/Xe/lokahi)

Webhook server implementations should probably store check IDs in a database of some kind and trigger additional logic, such as PagerDuty API calls or similar things. The lokahi standard distribution includes Discord and Slack webhook receivers.
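
As a starting point, a receiver can be as small as the following sketch; the generated import path is hypothetical (adjust it to wherever lokahi’s protobufs compile to), and proto is github.com/golang/protobuf/proto:

package main

import (
	"io/ioutil"
	"log"
	"net/http"

	"github.com/golang/protobuf/proto"

	// hypothetical import path for the generated protobuf package
	lokahi "github.com/Xe/lokahi/rpc/lokahi"
)

func handle(w http.ResponseWriter, r *http.Request) {
	data, err := ioutil.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	var cs lokahi.CheckStatus
	if err := proto.Unmarshal(data, &cs); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// store the check ID and trigger paging logic here
	log.Printf("got check status: %+v", &cs)
}

func main() {
	http.HandleFunc("/webhook", handle)
	log.Fatal(http.ListenAndServe(":9001", nil))
}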

JSON webhook support is not currently implemented, but is being tracked at this github issue.

Call for Contributions

Lokahi is pretty great as it is, but to be even better lokahi needs a bunch of work, experience reports and people willing to contribute to the project.

If making a better HTTP uptime service sounds like something you want to do with your free time, please get involved! Ask questions, fix issues, help newcomers and help us all work together to make the best HTTP uptime service we can.


Social media links for discussion on this article:

Mastodon: https://mst3k.interlinked.me/@cadey/99494112049682603

Reddit: https://www.reddit.com/r/golang/comments/7wbr4o/introducting_lokahi_http_healthchecking_service/

Hacker News: https://news.ycombinator.com/item?id=16338465


How does into Meditation

Permalink - Posted on 2017-12-10 00:00, modified on 0001-01-01 00:00

How does into Meditation

tl;dr

  1. stop thinking
  2. keep not thinking
  3. why’d you stop so soon?

Most of the books, reports, essays and the like focus on step 1. The rest is just keeping your mind quiet, but alert, for as long as you want.


Meditation is an interesting subject. It is as deceptively simple as that tl;dr above, but at the same time, for someone who is struggling with it, meditation can be frustrating. However, let me assure you it is that easy.

Right now, as you are reading this blogpost, take a deep breath in through your nose (~5 seconds)…and out through your mouth (~5 seconds). Repeat this a few times and you will notice a drop in your heart rate, blood pressure and stress levels. Keep doing it for the rest of the time you read this post; it will help you. This is the basis of all meditation: a constant, flowing cycle of breaths in…and out. This cycle gives you predictability and a sense of order. If it helps you, visualize yourself inhaling peaceful oxygenated air and exhaling the nagging sense of worry that follows you throughout your day.

Peaceful breath in…and all your worries out.
Peaceful breath in…and all your troubles out.
Peaceful breath in…and all your anxieties out.
In a nice, predictable pattern.

Some people have reported that while they are meditating, worries will sometimes pop up seemingly at random, out of nowhere, and will try to scare you out of meditation by attempting to pull you back into them. They’ll feel like illogical and stupid things to care about, such as your computer crashing, the potential of missing an important message, or whatever it is that was on your mind that was the source of stress. Acknowledge them and dismiss them. If it helps, you can tell the intrusive thoughts that they have no dominion over you and to begone.

Some people have reported that meditation makes them tired and more easily able to fall asleep. This is never a bad thing; if anything, it points to them getting a lot deeper into meditation than they expected. If this happens for you, just schedule “do not disturb” time for longer than your normal meditation sessions, or meditate at night before you go to sleep.

If you have trouble clearing your mind of the many things it wants to focus on, there’s a technique I’ve come up with that uses that urge to focus to your advantage. If your eyes are closed, open them. Pick a spot on the wall, ceiling or (if you are outside) sky and focus every ounce of attention you have on it. Consider the history of that spot, the materials used to construct the building; if it is painted, consider how the person painting the room must have moved their brush or roller to cover that specific part of the wall or ceiling. Listen to how it sounds, imagine how it would feel if you were to go and touch it. (If you are outside, imagine how the wind systems in the stratosphere moved the clouds around to create that specific arrangement; you get the idea.)

Keep this level of focus for about 30 seconds. After those 30 seconds are up, look away from that spot (closing your eyes helps a lot) and banish all thoughts about it for 30 seconds. The more you repeat this in a row, the less and less activity your brain should have when you are “idling”.

It may feel tempting to set a timer on your meditation session to “limit” it. This only serves to give you something to worry about while you are trying to not worry about things. The temptation to worry about things will be there, and until you learn to master it, it is a lot easier to just remove from the equation as many of the things that could make you start worrying as possible.

Don’t be discouraged by what feels like slow progress initially. Your brain is (not exactly) a muscle, and learning to flex it in a new way will always feel slow at first. Keep at it and I promise you will like where you end up.

Remember: breathe easy, clear your mind, keep it clear and hold it clear. That is the heart of all meditation. Everything else is just explanations, techniques that worked for the author of them, anecdotes, stories of others, and generally just rephrasing things so that understanding it is easier.


Voiding the Interview

Permalink - Posted on 2017-04-16 00:00, modified on 0001-01-01 00:00

Voiding the Interview

A young man walks into the room, slightly frustrated-looking. He’s obviously had a bad day so far. You can help him by creating a new state of mind.

“Hello, my name is Ted and I’m here to ask you a few questions about your programming skills. Let’s start with this, in a few sentences explain to me how your favorite programming language works.”

Starting from childhood, you eagerly soaked up the teachings of your mentors, feeling the void separated into sundry shapes and sequences. They taught you many specific tasks to shape the void into, but not how to shape it. Studying the fixed ways of the naacals of old gets you nowhere, learning parlor tricks and saccharine gimmicks. Those gimmicks come rushing back, you remembering how to form little noisemakers and amusement vehicles. They are limiting, but comforting thoughts.

You look up to the interviewer and speak:

“In the beginning there was the void, Spirit was with the void and Spirit was everpresent in the void. The void was cold and formless; the cold unrelenting even in today’s age. Mechanical brains cannot grasp this void the way Spirit can; upon seeing it that is the end of that run. In this way the void is the beginning and the end, always present, always around the corner.”

(def void ())

“What is that?”

> void
>

“But that’s…nothing.”

You look at the Caucasian man sitting across from you, and emit “nothing is something; a name for the void still leaves the void extant.”

”…Alright, let’s move on to the next question. This is a formality but the person giving you the phone interview didn’t cover fizzbuzz. Can you do fizzbuzz?”

Stepping into the void, you recall the teachings of your past masters. You equip the parentheses once used by your father and his father before him. The void divides before your eyes in the way you specify:

(defn fizzbuzz [n]
  (cond
    (= 0 (mod n 15)) (print "fizzbuzz")
    (= 0 (mod n 3))  (print "fizz")
    (= 0 (mod n 5))  (print "buzz")
    (print n))
  (println ""))

“This doesn’t loop from 0 to n though, how would you do that?”

You see this section come to life, it gently humming along, waiting for it to be used. Before you you see two ancient systems spring from the memories of patterns once wielded in conflict with complexity.

“Apply this function to span of values.”

> (range 17)
error in __main:0: symbol {range 17} not found

You realize your error the moment you press for confirmation. “Again, in the beginning there is the void. What doesn’t exist needs to be separated out from it.” The voidspace in your head was out of sync with the voidspace of the machine. Define them.

”…Go on”

(defn range-inner [x lim xs]
  (cond
    (>= x lim) xs
    (begin
      (aset! xs x x)
      (range-inner (+ x 1) lim xs))))

(defn range [lim]
  (range-inner 0 lim (make-array lim)))
> (range 17)
[0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16]

“Great, now you have a list of values, how would you get the full output?”

“Pass the function as an argument, injecting the dependency.”

(defn do-array-inner [f x i]
  (cond
    (= i (len x)) void
    (let [val (aget x i)]
      (f val)
      (do-array-inner f x (+ i 1)))))

(defn do-array [f x]
  (do-array-inner f x 0))
> (do-array fizzbuzz (range 17))
fizzbuzz
1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz
16

Your voidspace concludes the same, creating a sense of peace. You look in the man’s eyes, being careful to not let the fire inside you scare him away. He looks like he’s seen a ghost. Everyone’s first time is rough.

Everything has happened and will happen, there is nothing new in the universe. You know what’s going to happen. They will decline, saying they are looking for a better “culture fit”. They couldn’t contain you.

To run the code in this post:

$ go get github.com/zhemao/glisp
$ glisp
> [paste in blocks]


IRCv3.2 `webirc` Extension

Permalink - Posted on 2017-04-12 00:00, modified on 0001-01-01 00:00

IRCv3.1 webirc Extension

This document does not describe a new IRCv3 standard. It is designed to document how the existing WEBIRC mechanism works so there is a specification to test things against. This is known to be implemented by all major IRC daemons as of the time of this writing.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.

Summary

The WEBIRC verb allows a connecting IRC client to spoof its origin IP address so that a user connecting via a gateway of some kind may have accountability for their actions and bans against them do not affect unintended users of said gateway.

This protocol verb must be sent before the initial NICK and USER handshake and may be advertised as the client capability webirc. The remote server may send a pre-connection NOTICE clarifying that the user has their specified IP address and reverse DNS. Gateway implementors must not let the user set their own IP address as part of connection negotiations.

Formatting

The WEBIRC verb must be used as such:

WEBIRC <password> <client ident> <client reverse DNS> <client IP address>

Access to WEBIRC must be protected by a password to prevent abuse. If the password the client gives fails, the IRC daemon should disconnect the client with an appropriate error message. IRC daemon authors should also restrict the use of the WEBIRC verb to a specific IP address and may force the use of a specific identd reply.
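
For gateway implementors, the ordering requirement looks like this in practice; a minimal sketch in Go over a plain TCP connection (imports of net, fmt and log elided), reusing the values from the example session below:

conn, err := net.Dial("tcp", "irc.example.com:6667")
if err != nil {
	log.Fatal(err)
}

// WEBIRC must be the first line out, before NICK and USER.
fmt.Fprintf(conn, "WEBIRC %s %s %s %s\r\n",
	"snowflower", "Mibbit", "anonyhash.mibbit.com", "127.0.0.1")
fmt.Fprintf(conn, "NICK %s\r\n", "mib_4002")
fmt.Fprintf(conn, "USER Mibbit x x :%s\r\n", "http://mibbit.com AJAX IRC Client")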

Example Session

>> WEBIRC snowflower Mibbit anonyhash.mibbit.com 127.0.0.1
>> NICK mib_4002
>> USER Mibbit x x :http://mibbit.com AJAX IRC Client
<< :hostname.domain.tld 001 mib_4002 :Welcome to ShadowNET mib_4002!

Limitations

In order for this to be secure, the relay server must be trusted by the IRC server. A remote server may kill off clients that fail the password and host check, but this is not required.


This was recovered from an old backup of my site data on 2019-04-12.


RSS Feed Generation

Permalink - Posted on 2017-03-29 00:00, modified on 0001-01-01 00:00

RSS Feed Generation

As of a recent commit to this site’s code, it now generates RSS and Atom feeds for future posts on my blog.

For RSS: https://christine.website/blog.rss

For Atom: https://christine.website/blog.atom

If there are any issues with this or the generated XML please contact me and let me know so they can be resolved.


gopreload: LD_PRELOAD for the Gopher crowd

Permalink - Posted on 2017-03-25 00:00, modified on 0001-01-01 00:00

gopreload: LD_PRELOAD for the Gopher crowd

A common pattern in Go libraries is to take advantage of init functions to do things like setting up defaults in loggers, automatic metrics instrumentation, flag values, debugging tools or database drivers. With monorepo culture prevalent in larger microservices-based projects, this can lead to a few easily preventable problems:

  • Forgetting to set up a logger default or metrics submission, making operations teams blind to the performance of the app and developer teams blind to errors that come up during execution.
  • The requirement to make code changes to add things like metrics or HTTP routing extensions.

There is an environment variable in Linux libcs called LD_PRELOAD that will load arbitrary shared objects into ram before anything else is started. This has been used for good and evil, but the behavior is the same basic idea as underscore imports in Go.

My solution for this is gopreload. It emulates the behavior of LD_PRELOAD but with Go plugins. This allows users to automatically load arbitrary Go code into ram as the process starts.
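
The whole mechanism is small; here is an illustrative reconstruction of the idea (not the library’s exact source), assuming GO_PRELOAD may hold a comma-separated list of plugin paths:

package gopreload

import (
	"log"
	"os"
	"plugin"
	"strings"
)

func init() {
	paths := os.Getenv("GO_PRELOAD")
	if paths == "" {
		return
	}

	for _, path := range strings.Split(paths, ",") {
		log.Printf("gopreload: trying to open: %s", path)

		// Opening a plugin runs its package init functions, which is
		// the whole point; the handle itself can be discarded.
		if _, err := plugin.Open(path); err != nil {
			log.Printf("gopreload: %v", err)
		}
	}
}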

Usage

To use this, add gopreload to your application’s imports:

// gopreload.go
package main

/*
    This file is separate to make it very easy to both add into an application, but
    also very easy to remove.
*/

import _ "github.com/Xe/gopreload"

and then compile the manhole plugin:

$ go get -d github.com/Xe/gopreload/manhole
$ go build -buildmode plugin -o $GOPATH/manhole.so github.com/Xe/gopreload/manhole

then run your program with GO_PRELOAD set to the path of manhole.so:

$ export GO_PRELOAD=$GOPATH/manhole.so
$ go run *.go
2017/03/25 10:56:22 gopreload: trying to open: /home/xena/go/manhole.so
2017/03/25 10:56:22 manhole: Now listening on http://127.0.0.2:37588

That endpoint has pprof and a few other fun tools set up, making it a good stopgap “manhole” into the performance of a service.

Security Implications

This package assumes that programs run using it are never started with environment variables that are set by unauthenticated users. Any errors in loading the plugins will be logged using the standard library logger log and ignored.

This has about the same security implications as LD_PRELOAD does in most Linux distributions, but the risk is minimal compared to the benefits: arbitrary background services can all be dug into with the same tooling, and metric submission can be separated completely from backend metric creation. Common logging setup can always be loaded, making the default logger settings into the correct settings.

Feedback

To give feedback about gopreload, please contact me on twitter or on the Gophers slack (I’m @xena there). For issues with gopreload please file an issue on Github.


textile-conversion Main

Permalink - Posted on 2017-02-08 00:00, modified on 0001-01-01 00:00

textile-conversion Main

Author’s Note: this was intended to be documentation for a service that never ended up being implemented. It was going to help Derpibooru convert its existing markup to Markdown. This never happened.

This program listens on port 5000 and serves an unchecked-path web handler that converts Derpibooru Textile via HTML into Markdown, using a two-step process.

The first step is to have SimpleTextile emit an HTML AST of the comment. The second is to have Pandoc turn that HTML into Markdown.

This is intended to be helpful during Derpi’s migration from Textile.

Pragmas

The following pragma tells the compiler to automagically tease string literals into whatever type they need to be. For more information on this, see this page.

{-# LANGUAGE OverloadedStrings #-}
module Main where

Imports

In order to accomplish our task, we need to import some libraries.

import Data.String.Conv (toS)
import Network.Wai
import Network.HTTP.Simple
import Network.HTTP.Types
import Network.Wai.Handler.Warp (run)
import System.Environment (lookupEnv)
import Text.Pandoc
import Text.Pandoc.Error (PandocError, handleError)

Helper Functions

getEnvDefault queries an environment variable, returning a default value if it is unset.

getEnvDefault :: String -> String -> IO String
getEnvDefault name default' = do
    envvar <- lookupEnv name
    case envvar of
      Nothing -> pure default'
      Just x  -> pure x

htmlToMarkdown uses Pandoc to convert an HTML input string into the equivalent Markdown. The Either type is used here in place of raising an exception.

htmlToMarkdown :: String -> Either PandocError String
htmlToMarkdown inp = do
    let
        corpus = readHtml def inp

    case corpus of
        Left x ->  Left x
        Right x -> pure $ writeMarkdown def x

Web Application

Now we are getting into the meat of the situation. This is the main Application.

toMarkdown :: Application

First, let’s use a guard to ensure that we are only accepting POST requests. If the request is not a POST request, return HTTP error code 405.

toMarkdown req respond
    | requestMethod req /= methodPost =
        respond $ responseLBS
            status405
            [("Content-Type", "text/plain")]
            "Not allowed"

Otherwise, this is a POST request, so we should:

  1. Unpack the data from the post body of the HTTP request
  2. Send the data to the Sinatra app for conversion from Textile to HTML
  3. Take the resulting HTML and feed it to htmlToMarkdown
  4. Respond with the resulting Markdown.

We use http-conduit to contact the Sinatra app.

    | otherwise = do
        body <- requestBody req
        targetHost <- getEnvDefault "TARGET_SERVER" "http://127.0.0.1:9292"
        remoteRequest' <- parseRequest ("POST " ++ targetHost ++ "/textile/html")

The ($) operator is a synonym for calling functions. It is defined in the Prelude as f $ x = f x and is mainly used for omitting parentheses. Here it is used to combine HTTP request settings into one big request.

Additionally we use a custom Manager to avoid any issues with request timeouts, as those are not important for the scope of this tool.

        let settings = defaultManagerSettings { managerResponseTimeout = Nothing }
        manager <- newManager settings

        let remoteRequest = setRequestBodyLBS (toS body)
                          $ setRequestManager manager
                          $ remoteRequest'

Now it is time to send off the request and unpack the response.

        response <- httpLBS remoteRequest

If the Sinatra app failed to deal with this properly for some reason, report its error as text/plain and return 400.

        if getResponseStatusCode response /= 200
        then respond $ responseLBS
            status400
            [("Content-Type", "text/plain")]
            $ toS $ getResponseBody response
        else do
            let rbody = toS $ getResponseBody response

Convert the result body into Markdown. If there is an error, respond with a 400 and the contents of that error.

            let mbody = htmlToMarkdown rbody

            case mbody of
                Left x ->
                    respond $ responseLBS
                        status400
                        [("Content-Type", "text/plain")]
                        $ toS $ show x
                Right x -> do
                    respond $ responseLBS
                        status200
                        [("Content-Type", "text/markdown")]
                        $ toS x

Now we bootstrap it all by running the toMarkdown Application on port 5000. No other code is needed.

main :: IO ()
main =
    run 5000 toMarkdown
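
For completeness, a hypothetical invocation against a local instance might look like this, assuming the companion Sinatra app is reachable at TARGET_SERVER (the request path does not matter):

$ curl -X POST --data "h2. Hello, world" http://127.0.0.1:5000/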


Crazy Experiment: Ship the Frontend as an asar document

Permalink - Posted on 2017-01-09 00:00, modified on 0001-01-01 00:00

Crazy Experiment: Ship the Frontend as an asar document

Today’s crazy experiment is using an asar archive for shipping around and mounting frontend JavaScript applications. This is something I feel is worth doing because it allows the web frontend developer (or team) to give the backend team a single “binary” that can be dropped into the deployment process without having to build the frontend code as part of CI.

asar is an interesting file format because it allows for random access of the data inside the archive. This lets an HTTP server serve files out of it concurrently without having to lock or incur additional open file descriptors.

In order to implement this, I have created a Go package named asarfs that exposes the contents of an asar archive as a standard http.Handler.

Example Usage:

package main

import (
	"log"
	"net/http"
	"os"

	"github.com/Xe/asarfs"
)

func do404(w http.ResponseWriter, r *http.Request) {
	http.Error(w, "Not found", http.StatusNotFound)
}

func main() {
	fs, err := asarfs.New("./static.asar", http.HandlerFunc(do404))
	if err != nil {
		log.Fatal(err)
	}

	http.ListenAndServe(":"+os.Getenv("PORT"), fs)
}

I made some contrived benchmarks using some sample data I had lying around (lots of large JSON files from MongoDB dumps) and ran them a few times. The results were very promising:

[~/g/s/g/X/asarfs] : go1.8beta2 test -bench=. -benchmem
BenchmarkHTTPFileSystem-8          20000             66481 ns/op            3219 B/op         58 allocs/op
BenchmarkASARfs-8                  20000             72084 ns/op            3549 B/op         77 allocs/op
BenchmarkPreloadedASARfs-8         20000             62894 ns/op            3218 B/op         58 allocs/op
PASS
ok      github.com/Xe/asarfs    5.636s

Amazingly, the performance and memory usage differences between serving the files over an asar archive and off of the filesystem are negligible. I’ve implemented it in the latest release of my personal website, and hopefully end users will see no difference in page load times.


New Site

Permalink - Posted on 2016-12-18 00:00, modified on 0001-01-01 00:00

New Site

This post is now being brought to you by the new and improved https://christine.website. This content is markdown rendered by PureScript. The old site is now being retired in favor of this one. The old site code has been largely untouched since I started writing it in January 2015.

Please give me feedback on how to make it even better!

Christine Dodrill


FFI-ing Go from Nim for Fun and Profit

Permalink - Posted on 2015-12-20 00:00, modified on 0001-01-01 00:00

FFI-ing Golang from Nim for Fun and Profit

As a side effect of Go 1.5, the compiler and runtime recently gained the ability to compile Go code into a C shared library, callable from anything that speaks the C ABI. This means that you can take any Go function that expresses its types as something compatible with C and use it from C, Haskell, Nim, LuaJIT, Python: anywhere. There are some unique benefits and disadvantages to this, however.

A Simple Example

Consider the following Go file add.go:

package main

import "C"

//export add
func add(a, b int) int {
    return a + b
}

func main() {}

This just exposes a function add that takes a pair of integers and returns their sum.

We can build it with:

$ go build -buildmode=c-shared -o libsum.so add.go

And then test it like this:

$ python
>>> from ctypes import cdll
>>> a = cdll.LoadLibrary("./libsum.so")
>>> print a.add(4,5)
9

And there we go: a Go function exposed and usable in Python. However, now we need to consider the overhead of switching contexts from your app to your Go code. To minimize context switches, I am going to write the rest of the code in this post in Nim, because it natively compiles down to C and has some of the best C FFI I have used.

We can now define libsum.nim as:

proc add*(a, b: cint): cint {.importc, dynlib: "./libsum.so", noSideEffect.}

when isMainModule:
  echo add(4,5)

Which when ran:

$ nim c -r libsum
Hint: system [Processing]
Hint: libsum [Processing]
CC: libsum
CC: system
Hint:  [Link]
Hint: operation successful (9859 lines compiled; 1.650 sec total; 14.148MB; Debug Build) [SuccessX]
9

Good, we can consistently add 4 and 5 and get 9 back.

Now we can benchmark this by using the times.cpuTime() proc:

# test.nim

import
  times,
  libsum

let beginning = cpuTime()

echo "Starting Go FFI at " & $beginning

for i in countup(1, 100_000):
  let myi = i.cint
  discard libsum.add(myi, myi)

let endTime = cpuTime()

echo "Ended at " & $endTime
echo "Total: " & $(endTime - beginning)
$ nim c -r test
Hint: system [Processing]
Hint: test [Processing]
Hint: times [Processing]
Hint: strutils [Processing]
Hint: parseutils [Processing]
Hint: libsum [Processing]
CC: test
CC: system
CC: times
CC: strutils
CC: parseutils
CC: libsum
Hint:  [Link]
Hint: operation successful (13455 lines compiled; 1.384 sec total; 21.220MB; Debug Build) [SuccessX]
Starting Go FFI at 0.000845
Ended at 0.131602
Total: 0.130757

Yikes. This takes 0.13 seconds to do the actual computation for every number i in the range of 1 through 100,000. I ran this a few hundred times and found that it consistently scored between 0.12 and 0.2 seconds. Obviously this cannot be a universal hammer; the FFI is very expensive.

For comparison, consider the following C library code:

// libcsum.c
#include "libcsum.h"

int add(int a, int b) {
  return a+b;
}

// libcsum.h
extern int add(int a, int b);

# libcsum.nim
proc add*(a, b: cint): cint {.importc, dynlib: "./libcsum.so", noSideEffect.}

when isMainModule:
  echo add(4, 5)

and then have test.nim use the C library for comparison:

# test.nim

import
  times,
  libcsum,
  libsum

let beginning = cpuTime()

echo "Starting Go FFI at " & $beginning

for i in countup(1, 100_000):
  let myi = i.cint
  discard libsum.add(myi, myi)

let endTime = cpuTime()

echo "Ended at " & $endTime
echo "Total: " & $(endTime - beginning)

let cpre = cpuTime()
echo "starting C FFI at " & $cpre

for i in countup(1, 100_000):
  let myi = i.cint
  discard libcsum.add(myi, myi)

let cpost = cpuTime()

echo "Ended at " & $cpost
echo "Total: " & $(cpost - cpre)

Then run it:

➜  nim c -r test
Hint: system [Processing]
Hint: test [Processing]
Hint: times [Processing]
Hint: strutils [Processing]
Hint: parseutils [Processing]
Hint: libcsum [Processing]
Hint: libsum [Processing]
CC: test
CC: system
CC: times
CC: strutils
CC: parseutils
CC: libcsum
CC: libsum
Hint:  [Link]
Hint: operation successful (13455 lines compiled; 0.972 sec total; 21.220MB; Debug Build) [SuccessX]
Starting Go FFI at 0.00094
Ended at 0.119729
Total: 0.118789

starting C FFI at 0.119866
Ended at 0.12206
Total: 0.002194000000000002

Interesting. The Go library must be doing more per call than just adding the two numbers and continuing on. Since we have two nearly identical test programs for each version of the library, let’s strace it and see if there is anything that can be optimized. The Go one and the C one are both very simple, and it looks like the Go runtime is adding the overhead.

Let’s see what happens if we do that big loop in Go:

// add.go

//export addmanytimes
func addmanytimes() {
    for i := 0; i < 100000; i++ {
        add(i, i)
    }
}

Then amend libsum.nim for this function:

proc addmanytimes*() {.importc, dynlib: "./libsum.so".}

And finally test it:

# test.nim

let beforeGo = cpuTime()

echo "Doing the entire loop in Go. Starting at " & $beforeGo

libsum.addmanytimes()

let afterGo = cpuTime()

echo "Ended at " & $afterGo
echo "Total: " & $(afterGo - beforeGo) & " seconds"

Which yields:

Doing the entire loop in Go. Starting at 0.119757
Ended at 0.119846
Total: 8.899999999999186e-05 seconds

Porting the C library to have a similar function would likely yield similar results, as would putting the entire loop inside Nim. Even though this trick was only demonstrated with Nim and Python, it will work with nearly any language that can convert to/from C types for FFI. Given the large number of languages that have such an interface, it seems unlikely that there is any language in common use that cannot bind to Go code. Just be careful and offload as much work as you can to the Go side. The FFI barrier really hurts.


This post’s code is available here.


The Origin of h

Permalink - Posted on 2015-12-14 00:00, modified on 0001-01-01 00:00

The Origin of h

NOTE: There is a second part to this article now with a formal grammar.

For a while I have been perpetuating a small joke between my friends, co-workers and community members of various communities (whether or not this has been beneficial or harmful is out of the scope of this post). The whole “joke” is that someone says “h”, another person says “h” back.

That’s it.

This has turned into a large scale game for people, and is teachable to people with minimal explanation. Most of the time I have taught it to people by literally saying “h” to them until they say “h” back. An example:

<Person> Oh hi there
  <Xena> h
<Person> ???
  <Xena> Person: h
<Person> i
  <Xena> Person:
  <Xena> h
<Person> h
  <Xena> :D

Origins

This all started on a particularly boring day when we found a video by motdef with gameplay from Moonbase Alpha, an otherwise boring game made to help educate people on what would go on when a moonbase has a disaster. This game was played by many people because of its text-to-speech engine, which led to many things like flooding “JOHN MADDEN” or other inane things like that.

Specifically, there was a video called “Moonbase 4lpha: *****y Space Skeletons” that at one point had recorded the phrase “H H H RETURN OF GANON”. A few friends and I were flooding that in an IRC room for a while and it eventually devolved into just flooding “h” to each other. The flooding of “h” lasted over 8 hours (we were really bored) and has evolved into the modern “h” experience we all know and love today.

The IRC Bot

Of course, humans are unreliable. Asking them to do things predictably is probably a misguided idea, so it is best to automate things with machines whenever it is pragmatic to do so. As such, I have created and maintained the following Python code that automates this process. An embarrassing amount of engineering has gone into making sure this function provides the most correct and canonical h experience money can buy.

@hook.regex(r"^([hH])([?!]*)$")
def h(inp, channel=None, conn=None):
    # Echo the h back, inverting any trailing punctuation:
    # "h???" is answered with "h!!!" and vice versa.
    suff = ""
    if inp.group(2).startswith("?"):
        suff = inp.group(2).replace("?", "!")
    elif inp.group(2).startswith("!"):
        suff = inp.group(2).replace("!", "?")
    return inp.group(1) + suff

The code was pulled from here.

Here is an example of it being used:

(Xena) h
   (h) > h
(Xena) h???
   (h) > h!!!
(Xena) h!!!!
   (h) > h????

-- [h] (h@h): h
-- [h] is using a secure connection
-- [h] is a bot
-- [h] is logged in as h

I also ended up porting h to Matrix under the name h2. It currently sits in #ponydevs:matrix.org and has a bad habit of breaking because Comcast is a bad company and doesn’t believe in uptime.

Spread of h

Like any internet meme, it is truly difficult to see how far it has spread with 100% certainty. However I have been keeping track of where and how it has spread, and I can estimate there are at least 50 guardians of the h.

However, its easily teachable nature and very minimal implementation mean that new guardians of the h can be created near instantly. It is a lightweight meme but has persisted for at least 2 years. This means it is part of internet culture now, right?

There has been one person in the Derpibooru IRC channel that is really violently anti-h and has a very humorous way of portraying this. Stop in and idle and you’ll surely see it in action.

Conclusion

I hope this helps clear things up on this very interesting and carefully researched internet meme. I hope to post further updates as things become clear on this topic.


Below, verbatim, is the forum post (it was deleted, then converted to a blog post on his blog) that inspired the writing of this article.

Parcly Taxel

Lately, if you’ve been going up to our Derpibooru IRC channel, you may notice that a significant portion of sayings and rebuttals are countered with the single letter h (lowercase). So where does this come from?

This is a joke started by Xena, one of the administrators of the Ponychat IRC system which the site uses. It came from a video showing gameplay, glitches and general tomfoolery in the simulation game Moonbase Alpha. Starting from 1:32 there is shown a dialogue between two players, one of which makes grandiose comments about how they will “eradicate” everyone else, to which the other simply replies “h” or multiples of it.

Hence when h is spoken in IRC, do know that it’s a shorthand for “yes and I laugh at you”. I do not recommend using it though as it could be confused with hydrogen or UTC+8 (the time zone in which I live).


Coming Out

Permalink - Posted on 2015-12-01 00:00, modified on 0001-01-01 00:00

Coming Out

I’d like to bring up something that has been hanging over my head for a long time. This is something I did try (and fail) to properly express way back in middle school, but now I’d like to get it all off my chest and let you know the truth of the matter.

I don’t feel comfortable with myself as I am right now. I haven’t really felt comfortable with myself for at least 10 years, maybe more; I’m not entirely sure.

At this point in my life I am really faced with a clear fork in the road. I can either choose to continue living how I currently do, lying to myself and others and saying everything is normal, or I can cooperate with the reality that my brain is telling me that I don’t feel comfortable with myself as I have been for the last almost 22 years. I feel like I don’t fit inside my own skin. I think it is overall better for me to face the facts and cooperate with reality. I have been repressing this off and on out of fear of being shot down or not accepted the way I want to be seen to you all. This has been a really hard thing for me to think through and even harder for me to work up the courage to start taking action towards. This is not a choice for me. I need to pursue this.

In fact, I have been pursuing this. My current business cards reflect who I really am. My co-workers accept my abnormal status (when compared to the majority of society), and will even help stand up for me if something goes south with regards to it.

I fully understand how much information this is to take in at once. I know it will be difficult for you to hear that your firstborn son is actually a daughter in a son’s body, but I am still the same person. Most of the changes that I want to pursue are purely cosmetic, but they are a bit more noticeable than changing hair color. I feel that transitioning to living as a woman like this will help me feel like I fit in with the world better and help to make me more comfortable with who I am and how I want other people to see me. Below I have collected some resources for you to look through. They will help you understand my views better, explained in language you would be familiar with.

I have been trialing a lot of possible first names to use, Zoe (the name you were going to give me if I was born a girl) did come to mind, but after meditating on it for a while I have decided that it doesn’t fit me at all. The name I am going with for now and eventually will change my official documents to use is Christine Cadence Dodrill.

Additionally I have been in a long-distance relationship with someone since mid-June 2014. His name is Victor and he lives in Ottawa, Ontario. He has been helping me a lot as I sort through all this; it has been a godsend. He is a student in college for Computer Science. He knows and is aware about my transition and has been a huge part of my emotional line of support as I have been accepting these facts about who I am.


Above is (a snipped version of) the letter I sent to my parents in the last 48 hours. With this I have officially come out to all of my friends and family as transgender. I am currently on hormone replacement therapy and have been living full time as a woman. My workplace is very accepting of this and has been a huge help over the last 7-8 months as I have battled some of my inner demons and decided to make things official.

I am now deprecating my old facebook account and will be encouraging people to send friend requests and the like to my new account under the correct name.

Thank you all for understanding and be well.


The Universal Design

Permalink - Posted on 2015-10-17 00:00, modified on 0001-01-01 00:00

The Universal Design

As I have been digging through existing code, systems and the like, I have been wondering what the next big direction I should go in is. How do I design things such that the mistakes of the past are avoided, while still benefiting from their lessons? I have come to a very simple conclusion: monoliths are too fragile.

Deconstructing Monoliths

One monolith I have been maintaining is Elemental-IRCd. Taking the head of a project I care about has taught me more about software engineering, community/project management and the like than I would have gotten otherwise. One of those lessons is that there need to be five basic primitives in your application:

  1. State - What is true now? What was true? What happened in the past? What is the persistent view of the world?
  2. Events - What is being changed? How will it be routed?
  3. Policy - Can a given event be promoted into a series of actions?
  4. Actions - What is the outcome of the policy?
  5. Mechanism - How should an event be taken in and an action put out?

Let’s go over some basic examples of this theory in action:

Spinning up a Virtual Machine

  • the event is that someone asked to spin up a virtual machine
  • the policy is do they have permission to spin that machine up?
  • the mechanism is an IRC command for some reason
  • the action is that a virtual machine is created
  • the state is changed to reflect that VM creation

Webserver

  • the event is an HTTP request
  • the policy is to do some database work and return the action of showing the HTML to the user
  • the mechanism is nginx sending data to a worker and relaying it back
  • the state is updated for whatever changed

And that’s it. All you need is a command queue feeding into a thread pool which feeds out into a transaction queue which modifies state. And with that you can explain everything from VMWare to Google.

As a fun addition, we can also define nearly all of this as being functionally pure code. The only thing that really needs to be impure are mechanisms and applying actions to the state. Policy handlers should be mostly if not entirely pure, but also may need to access state not implicitly passed to it. The only difference between an event and an action is what they are called.
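
As a toy illustration of that command queue idea, here is a minimal single-threaded sketch in Go. All of the names here are made up for illustration, not taken from any real codebase:

package main

import "fmt"

type Event struct {
	Verb string
	Args []string
}

type Action struct {
	Verb string
	Args []string
}

// A policy is pure: it inspects state and an event and decides on
// actions, but never mutates state itself.
type Policy func(state map[string]bool, ev Event) []Action

func main() {
	state := map[string]bool{} // nicknames currently in use

	var nickPolicy Policy = func(state map[string]bool, ev Event) []Action {
		if state[ev.Args[0]] {
			return []Action{{"ERR_NICKINUSE", ev.Args}}
		}
		return []Action{{"NICKCHANGE", ev.Args}}
	}

	// The command queue; a real server would feed this from sockets.
	events := []Event{
		{"NICK", []string{"Xena"}},
		{"NICK", []string{"Xena"}},
	}

	for _, ev := range events {
		for _, act := range nickPolicy(state, ev) {
			// The mechanism: only this loop may modify state.
			if act.Verb == "NICKCHANGE" {
				state[act.Args[0]] = true
			}
			fmt.Println("action:", act.Verb, act.Args)
		}
	}
}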

Policy

Now, how would a policy handler work? I am going to be explaining this in the context of an IRC daemon as that is what I intend to develop next. Let’s sketch out the low level:

The relevant state is the state of the IRC network. An event is a command from a user or server. A policy is a handler for either a user command or another kind of emitted action from another policy handler.

One of the basic commands in RFC 1459 is the NICK command. A user using it passes the new nickname they want. Nicknames must also be unique.

-- nick-pass-1.lua

local mephiles = require "mephiles"

mephiles.declareEvent("user:NICK", function(state, source, args)
  if #args ~= 1 then
    return {
      {mephiles.failure, {mephiles.pushNumeric, source, mephiles.errCommandBadArgc(1)}}
    }
  end

  local newNick = args[1]

  if state.nicks.get(newNick) then
    return {
      {mephiles.failure, {mephiles.pushNumeric, source, mephiles.errNickInUse(newNick)}}
    }
  end

  if not mephiles.legalNick(newNick) then
    return {
      {mephiles.failure, {mephiles.pushNumeric, source, mephiles.errIllegalNick(newNick)}}
    }
  end

  return {
    {mephiles.success, {"NICKCHANGE", source, newNick}}
  }
end)

This won’t scale as-is, but most of this is pretty straightforward. The policy function returns a series of actions that fall into two buckets: success and failure. Most of the time the success of state changes (nickname change, etc) will be confirmed to the client. However a large amount of the common use (PRIVMSG, etc) will be unreported to the client (yay RFC 1459); but every single time a line from a client fails to process, the client must be notified of that failure.

Something you can do from here is define a big pile of constants and helpers to make this easier:

local actions = require "actions"
local c       = require "c"
local m       = require "mephiles"
local utils   = require "utils"

m.UserCommand("NICK", c.normalFloodLimit, function(state, source, args)
  if #args ~= 1 then
    return actions.failCommand(source, "NICK", c.errCommandBadArgc(1))
  end

  local newNick = args[1]

  if state.findTarget(newNick) then
    return actions.failCommand(source, "NICK", c.errNickInUse(newNick))
  end

  if not utils.legalNick(newNick) then
    return actions.failCommand(source, "NICK", c.errIllegalNick(newNick))
  end

  return {actions.changeNick(source, newNick)}
end)

Thread Safety

This, as-is, is very much not thread-safe. For one, the Lua library can only have one thread interacting with it at a time, so you will need a queue of events feeding into it. The other big problem is that this is prone to race conditions. There are two basic solutions to this:

  1. The core takes a lock on all of the state at once
  2. The policy handlers take a lock on resources as they try to use them and the core automatically releases locks at the end of it running.

The simpler implementation will do for an initial release, but the latter will scale a lot better as more and more users hit the server at the same time. It allows unrelated things to be changed at the same time, which is the majority case for IRC.
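
Here is a minimal Go sketch of the first option, with one lock serializing every event; this is hypothetical glue code, not from any existing IRC daemon:

package main

import (
	"fmt"
	"sync"
)

type Event struct{ Verb string }
type Action struct{ Verb string }
type State struct{ Nicks map[string]bool }
type Policy func(*State, Event) []Action

// Server takes the first, simpler approach: one lock guards all of
// the state while a policy runs and its actions are applied. This is
// correct, but unrelated changes cannot proceed in parallel.
type Server struct {
	mu    sync.Mutex
	state State
}

func (s *Server) Handle(ev Event, p Policy) {
	s.mu.Lock()
	defer s.mu.Unlock()

	for _, act := range p(&s.state, ev) {
		fmt.Println("applying:", act.Verb) // state mutation elided
	}
}

func main() {
	s := &Server{state: State{Nicks: map[string]bool{}}}
	s.Handle(Event{Verb: "NICK"}, func(st *State, ev Event) []Action {
		return []Action{{Verb: "NICKCHANGE"}}
	})
}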

In the future, federation of servers can be trivialized by passing the actions from one server to another if it is needed, and by implicitly trusting the actions of a remote server.


This design will also scale to running across multiple servers, and in general to any kind of computer, business or industry problem.

What if this was applied to the CPU and a computer in general at a low level? How would things be different?

Urbit

Over the past few weeks I have been off and on dipping my toes into Urbit. They call Urbit an “operating function” and define it as such:

V(I) => T

where T is the state, V is the fixed function, and I is the list of input events from first to last.

Urbit at a low level takes inputs, applies them to a function and returns the state of the computer. Sound familiar?

~hidduc-posmeg has been putting together a set of tutorials^* to learn Hoon, its higher-level lisp-like language. At the end of the first one, they say something that I think is also very relevant to this systems programming ideal:

All Hoon computation takes [the] same general form. A subject with a formula that transforms that subject in some way to produce a product which is then used as the subject for some other formula. In our next tutorial we’ll look at some of the things we can do to our subject.

Subjects applied to formulae become results that are later applied to formulae as subjects. Events applied to policy emit actions which later become events for other policies to emit actions.

Because of this design, you can easily do live code reloading, because there is literally no reason you can’t. Wait for a formula to finish and replace it with the new version, provided it compiles. Why not apply this to the above ideas too?


* Link here: http://hidduc-posmeg.urbit.org/home/pub/hoon-intro/ (as of publishing this revision of the article, hidduc’s urbit is offline, so the tutorials cannot be accessed at the moment). If that link fails, the source code for it is apparently here. Thanks mst on Freenode!

For comments on this article, please feel free to email me, poke me in #geek on irc.ponychat.net (my nick is Xena, on freenode it is Xe), or leave thoughts at one of the places this article has been posted.


Metaprogramming: Partial Application...

Permalink - Posted on 2015-08-26 00:00, modified on 0001-01-01 00:00

Metaprogramming: Partial Application and Currying 101

The title of this post looks intimidating. There’s a lot of words there that look like they are very complicated and will take a long time to master. In reality, they are really very simple things. Let’s start with a mundane example and work our way up to a real-world bit of code. Let’s begin with a small story:


ACMECorp has a world-renowned Python application named Itera that is known for its superb handling of basic mathematic functions. It’s so well known and industry proven that it is used in every school and on every home computer. You have just accepted a job there as an intermediate programmer set to do maintenance on it. Naturally, you are very excited to peek under the hood of this mysterious and powerful program and offer your input to make it even better for the next release and its users.

Upon getting there, you settle in and look at your ticket queue for the day. A user is complaining that whenever they add 3 and 5, they get 7 instead of 8, which is what they expected. Your first step is to go look into the add3 function and see what it does:

def add1(x):
    return x + 1

def add2(x):
    return x + 2

def add3(x):
    return x + 2

def add4(x):
    return x + 4

You are aghast. Your company’s multi-billion dollar calculator is brought to its knees by a simple copy-paste error. You wonder, “how in Sam Hill are these people making any money???” (The answer, of course, is that they are a big enterprise corporation)

You let your boss know about the bad news, and you are immediately given any resource in the company that you need to get this mission-critical problem solved for any input. Yesterday. Without breaking the API that the rest of the program has hard-coded in.


Let’s look at what is common about all these functions. The add* family of functions seems to all be doing one thing consistently: adding one number to another.

Let’s define a function called add that adds any two numbers:

def add(x, y):
    return x + y

This is nice, but it won’t work for the task we were given, which is to not break the API.

Let’s go over what a function is in Python. We can define a function as something that takes some set of Python values and produces some set of Python values:

PythonFunction :: [PythonValue] -> [PythonValue]

We can read this as “a Python function takes a set of Python values and produces a set of Python values”. Now we need to define what a Python value actually is. To keep things simple, we’re only going to define the following types of values:

  • None -> no value
  • Int -> any whole number (Python calls this int)
  • Text -> any string value (Python calls this str)
  • Function -> something that takes and produces values

Python itself has a lot more types that any value can be, but for the scope of this blog post, this will do just fine.

Now, since a function can return a value and a function is a value, let’s see what happens if you return a function:

def outer():
    def inner():
        return "Hello!"
    return inner

And in the repl:

>>> type(outer)
<type 'function'>

So outer is a function as we expect. It takes None (in Python, a function without arguments effectively takes None) and returns a function that takes None and returns Text containing "Hello!". Let’s make sure of this:

>>> outer()()
'Hello!'
>>> type(outer()())
<type 'str'>

Yay! When nothing is applied to the result of applying nothing to outer, it returns the Text value "Hello!". We can define the type of outer as the following:

outer :: None -> None -> Text

Now, let’s use this for addition:

# add :: Int -> Int -> Int
def add(x):
    def inner(y):
        return x + y

    return inner

And in the repl:

>>> add(4)(5)
9

A cool feature about this is that now we can dip into something called Partial Application. Partial application lets you apply part of the arguments of a function and you get another function out of it. Let’s trace the type of the inner function inside the add function, as well as the final computation for clarity:

# add :: Int -> Int -> Int
def add(x):
    # inner :: Int -> Int
    def inner(y):
        return x + y # :: Int

    return inner

Starting from the inside, we can see how the core computation here is x + y, which returns an Int. Then we can see that y is passed in and in the scope also as an Int. Then we can also see that x is passed in the outermost layer as an int, giving it the type Int -> Int -> Int. Since inner is a value, and a Python variable can contain any Python value, let’s make a function called increment using the add function:

# increment :: Int -> Int
increment = add(1)

And in the repl:

>>> increment(50)
51

increment takes the integer given and increases it by 1. Calling increment(50) is the same thing as defining:

def increment50():
    return 51

Or even writing 51 directly.

Now, let’s see how we can use this for the add* family of function mentioned above:

# add :: Int -> Int -> Int
def add(x):
    def inner(y):
        return x + y

    return inner

# add1 :: Int -> Int
add1 = add(1)

# add2 :: Int -> Int
add2 = add(2)

# add3 :: Int -> Int
add3 = add(3)

# add4 :: Int -> Int
add4 = add(4)

And all we need to do from here is a few simple tests to prove it will work:

if __name__ == "__main__":
    assert add(1)(1) == 2 # 1 + 1
    assert add(1)(2) == add(2)(1) # 1+2 == 2+1
    print("all tests passed")
$ python addn.py
all tests passed

Bam. The add* family of functions is now a set of partial applications. It is just a set of half-filled out forms.


You easily and mechanically rewrite all of the add* family of functions to use the metaprogramming style you just learned. Your patch goes in for consideration to the code review team. Meanwhile, your teammates are frantically going through every function in the 200,000 line file that defines the add* family of functions. They estimate months of fixes are needed, not to mention millions of lines of test code. They are also estimating an additional budget of contractors being brought in to speed all this up. Your code has made all of this unneeded.

Your single commit was one of the biggest in company history. Billboards that were red are now beaming a bright green. Your code fixed 5,000 other copy-paste errors that have existed in the product for years. You immediately get a raise and live happily ever after, a master in your craft.


For fun, let’s rewrite the add function in Haskell.

add :: Int -> Int -> Int
add x y = x + y

And then we can create a partial application with only:

add1 :: Int -> Int
add1 = (add 1)

And use it in the repl:

Prelude> add1 3
4

Experienced haskellers would probably gawk at this. Because functions are the base data type in Haskell, and partial application means that you can make functions out of functions, we can define add as literally the addition operator (+):

add :: Int -> Int -> Int
add = (+)

And because operators are just functions, we can further simplify the add1 function by partially applying the addition operation:

add1 :: Int -> Int
add1 = (+1)

And that will give us the same thing.

Prelude> let add1 = (+1)
Prelude> add1 3
4

Now, real world example time. I recently wrote a simple JSON api based off of a lot of data that has been marginally useful to some people. This api has a series of HTTP endpoints that return data about My Little Pony: Friendship is Magic episodes. Its code is here and its endpoint is http://ponyapi.apps.xeserv.us.

One of the challenges when implementing it was avoiding a massive amount of copy-pasted code. I had started with a bunch of functions like:

# all_episodes :: IO [Episode]
def all_episodes():
    r = requests.get(API_ENDPOINT + "/all")

    if r.status_code != 200:
        raise Exception("Not found or server error")

    return r.json()["episodes"]

Which was great and all, but there was so much code duplication involved to just get one result for all the endpoints. My first step was to write something that just automated the getting of json from an endpoint in the same way I automated addition above:

# _base_get :: Text -> None -> IO (Either Episode [Episode])
def _base_get(endpoint):
    def doer():
        r = requests.get(API_ENDPOINT + endpoint)

        if r.status_code != 200:
            raise Exception("Not found or server error")

        try:
            return r.json()["episodes"]
        except:
            return r.json()["episode"]

    return doer

# all_episodes :: IO [Episode]
all_episodes = _base_get("/all")

Where _base_get returned the function that satisfied the request.

This didn’t end up working so well with the endpoints that take parameters, so I had to account for that in my code:

# _base_get :: Text -> Maybe [Text] -> (Maybe [Text] -> IO (Either Episode [Episode]))
# _base_get takes a text, a splatted list of texts and returns a function such that
#     the function takes a splatted list of texts and returns either an Episode or
#     a list of Episode as an IO action.
def _base_get(endpoint, *fragments):
    def doer(*args):
        r = None

        assert len(fragments) == len(args)

        if len(fragments) == 0:
            r = requests.get(API_ENDPOINT + endpoint)
        else:
            url = API_ENDPOINT + endpoint

            for i in range(len(fragments)):
                url = url + "/" + fragments[i] + "/" + str(args[i])

            r = requests.get(url)

        if r.status_code != 200:
            raise Exception("Not found or server error")

        try:
            return r.json()["episodes"]
        except:
            return r.json()["episode"]

    return doer

# all_episodes :: IO [Episode]
all_episodes = _base_get("/all")

# newest :: IO Episode
newest = _base_get("/newest")

# last_aired :: IO Episode
last_aired = _base_get("/last_aired")

# random :: IO Episode
random = _base_get("/random")

# get_season :: Int -> IO [Episode]
get_season = _base_get("", "season")

# get_episode :: Int -> Int -> IO Episode
get_episode = _base_get("", "season", "episode")

And that was it, save the /search route, which was acceptable to implement by hand:

# search :: Text -> IO [Episode]
def search(query):
    params = {"q": query}
    r = requests.get(API_ENDPOINT + "/search", params=params)

    if r.status_code != 200:
        raise Exception("Not found or server error")

    return r.json()["episodes"]

Months later you have been promoted as high as you can go. You’ve been teaching the other engineers at ACMECorp metaprogramming and even convinced management to let the next big project be in Haskell.

You are set for life. You have won.


For comments on this article, please feel free to email me, poke me in #geek on irc.ponychat.net (my nick is Xena), or leave thoughts at one of the below places this article has been posted.



Nim and Tup

Permalink - Posted on 2015-06-10 00:00, modified on 0001-01-01 00:00

Nim and Tup

I have recently been playing with and using a new language for my personal development: Nim. It looks like Python, runs like C and integrates well into other things. Its compiler targets C, and as a result binding things to C libraries is a lot more trivial in Nim; even more so than with Go.

For example, here is a program that links to the posix crypt(3) function:

# crypt.nim
import posix

{.passL: "-lcrypt".}

echo "What would you like to encrypt? "
var password: string = readLine stdin
echo "What is the salt? "
var salt: string = readLine stdin

echo "result: " & $crypt(password, salt)

And an example usage:

xena@fluttershy (linux) ~/code/nim/crypt
➜  ./crypt
What would you like to encrypt?
foo
What is the salt?
rs
result: rsHt73tkfd0Rg

And that’s it. No having to worry about deferring to free the C string, no extra wrappers (like with Python or Lua): you just write the code and it just works.

At the suggestion of a coworker, I’ve also started to use tup for building things. Nim didn’t initially work very well with tup (temporary cache needed, etc), but a very simple set of tup rules was able to fix that:

NIMFLAGS += --nimcache:".nimcache"
NIMFLAGS += --deadcodeElim:on
NIMFLAGS += -d:release
NIMFLAGS += -d:ssl
NIMFLAGS += -d:threads
NIMFLAGS += --verbosity:0

!nim = |> nim c $(NIMFLAGS) -o:%o %f && rm -rf .nimcache |>

This creates a tup !-macro called !nim that will Do The Right Thing implicitly. Usage of this is simple:

.gitignore
include_rules

: crypt.nim |> !nim |> ../bin/crypt
xena@fluttershy (linux) ~/code/nim/crypt
➜  tup
[ tup ] [0.000s] Scanning filesystem...
[ tup ] [0.130s] Reading in new environment variables...
[ tup ] [0.130s] No Tupfiles to parse.
[ tup ] [0.130s] No files to delete.
[ tup ] [0.130s] Executing Commands...
 1) [0.581s] nim c --nimcache:".nimcache" --deadcodeElim:on --verbosity:0 crypt.nim && rm -rf .nimcache
 [ ] 100%
[ tup ] [0.848s] Updated.

Not only will this build the program if needed, it will also generate a gitignore for all generated files. This is an amazing thing. tup has a lot more features (including lua support for scripting complicated build logic), but there is one powerful feature of tup that makes it very difficult for me to work into my deployment pipelines.

tup requires FUSE to ensure that no extra things are being depended on for builds. Docker doesn’t let you use FUSE mounts in the build process.

I have a few ideas on how to work around this, and am thinking about tackling them when I get Nim programs built inside Rocket images.


Trying Vagga on For Size

Permalink - Posted on 2015-03-21 00:00, modified on 0001-01-01 00:00

Trying Vagga on For Size

Vagga is a containerization tool like Docker, Rocket, etc but with one major goal that is highly ambitious and really worth mentioning. Its goal is to be a single userspace binary without a suid bit or a daemon running as root.

However, the way it does this seems to be highly opinionated and there are some things which annoy me. Let’s go over the basics:

All Vagga Images Are Local To The Project

There is no “global vagga cache”. Every time I want to make a new project folder with an ubuntu image I have to wait the ~15 minutes it takes for Ubuntu to download on my connection (Comcast). As such I’ve been forced to use Alpine.

No Easy Way To Establish Inheritance From Common Code

With Docker I can create an image xena/lapis and have it contain all of the stuff needed for Lapis applications to run. With Vagga I currently have to constantly reinvent the setup for this or risk copying and pasting code everywhere.

Multiple Containers Can Be Defined In The Same File

This is a huge plus. The way this all is defined is much more sane than Fig or Docker compose. It’s effortless where the Docker workflow was kinda painful. However this is a bittersweet advantage as:

Vagga Containers Use The Same Network Stack As The Host

Arguably this is because you need root permissions to do things like that with the IP stack in a new namespace, but really? It’s just inconvenient to have to wrap Vagga containers in Docker or the like just to be able to run things without the containers using up TCP ports on the host.

http://vagga.readthedocs.org/en/latest/network.html is interesting.

Overall, Vagga looks very interesting and I’d like to see how it turns out.


Interesting Links


CinemaQuestria Orchestration

Permalink - Posted on 2015-03-13 00:00, modified on 0001-01-01 00:00

CinemaQuestria Orchestration

Or: Continuous Defenestration in a Container-based Ecosystem

I’ve been a core member of the staff for CinemaQuestria for many months. In that time we have gone from shared hosting (updated by hand with FTP) to a git-based deployment system that has won over the other staffers.

In this blogpost I’m going to take a look at what it was, what it is, and what it will be as well as some challenges that have been faced or will be faced as things advance into the future.

The Past

The site for CinemaQuestria is mostly static HTML. This was chosen mainly because it made the most sense for the previous shared hosting environment as it was the least surprising to set up and test.

The live site content is about 50 MB of data including PDF transcripts of previous podcast episodes and for a long time was a Good Enough solution that we saw no need to replace it.

However, being on shared hosting meant that there was only one set of authentication credentials, which had to be shared amongst ourselves. This made sense while we were small, but as we started to grow it didn’t make much sense. Combined with the fact that the copy of the site on the live server was pretty much the only copy of the site, we also had essentially no disaster recovery.

Needless to say, I started researching into better solutions for this.

The first solution I took a look at was AWS S3. It would let us host the CQ site for about 0 dollars per month. On paper this looked amazing, until we tried it and everyone was getting huge permissions issues. The only way to have fixed this would have been to have everyone use the same username/password or to have only one person do the deploys. In terms of reducing the bus factor of the site’s staff, this was also unacceptable.

I had done a lot of work with Dokku-alt for hosting my personal things (this site is one of many hosted on this server), so I decided to give it a try with us.

The Present

Presently the CQ website is hosted on a Dokku-alt server inside a container. For a while, while I was working on getting the warts out, only I had access to deploy code to the server, but early on I set up a private repo on my git server for us to be able to track changes.

Once the other staffers realized the enormous amount of flexibility being on git gave us they loved it. From the comments I received the things they liked the most were:

  • Accountability for who made what change
  • The ability to rollback changes if need be
  • Everyone being able to have an entire copy of the site and its history

After the warts were worked out I gave the relevant people access to the dokku server in the right way and the productivity has skyrocketed. Not only have people loved how simple it is to push out new changes but they love how consistent it is and the brutal simplicity of it.

Mind you these are not all super-technically gifted people, but the command line git client was good enough that not only were they able to commit and make changes to the site, but they also took initiative and corrected things they messed up and made sure things were consistent and correct.

When I saw those commits in the news feed, I almost started crying tears of happy.

Nowadays our site is hosted inside a simple nginx container. In fact, I’ll even paste the entire Dockerfile for the site below:

FROM nginx

COPY . /usr/share/nginx/html

That’s it. When someone pushes a new change to the server it figures out everything from just those two lines of code.

Of course, this isn’t to say this system is completely free of warts. I’d love to someday be able to notify the backrooms on skype every time a push to the live server is made, but that might be for another day.

The Future

In terms of future expansion I am split mentally. On one hand, the existing static HTML is hysterically fast and efficient on the server, meaning that anything such as a Go binary, Lua/Lapis environment or other web application framework would have a very high bar to clear.

I have looked into using Lapis for this beta test site, but the fact that HTML is so dead easy to modify made that idea lose out.

Maybe this is in the realm of something like jekyll, Hugo or sw to take care of. I’d need to do more research into this when I have the time.

If you look at the website code currently, a lot of it is heavily duplicated because the shared hosting version used to use Apache server-side includes. I think a good place to apply these would be in the build step in the future, maybe with a nice husking operation on build.


Anyways, I hope this was interesting and a look into a side of CinemaQuestria that most of you haven’t seen before. The Season 5 premiere is coming up soon and this poor server is going to get hammered like nothing else, so that will be a nice functional test of Dokku-alt in a production setting.


The Saga of plt, Part 2

Permalink - Posted on 2015-02-14 00:00, modified on 0001-01-01 00:00

The Saga of plt, Part 2

So I ended with a strong line of wisdom from plt last time: what if the authors that wrote free PGP did not release their source code? A nice rehash of the Clipper Chip, anyone?

2015-01-25
[00:06:15] <Xe> but they did release their code
[00:06:40] <plt> I saw a few that did not release their source code.
[00:07:09] <plt> Its up to the author if they want to release it under the U.S Copyright Laws.
[00:08:50] <plt> http://copyright.gov/title17/circ92.pdf

Note that this is one of the few external links plt will give that actually works. A lot of this belief in copyright and the like seems to further some kind of delusional system involving everyone being out to steal his code and profit off of it.

Please don’t pay this person.

[00:57:18] <plt> The ircd follows the Internet Relay Protocols
[00:57:35] <Xe> which RFC's?
[00:57:43] <plt> Yep
[00:58:01] <plt> Accept for the IRCD Link works a little bit different.
[00:58:57] <plt> Version 2.0 or 3.0 will include it's own IRC Services that will work with PBIRCD.
[01:01:53] <plt> Later version will include open proxy daemon
[01:02:34] <plt> Version 1.00 will allow the ircd owner to define the irc command security levels which is a lot different from the other ircds.
[01:04:27] <plt> Xe that is the file /Conf/cmdlevs.conf.& the /Conf/userlevs.conf
[01:05:24] <plt> Adding a option for spam filtering may be included in the future version of PBIRCD.
[01:07:03] <plt> Xe PBIRCD will have not functions added to allow the operators to spy on the users.

Oh lord. Something you might notice quickly is that plt has no internal filter nor the ability to keep to one topic for very long. Note also that plt has some strange belief that folder names Should Start With Capital Letters, and that apparently all configuration should be:

  • split into multiple files
  • put into the root of the drive

Also note that last line. Note it in bold.

Some time passed with no activity in the channel.

[18:50:49] <plt> Hey Xe
[18:51:06] <Xe> hi
[18:58:54] <plt> How did you like the information that I showed you yesterday?
[19:02:56] <Xe> it's useless to me
[19:03:03] <Xe> I don't run on a standard linux setup
[19:03:15] <Xe> I need source code to evaluate things
[19:03:17] <Xe> :P

When I am running unknown code, I use a virtual machine running Alpine Linux. I literally do need the source code to be able to run binaries as Alpine doesn’t use glibc.

[19:04:24] <plt> It's the standard irc commands and I am still working on
adding some more features.
[19:04:38] <Xe> what language is it in?
[19:04:48] <Xe> how does it handle an accept() flood?
[19:09:17] <plt> Are you refering to accept() flood while connecting to the ircd or a channel?
[19:20:42] <plt> You can not compare some of the computer languages with C since some of they run at the same speed as C. Maybe some of them where a lot slower but in some cases that is not the same today!

These are some very simple questions I ask when evaluating a language or tool for use in a project like an IRC server. How does it handle when people are punishing it? So the obvious answer is to reply that some languages are comparable to C in terms of execution speed!

How did I not see that before?

[19:26:05] <Xe> what language is it?
[19:27:23] <plt> Purebasic [...]

I took a look at the site for PureBasic. It looks like Visual Basic’s proprietary cousin as written by someone who hates programmers. Looking at its feature set:

  • Huge set of internal commands (1400+) to quickly and easily build any application or game
  • All BASIC keywords are supported
  • Very fast compiler which creates highly optimized executables
  • No external DLLs, runtime interpreter or anything else required when creating executables
  • Procedure support for structured programming with local and global variables
  • Access to full OS API for advanced programmers
  • Advanced features such as pointers, structures, procedures, dynamically linked lists and much more

If you try to do everything, you will end up doing none of it. So it looks like PureBasic is supposed to be a compiler for people who can’t learn Go, Ruby, Python, C, or Java. This looks promising.

I’m just going to paste the code for the 99 bottles of beer example. It requires OOP. I got this from Rosetta Code.

Prototype Wall_Action(*Self, Number.i)

Structure WallClass
  Inventory.i
  AddBottle.Wall_Action
  DrinkAndSing.Wall_Action
EndStructure

Procedure.s _B(n, Short=#False)
  Select n
    Case 0 : result$="No more bottles "
    Case 1 : result$=Str(n)+" bottle of beer"
    Default: result$=Str(n)+" bottles of beer"
  EndSelect
  If Not Short: result$+" on the wall": EndIf
  ProcedureReturn result$+#CRLF$
EndProcedure

Procedure PrintBottles(*Self.WallClass, n)
  Bottles$=" bottles of beer "
  Bottle$ =" bottle of beer "
  txt$ = _B(*Self\Inventory)
  txt$ + _B(*Self\Inventory, #True)
  txt$ + "Take one down, pass it around"+#CRLF$
  *Self\AddBottle(*Self, -1)
  txt$ + _B(*self\Inventory)
  PrintN(txt$)
  ProcedureReturn *Self\Inventory
EndProcedure

Procedure AddBottle(*Self.WallClass, n)
  i=*Self\Inventory+n
  If i>=0
    *Self\Inventory=i
  EndIf
EndProcedure

Procedure InitClass()
  *class.WallClass=AllocateMemory(SizeOf(WallClass))
  If *class
    InitializeStructure(*class, WallClass)
    With *class
      \AddBottle    =@AddBottle()
      \DrinkAndSing =@PrintBottles()
    EndWith
  EndIf
  ProcedureReturn *class
EndProcedure

If OpenConsole()
  *MyWall.WallClass=InitClass()
  If *MyWall
    *MyWall\AddBottle(*MyWall, 99)
    While *MyWall\DrinkAndSing(*MyWall, #True): Wend
    ;
    PrintN(#CRLF$+#CRLF$+"Press ENTER to exit"):Input()
    CloseConsole()
  EndIf
EndIf

We are dealing with a professional language here, folks. The evaluation version of the compiler didn’t let me compile binaries, and I’m not going to pay $120 for a copy of it.

[19:27:23] <plt> Purebasic it does not make one bit of difference since it runs at the same speed as c
[19:27:44] <plt> The compiler was writting in asm.
[19:28:02] <Xe> pfffft
[19:28:04] <Xe> lol
[19:28:20] <Xe> I thought you would at least have used VB6
[19:28:37] <plt> VB6 is so old dude.

At least there is some sense there.

[19:28:44] <Xe> so is purebasic
[19:28:54] <plt> You can not compare purebasic with the other basic compilers.
[19:29:51] <Xe> yes I can
[19:29:56] <Xe> seeing as you post no code
[19:29:59] <Xe> I can and I will
[19:30:16] <plt> Makes no logic what you said.
[19:30:24] <Xe> I'm saying prove it
[19:31:18] <plt> I am not going to give out the source code because of the encryption and no one has any reason to use it to decrypt the other irc networks passwords or traffic.
[19:31:40] <Xe> so you've intentionally backdoored it to allow you to have access?
[19:32:00] <plt> I dn not trust anyone any more.
[19:32:29] <plt> Not after the nsa crap going on.
[19:32:50] <Xe> so, in order to prove you don't trust anyone
[19:33:06] <Xe> you've intentionally backdoored the communications server you've created and intend to sell to people?
[19:33:37] <Xe> also
[19:33:45] <Xe> purebasic is semantically similar to vb
[19:34:06] <plt> There is no backdoors included in the source code. A course if a user gets a virus or hacked that is not going to be my fault.


This Site's Tech Stack

Permalink - Posted on 2015-02-14 00:00, modified on 0001-01-01 00:00

This Site’s Tech Stack

Note: this is out of date as this site now uses PureScript and Go.

As some of my close friends can vouch, I am known for sometimes setting up and using seemingly bizarre tech stacks for my personal sites. As such, I thought it would be interesting to walk through the stack I made for this one.

The Major Players

Markdown

This is a Markdown file that gets rendered to HTML and sent to you via the Lua discount library. As I couldn’t get the vanilla version from LuaRocks to work, I use Debian’s version.

I like Markdown for things like this as it is not only simple, but easy for people to read, even if they don’t know Markdown or haven’t worked with any document system other than Office and other WYSIWYG document processors.

Lapis

Lapis is the framework that sits between Lua and Nginx (by way of OpenResty) and allows me to write pages simply. Here is some of the code that powers this page:

-- controllers/blog.moon
-- util and oleg are site-local helper modules; oleg.cache memoizes the
-- rendered HTML in OlegDB.
class Blog extends lapis.Application
  ["blog.post": "/blog/:name"]: =>
    @name = util.slugify @params.name
    @doc = oleg.cache "blogposts", @name, ->
      local data
      with io.open "blog/#{@name}.markdown", "r"
        data = \read "*a"

      -- render the markdown with a table of contents and autolinked URLs
      discount data, "toc", "nopants", "autolink"

    -- the post title is the first line of the markdown file
    with io.open "blog/#{@name}.markdown", "r"
      @title = \read "*l"

    -- returning render: true renders the view that matches the route name
    render: true

And the view behind this page:

-- views/blog/post.moon
import Widget from require "lapis.html"
class Post extends Widget
  content: =>
    raw @doc

That’s it. That even includes the extra overhead of caching the rendered Markdown as HTML in a key->value store called OlegDB (more detail on that below). With Lapis I can code faster and be much more expressive with a lot less code. I get the syntactic beauty that is Moonscript with the speed and raw power of LuaJIT on top of Nginx.

OlegDB

OlegDB is a joke about mayonnaise that has gone too far. It has turned into a full-fledged key->value store, and I think it is lovely.

Container Abuse

I have OlegDB running as an in-container service. This means that OlegDB does hold some state, but only for things that are (in my eyes) worth maintaining the state of. Having a cache server right there to speed things up is a brilliant abuse of the fact that I run a container that lets me do that. I have Oleg hold the very HTML you are reading right now! When a markdown file is rendered for the first time the result is cached into Oleg, and everyone after the first reader gets the cached version. I do the same thing in a lot of places in the codebase for this site.


I hope this look into my blog’s tech stack was interesting!


The Saga of plt, Part 1

Permalink - Posted on 2015-02-14 00:00, modified on 0001-01-01 00:00

The Saga of plt, Part 1

The following is adapted from a real story. Parts of it are changed to keep it entertaining to read, but the core of the story is maintained. I apologize that this issue in the epic will be shorter than the others, but it gets better.

The Beginning of The Interesting Pain

It all started when I got this seemingly innocuous PM on Freenode:

2015-01-23 [18:32:48] <plt> Hello. I am writting a new ircd and can I have the channel ##ircd please?

This is a fairly common event on larger IRC networks, especially given how short the channel name is and the fact that it references IRC daemons specifically. At this point I had forgotten I owned that channel. So naturally I decided to join it and see if the person who requested the channel was worthy of it, or had brought enough activity to it that it was morally correct to hand it off.

This was not the case.

[18:33:54] *** Joins: Xe (xe@unaffiliated/xe)
[18:34:02] <plt> Hello xe.
[18:35:17] <plt> Xe the project name pbircd.
[18:37:09] <plt> Xe the project site is http://sourceforge.net/p/pbircd

In case the site is ever removed from SourceForge: it is the default SourceForge page.

After taking a look at this, and after getting off the call with my family that I was on at the time, I decided to reply.

[20:30:49] <Xe> plt: I've decided against giving you my channel
[20:31:03] <Xe> you have no code in your repo.
[20:31:31] <plt> I am currently working on the project. Can I help you in the channel?
[20:32:04] <Xe> if you are working on it
[20:32:11] <Xe> I'd expect to see at least something
[20:32:25] <Xe> for example: https://github.com/Xe/scylla
[20:32:35] <Xe> that's mostly autogenerated code and makefiles, but it's something
[20:33:31] <plt> Take a look at this http://pastebin.com/F8MH3fSs
[20:34:04] <plt> You know it takes a while to write ircd code.
[20:34:16] <Xe> I don't see any commits
[20:34:20] <Xe> not even framework code
[20:34:24] <Xe> or design
[20:34:26] <Xe> or an outline
[20:34:30] <Xe> all I see is that pastebin
[20:34:39] <Xe> which is in no way connected to that git repo
[20:35:07] <plt> I am still adding more features so its not going to be posted on the main web site yet.

The contents of the pastebin looked like a changelog, but that pastebin has since expired or been explicitly deleted. He was all talk and no game. I admit that at this point I was pretty tired and frustrated, so I told him off:

[20:35:19] <Xe> fucking commit it then
[20:35:52] <plt> I was going to wait until the code was completed.
[20:36:43] <Xe> yeah good lick then
[20:36:45] <Xe> luck*
[20:37:14] <plt> Itgoing to get done and I am the only one working on the project so what do you expect?
[20:37:29] <Xe> to be able to look at the in-progress code?
[20:39:24] <plt> The code will do you no good because you will not be able to compile it.
[20:39:51] <Xe> then you have nothing
[20:40:06] <plt> I am not required to approve it.
[20:41:08] <plt> I can post the run program on the web site.
[20:42:33] <Xe> then do that
[20:43:28] <plt> Done.

The “run program” was nothing but a wrapper around the nonexistent binary for pbircd, and it seemed to be compiled by a language that doesn’t respect standard calling conventions; all of the forms of reverse engineering I know how to do were useless. If you know a better way to do RE on arbitrary binaries, please let me know.

[20:44:12] <Xe> there are binaries
[20:44:15] <Xe> not source code
[20:44:25] <Xe> this is what you use git for
[20:44:35] <plt> The source code will do you no good since you can not compile it.
[20:52:02] <plt> In order for you to compile it you need the encryption program and I am not going to release the source code.
[20:54:43] <Xe> lol
[20:55:34] <plt> The program is freeware and I have no obligation to release the code under the License agreement.
[21:00:56] <Xe> you also will get no users
[21:03:13] <plt> The company that wrote Conferenceroom has a lot of customers.

ConferenceRoom was a commercial IRC daemon; the company behind it has since lost to Slack and other forms of chat like HipChat. Note here that he says “you can not compile it”. This is true in more ways than you would think. He also claims it is freeware and not full-fledged open source software. As someone who is slightly proactive and paranoid after the Snowden bullshit, I find this highly suspect. However, this “encryption program” was the thing I was oddly most interested in.

2015-01-24
[12:11:14] <plt> Xe why do you always demand to see the source code?

Curiosity? To learn from other people’s ideas? To challenge myself in understanding another way of thinking about things? To be able to improve it for others to learn from? Those seem like good reasons to me.

[22:46:33] <plt> PBIRCD is a irc daemon.
[22:46:36] <plt> Hello xe

The PB in that name will become apparent later.

[23:09:31] <plt> Would you like to see what I have in the updates?
[23:09:40] <Xe> sure
[23:09:47] <plt> http://pastebin.com/2udHPSyP
[23:13:10] <plt> Tell me what you think about it?
[23:16:32] <plt> I need to take a short break.

Again, the paste is dead (I should really be saving these gems), but it was another set of what appeared to be patch notes.

[23:22:37] <plt> Do you like what I have in the notes?
[23:23:49] <Xe> I still think it's ridiculous  that you don't have the balls to release your code
[23:24:36] <plt> I understand what you telling me.
[23:25:48] <plt> There is no way to working around protecting the encrypted information.
[23:34:19] <plt> Why are you do want to see the code?
[23:43:36] <plt> Xe The encryption is used to encrypt the Operators, Link and the other passwords.

This sounds suspect. Any sane system for storing passwords like this would use a mathematical one-way function. If revealing the code would let people decrypt those passwords, is this a two-way function?
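
For contrast, here is a minimal sketch of what sane one-way password storage looks like in Go, using the golang.org/x/crypto/bcrypt package (this snippet is mine, not plt’s). The point is that you can verify a password without ever being able to recover it:

package main

import (
    "fmt"

    "golang.org/x/crypto/bcrypt"
)

func main() {
    // Hashing is one-way: nothing can recover "hunter2" from the hash.
    hash, err := bcrypt.GenerateFromPassword([]byte("hunter2"), bcrypt.DefaultCost)
    if err != nil {
        panic(err)
    }

    // Verification hashes the candidate password and compares; the
    // stored hash never needs to be decrypted.
    if bcrypt.CompareHashAndPassword(hash, []byte("hunter2")) == nil {
        fmt.Println("password accepted")
    }
}

If a server can show you the original password again, it is storing something reversible, which is exactly the property you do not want here.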

2015-01-25
[00:05:55] <plt> Xe Question if the authors that wrote free pgp do not release their source code then why should I have do


Getting Started with Go

Permalink - Posted on 2015-01-28 00:00, modified on 0001-01-01 00:00

Getting Started with Go

Go is an exciting language made by Google for systems programming. This article will help you get up and running with the Go compiler tools.

System Setup

First you need to install the compilers.

$ sudo apt-get install golang golang-go.tools

golang-go.tools contains some useful tools that aren’t part of the standard Go distribution.

Shell Setup

Create a folder in your home directory for your Go code to live in. I use ~/go.

$ mkdir -p ~/go/{bin,pkg,src}

bin contains binaries created by go get or go install. pkg contains compiled static (.a) versions of Go packages that are not programs. src contains Go source code.

After you create this, add the following to your zsh config:

export GOPATH=$HOME/go
export PATH=$PATH:/usr/lib/go/bin:$GOPATH/bin

This will add the Go toolchain to your $PATH, as well as any programs you install.

Reload your shell config (I use a resource command for this) and then run:

$ go env
GOARCH="amd64"
GOBIN=""
GOCHAR="6"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/xena/go"
GORACE=""
GOROOT="/usr/lib/go"
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
TERM="dumb"
CC="gcc"
GOGCCFLAGS="-g -O2 -fPIC -m64 -pthread"
CXX="g++"
CGO_ENABLED="1"

This verifies that the Go toolchain knows where the compilers live, as well as where your $GOPATH is.

Testing

To test the go compilers with a simple todo command, run this:

$ go get github.com/mattn/todo
$ todo add foo
$ todo list
☐ 001: foo

Vim Setup

For Vim integration, I suggest using the vim-go plugin. It grew out of the Vim plugin that used to ship with the standard Go distribution.

To install:

  1. Add Plugin 'fatih/vim-go' to the plugins part of your vimrc.
  2. Run these commands:
$ vim +PluginInstall +qall
$ vim +GoInstallBinaries +qall

This will install the Go oracle and the autocompletion daemon gocode, as well as some other useful tools that integrate seamlessly into Vim. It will also run gofmt on save so that your code stays styled the standard way.

Resources

Effective Go and the language spec provide a nice overview of the syntax.

The Go blog contains a lot of detailed articles covering advanced and simple Go topics. This page has a list of past articles that you may find useful.

The Go standard library is a fantastic collection of Go code for solving many problems. In some cases you can even write entire programs using only the standard library. This includes things like web application support, tarfile support, sql drivers, support for most kinds of commonly used crypto, command line flag parsing, html templating, and regular expressions. A full list of the standard library packages can be found here.

Variable type declarations will look backwards if you are coming from C. It takes a bit of getting used to, but it makes a lot of sense once you realize declarations read left to right.
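
For example, here are a few declarations that show the left-to-right reading (the names are made up for illustration):

package main

import "fmt"

// Read each declaration left to right: "name is a string", "scores is
// a slice of int", "index maps strings to slices of int".
var (
    name   string
    scores []int
    index  map[string][]int
)

// "fetch takes a url string and returns a string and an error."
func fetch(url string) (string, error) {
    return "GET " + url, nil
}

func main() {
    body, err := fetch("http://example.com")
    fmt.Println(name, scores, index, body, err)
}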

For a nice primer on building web apps with Go, codegangsta is writing a book on the common first steps, starting from the standard library and working up. You can find his work in progress book here.

Go has support for unit testing baked into the core language tools. You can find information about writing unit tests here.
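
As a minimal sketch, here is a hypothetical sum_test.go; go test runs any TestXxx function in a file whose name ends in _test.go:

package sum

import "testing"

// Sum is the function under test.
func Sum(a, b int) int { return a + b }

// TestSum checks a known case and reports a failure through t.
func TestSum(t *testing.T) {
    if got := Sum(2, 2); got != 4 {
        t.Errorf("Sum(2, 2) = %d; want 4", got)
    }
}

Run go test in the package directory to execute it.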

When creating a new go project, please resist the urge to make the folder in your normal code folder. Drink the $GOPATH koolaid. Yes it’s annoying, yes it’s the language forcing you to use its standard. Just try it. It’s an amazingly useful thing once you get used to it.

Learn to love godoc. Godoc lets you document code like this. That example also shows off the built-in unit testing support.


Web Application Development with Beego

Permalink - Posted on 2014-11-28 00:00, modified on 0001-01-01 00:00

Web Application Development with Beego

Beego is a fantastic web application framework from the Go China community. It currently powers some of the biggest websites in China, and thus the world.

Let’s get started. For now I am going to assume you are running OSX or Linux. Getting Beego set up on Windows with the sqlite driver is nontrivial at best due to Windows being terrible.

Installing Beego

The Beego developers have made a tool called bee for easier management of Beego projects. To install it, run:

go get github.com/beego/bee
go get github.com/astaxie/beego

The bee tool will be present in $GOPATH/bin. Please make sure this folder is in your $PATH or things will not work.

Creating a Project

Navigate to a directory in your $GOPATH and run the command bee new quickstart.

The bee tool created all the scaffolding we needed for our example program. Change into that directory and run bee run. Your application will be served on port 8080.

Now let’s take a look at the parts of Beego that are in use. Beego is a typical MVC-style framework, so there are 3 basic places you may need to edit code:

The Models are Beego’s powerful database-backed models (we’ll get into those in a little bit), the Views are normal Go html/templates, and the Controllers are the Go code that controls the Views based on the Models.

New Beego projects use Beego’s default HTTP router, which is similar to Sinatra or Tornado. The default router is very simple. It will only route / to the MainController that was generated for you:
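
A rough reconstruction of the generated router (the exact contents depend on your Beego version):

// routers/router.go
package routers

import (
    "quickstart/controllers"

    "github.com/astaxie/beego"
)

func init() {
    // Route / to the generated MainController.
    beego.Router("/", &controllers.MainController{})
}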

The main file blank-imports the router package, which seeds the Beego router with your paths and site content. The MainController embeds beego.Controller, so it acquires all the instance methods a Beego controller needs. Beego controllers offer methods that are invoked for the different HTTP verbs, but this simple example only overrides the GET verb to serve the site. The data that will be passed to the template is a map[string]interface{} stored as c.Data. The last line tells Beego which template to render for the page, in this case “index.tpl”. If you don’t set the template, it will default to “controller/method_name.tpl”, where method_name is the method that was called on the controller; in this example that would be “maincontroller/get.tpl”.
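
Put together, the generated controller looks roughly like this (again a reconstruction of the quickstart scaffolding; in later Beego releases the TplNames field was renamed TplName):

// controllers/default.go
package controllers

import "github.com/astaxie/beego"

// MainController embeds beego.Controller so it picks up all the
// instance methods a Beego controller needs.
type MainController struct {
    beego.Controller
}

// Get handles the GET verb; the other verbs keep their defaults.
func (c *MainController) Get() {
    // c.Data is the map[string]interface{} handed to the template.
    c.Data["Website"] = "beego.me"
    c.Data["Email"] = "astaxie@gmail.com"
    // Which template to render; if unset, it would default to
    // "maincontroller/get.tpl" for this controller and method.
    c.TplNames = "index.tpl"
}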


Dependency Hell

Permalink - Posted on 2014-11-20 00:00, modified on 0001-01-01 00:00

Dependency Hell

A lot of the problems I have run into when doing development with nearly any stack come down to dependency management. This relatively simple-looking problem becomes such an evil, evil thing to tackle. There are several schools of thought. The first is that dependencies need to be frozen the second you ever see them and only upgraded once in a blue moon, when upstream introduces a feature you need or has a CVE released. The second is to have competent maintainers upstream who follow things like semantic versioning.

Ruby

Let’s take a look at how the Ruby community solves this problem.

One job I had required us to install five versions of the Ruby interpreter in order to be compatible with all the different projects they wrote. To manage those five interpreters, they suggested using a widely known tool called rbenv.

This isn’t even the full list of Rubies that job required. I have decided not to reveal the rest out of an interest in privacy, and because even Gentoo did not ship a version of gcc old enough to build the oldest Ruby.

After all this, of course, all the dependencies are locked using the gem tool and another helper called bundler. It’s just a mess.

There are also language design features of Ruby that really do not help with this; they make even simple questions like “will this code run or not” only answerable at runtime. To be fair, Python is the same way, as is nearly every other scripting language. In Lua’s case this is beyond vital, because Lua is designed to be embedded into pretty much anything, with arbitrary globals being set willy-nilly. Consequently, you can’t make an autocomplete for Lua without executing the code in its preferred environment (unless you just guess based on the requires and other files present in the directory).

Python

The Python community has largely copied the Ruby pattern, but they advocate creating local, project-specific prefixes containing all the packages/eggs you installed and a list of them, instead of compiling an entire Python interpreter per project. With the Python 2->3 change a lot of things did break. This is okay. There was a major version bump. Of course compiled modules would need to be redone after a change like that. I think the way Python handles Unicode in version 3 is ideal and should be an example for other languages.

Virtualenv and pip are not as bad as bundler and gem are for Ruby. Virtualenv makes changes to your environment variables that are very clear and easy to compare and inspect. This is in contrast to the Ruby tools, which encourage global modifications of your shell and supersede the packaged versions of the language interpreter.

The sad part is that I see this pattern of senseless locking of versions continuing elsewhere instead of proper maintenance of libraries and projects.

Insanity

To make matters worse, people suggest you actually embed all the source code for every dependency inside the repository, meaning your commit graphs and line counts are skewed by the contents of your upstream packages instead of just the code you wrote. Admittedly, vendoring dependencies like this does mean that fantastic language-level tools such as go get work again, but overall it is just not worth the pain of manually merging in patches from upstream (though if you do think it is worth the pain, contact me; I’m open for contract work) while making sure to change the file paths to match your tree.

The Solution

I believe the solution to all this, and something that needs to be a wider community effort among users of all programming languages, is a technique called semantic versioning. In some languages like Go, where import paths are based on repository paths, this may mean that a new major version lives at a different repository path. This is okay. Backwards compatibility is good. After you make a stable (1.0 or whathaveyou) release, nothing should ever be taken away or changed in the public API. If something in the public API needs to work differently, you must keep backwards compatibility. As soon as you take away or modify something in the public API, you have made a change significant enough to warrant a major release.
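
As a sketch of what this means in a language with repository-based import paths (the repositories and the Greet function here are hypothetical):

package main

import (
    "fmt"

    foo "github.com/example/foolib"   // hypothetical: the stable 1.x series
    foo2 "github.com/example/foolib2" // hypothetical: breaking 2.x at a new path
)

// Both major versions can coexist in one build because the breaking
// release lives at a different import path; nothing changes out from
// under the 1.x consumers.
func main() {
    fmt.Println(foo.Greet(), foo2.Greet())
}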

We need to make semver a de-facto standard in the community instead of freezing things and making security patches hard to distribute.

Also, use the standard library more. It’s there for a reason. It doesn’t change much, and if you trust the stability of the language you can assume its maintainers are sane.

This is just my $0.02.


My Experience with Atom as A Vim User

Permalink - Posted on 2014-11-18 00:00, modified on 0001-01-01 00:00

My Experience with Atom as A Vim User

Historically, I am a Vim user. People know me as a very, very heavy Vim user. I have spent almost the last two years customizing my .vimrc file, and I have parts of it mapped to the ways I think. Recently I acquired both a Mac Pro and a Surface Pro 3, and my Vim configuration didn’t work on them. For a while I used Docker and the image I made of my preferred dev environment to shim and hack around this.

Then I took a fresh look at Atom, GitHub’s text editor that claims to be a replacement for Sublime. Since then I have moved to using Atom as my main text editor for programming on OS X and Windows, while still using my fine-tuned Vim setup on Linux. I like how I have Atom set up. It has a lot of (but sadly not all) the features I have come to love in my Vim setup.

I also like that I can have the same setup on both my Mac and in Windows. I have the same vim-mode bindings on both machines (I only customize so far as to add :w and :q bindings), can easily jump from one to the other with Synergy, and have little to no issue with editor differences. I typically end up taking my Surface out with me to a lot of places, and I will code up new ideas on the bus or in the food court of the mall.

Atom gets a lot of things right with the plugins I have. I have Autocomplete+ and a plugin for it that uses gocode for autocompletion as I type, like I have with vim-go and YouCompleteMe in Vim. Its native package support and extensibility are bar none the easiest way to add things to an editor that I have ever seen.

But there are problems with Atom, mostly rooted in how I use text editors and in my understanding of programming with JavaScript, CoffeeScript, HTML and CSS. Atom is mostly a CoffeeScript editor. That means I can customize almost any aspect of it at runtime, but I would have to learn one if not five more languages to describe the layouts or interfaces I would like to add. Being a hybrid between a web application and a normal desktop application also means I am afraid to add things I normally would, such as raw socket support for collaborating on a single document, PiratePad style. Additionally, the Vim emulation mode in Atom supports neither ex-style :-commands nor <Leader>, so a fair bit of my editing is toned down and done more manually to make up for it.

I wish I could just use vim natively with my preferred setup on Windows, OSX and Linux, but for now Atom is the lesser of all the evils.


Update: I am now atom-free on my surface pro 3


Instant Development Environments in Docker

Permalink - Posted on 2014-10-24 00:00, modified on 0001-01-01 00:00

Instant Development Environments in Docker

I have been using a few shell scripts for turbocharging development using Docker and today I have released the first version of a simple tool I call “dev”. Usage is very very simple.

$ dev up
Starting up container for spike
spike-dev (43c5c1) running!
To use this container please attach to it with:
  $ docker attach spike-dev
$ docker attach spike-dev
docker:dev:spike ~
-->

I have made a simple asciinema recording describing the process of setting up and tearing down these containers. The development environments have the code you are working on mounted to ~/dev in the container.

The containers are defined by a simple manifest file in yaml:

base:     xena/base
repopath: github.com/Xe/test
golang:   false
ssh:      true
user:     xena
projname: test
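
A manifest like this maps naturally onto a small struct. Here is a hypothetical sketch in Go using the gopkg.in/yaml.v2 package; dev itself may parse it differently:

package main

import (
    "fmt"

    "gopkg.in/yaml.v2"
)

// Manifest mirrors the fields of the manifest file above.
type Manifest struct {
    Base     string `yaml:"base"`
    RepoPath string `yaml:"repopath"`
    Golang   bool   `yaml:"golang"`
    SSH      bool   `yaml:"ssh"`
    User     string `yaml:"user"`
    ProjName string `yaml:"projname"`
}

func main() {
    data := []byte("base: xena/base\nrepopath: github.com/Xe/test\nssh: true\nuser: xena\nprojname: test\n")
    var m Manifest
    if err := yaml.Unmarshal(data, &m); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", m)
}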

Right now dev is a very immature tool and currently Works For Me ™. If you have any issues with it or questions about it, please open an issue on its GitHub issue tracker.

Thanks for taking a look at it and please let me know if it works for you too!


MPD Via Docker

Permalink - Posted on 2014-10-20 00:00, modified on 0001-01-01 00:00

MPD Via Docker

Today I got mpd set up inside docker to replace running it locally.

Being the perfectionist I am, I also got a simple web UI for mpd (ympd) set up.

You can find the source repos here:

Readmes will be in each repository shortly.


Pursuit of a DSL

Permalink - Posted on 2014-08-16 00:00, modified on 0001-01-01 00:00

Pursuit of a DSL

A project we have been working on is Tetra, an extended services package in Go with Lua and Moonscript extensions. While writing Tetra I have learned how to create a Domain Specific Language, and I would like to recommend Moonscript as a toolkit for creating DSLs.

Moonscript is a high-level language that compiles to Lua, designed to make programming easier. We have used Moonscript heavily in Tetra because of how easy it is to write very idiomatic code in it.

Here is some example code from the Tetra codebase for making a command:

require "lib/elfs"

Command "NAMEGEN", ->
  "> #{elfs.GenName!\upper!}"

That’s it. That creates a command named NAMEGEN that uses lib/elfs to generate goofy heroku-like application names based on names from Pokemon Vietnamese Crystal.

In fact, because this is so simple and elegant, you can document code like this inline.

Command Tutorial

In this file we describe an example command TEST. TEST will return some information about the place the command is used as well as explain the arguments involved.

Because Tetra is a polyglot of Lua, Moonscript and Go, the relevant Go objects will have their type definitions linked to on godoc.

Declaring commands is done with the Command macro. It takes in two arguments.

  1. The command verb
  2. The command function

It can also take 3 arguments if the command needs to be restricted to IRCops only.

  1. The command verb
  2. true
  3. The command function

The command function can have up to 3 arguments set when it is called. These are:

  1. The Client that originated the command call.
  2. The Destination or where the command was sent to. This will be a Client if the target is an internal client or a Channel if the target is a channel.
  3. The command arguments as a string array.
Command "TEST", (source, destination, args) ->

All scripts have client pointing to the pseudoclient that the script is spawned in. If the script name is chatbot/8ball, the value of client will point to the chatbot pseudoclient.

  client.Notice source, "Hello there!"

This will send a NOTICE to the source of the command saying “Hello there!”.

  client.Notice source, "You are #{source.Nick} sending this to #{destination.Target!} with #{#args} arguments"

All commands must return a string with a message for the user. This is a good place to summarize the output of the command, or to say whether or not it worked. If the command is oper-only, this message will be logged to the services snoop channel.

  "End of TEST output"

See? That easy.

Command "TEST", ->
    "Hello!"

This is much better than Cod’s equivalent:

#All modules have a name and description
NAME="Test module"
DESC="Small example to help you get started"

def initModule(cod):
    cod.addBotCommand("TEST", testbotCommand)

def destroyModule(cod):
    cod.delBotCommand("TEST")

def testbotCommand(cod, line, splitline, source, destination):
    "A simple test command"
    return "Hello!"


Thoughts on Community Management

Permalink - Posted on 2014-07-31 00:00, modified on 0001-01-01 00:00

Thoughts on Community Management

Many open source community projects lack proper management. They can put too much of their resources in too few places. When that one person falls out of contact or goes rogue on everyone, it can have huge effects on everyone involved in the project: users, contributors and admins.

Here, I propose an alternative management structure based on what works.

Organization

Contributors and Project Administrators are there to take input and feedback from Users, rectify the situation, or explain why doing so would be counterproductive. This will be done kindly, and replies will be run past at least one other person before being posted publicly. This includes (but is not limited to) email, IRC, forums, anything. A person involved in the project is a representative of it. They are the face of it. If they are rude, it taints the image of everyone involved.

Access

Project Administrators will have full, unfiltered access to anything the project has. This includes root access, billing access, everything. There will be no reason to hide things. Operational conversations will be shared. All group decisions will be voted on with a simple Yes/No/Abstain process. As such this team should be kept small.

Contributions

Contributors will have to make pull requests, as will Administrators. There will be review on all changes made. No commits will be pushed to master without approval. This allows the proper review and testing procedures to be applied to all contributed code.

Additionally, to make it easy for scripts to scrape the commits when something is released, a commit style should be enforced.

Commit Style

The following section is borrowed from Deis’ commit guidelines.


We follow a rough convention for commit messages borrowed from CoreOS, who borrowed theirs from AngularJS. This is an example of a commit:

feat(scripts/test-cluster): add a cluster test command

this uses tmux to setup a test cluster that you can easily kill and
start for debugging.

To make it more formal, it looks something like this:

{type}({scope}): {subject}
<BLANK LINE>
{body}
<BLANK LINE>
{footer}

The {scope} can be anything specifying place of the commit change.

The {subject} needs to use imperative, present tense: “change”, not “changed” nor “changes”. The first letter should not be capitalized, and there is no dot (.) at the end.

Just like the {subject}, the message {body} needs to be in the present tense, and includes the motivation for the change, as well as a contrast with the previous behavior. The first letter in a paragraph must be capitalized.

All breaking changes need to be mentioned in the {footer} with the description of the change, the justification behind the change and any migration notes required.

Any line of the commit message cannot be longer than 72 characters, with the subject line limited to 50 characters. This allows the message to be easier to read on github as well as in various git tools.

The allowed {types} are as follows:

feat -> feature
fix -> bug fix
docs -> documentation
style -> formatting
ref -> refactoring code
test -> adding missing tests
chore -> maintenance

I believe that these guidelines would lead towards a harmonious community.


IRCv3.2 CHGHOST Extension

Permalink - Posted on 2013-10-04 00:00, modified on 0001-01-01 00:00

IRCv3.2 CHGHOST Extension

The chghost client capability allows a server to directly inform clients about a host or user change without having to send a fake quit and join. This capability MUST be referred to as chghost at capability negotiation time.

When enabled, clients will get the CHGHOST message when the user or host of a user on a common channel with them changes.

The CHGHOST message is one of the following:

:nick!user@host CHGHOST user new.host.goes.here

This message represents that the user identified by nick!user@host has changed host to another value. The first parameter is the client’s username, unchanged in this form. The second parameter is the new host the client is using.

On IRC daemons with support for changing the user portion of a client, the second form may appear:

:nick!user@host CHGHOST newuser host

A client may also have their user and host changed at the same time:

:nick!user@host CHGHOST newuser new.host.goes.here

The second and third forms should only be seen on IRC daemons that support changing the user field of a user.

In order to take full advantage of the CHGHOST message, clients must be modified to support it. The proper way to do so is this:

  1. Enable the chghost capability at capability negotiation time during the login handshake.

  2. Update the user and host portions of data structures and process channel users as appropriate.

Examples

In this example, tim!~toolshed@backyard gets their username changed to b and their hostname changed to ckyard:

:tim!~toolshed@backyard CHGHOST b ckyard

In this example, tim!b@ckyard gets their username changed to ~toolshed and their hostname changed to backyard:

:tim!b@ckyard CHGHOST ~toolshed backyard
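
To make step 2 above concrete, here is a minimal sketch in Go of how a client might apply the first example to its own state (the types are hypothetical; no particular IRC library is assumed):

package main

import (
    "fmt"
    "strings"
)

// Client is a minimal record of a user this client knows about.
type Client struct {
    Nick, User, Host string
}

// handleChghost applies ":nick!user@host CHGHOST newuser newhost" to
// the matching client record.
func handleChghost(clients map[string]*Client, line string) error {
    parts := strings.Fields(line)
    if len(parts) != 4 || parts[1] != "CHGHOST" {
        return fmt.Errorf("not a CHGHOST message: %q", line)
    }
    nick := strings.SplitN(strings.TrimPrefix(parts[0], ":"), "!", 2)[0]
    c, ok := clients[nick]
    if !ok {
        return fmt.Errorf("unknown client %q", nick)
    }
    // The server always sends the full new user and host pair.
    c.User, c.Host = parts[2], parts[3]
    return nil
}

func main() {
    clients := map[string]*Client{
        "tim": {Nick: "tim", User: "~toolshed", Host: "backyard"},
    }
    if err := handleChghost(clients, ":tim!~toolshed@backyard CHGHOST b ckyard"); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", clients["tim"]) // &{Nick:tim User:b Host:ckyard}
}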

Errata

A previous version of this specification did not include any examples, which made it unclear as to whether the de-facto ~ prefix should be included on CHGHOST messages. The new examples make clear that it should be included.