
Christine Dodrill's Blog

My blog posts and rants about various technology things.

A feed by Christine Dodrill



Anbernic RG280M Review

Permalink - Posted on 2021-09-04 00:00

When I started this blog a few years ago, I never thought I'd end up covering a lot of the things that I currently cover. Today's topic is something completely different from my normal blog fare: a handheld console that I recently got to get my retro game fix on the go, the Anbernic RG280M.

A picture of the RG280M handheld

People don't really expect this out of me for some reason, but I am a gamer. I play a lot of games, old and new, and I've wanted to get into some older games without having to tether myself to a PC in the basement. Enter the RG280M. The RG280M is a pocket-size handheld that uses OpenDingux and RetroArch to emulate a wide array of systems, basically everything you could think of right up to the original PlayStation.

The main games I wanted to get out of this were some SNES romhacks (Hyper Metroid and some other Super Mario World hacks like Invictus), DOS games (particularly Cosmo's Cosmic Adventure), Game Boy Advance games like Mario and Luigi: Superstar Saga, and a good Tetris round or two. When I was messing with the RG280M, it knocked everything out of the park save DOS emulation (which I was able to fix once I installed an optimized port of DOSBox).

This was also one of my first orders from AliExpress. AliExpress is a sort of consumer-focused view of Alibaba (kinda like the Amazon of Asia) where you can buy single units of things instead of having to order in bulk. I originally thought I was going to get an RG351M (and the case I got actually shows the RG351M name), but through misreading the listing I ended up with this RG280M instead. I don't understand why they put totally separate models of gaming system in the size/color selection area, but apparently they did and I misread things, so I have this console. I also got a car decal and a few notebooks, and those have turned out to be pretty great (though the decal came bent).

Cadey is enby
<Cadey> I wanted to get the RG351M for its wifi so I could have it on my Tailscale network for the meme, but the RG280M is a fine system on its own.

Something neat about OpenDingux is that it allows you to install additional applications using opk files, which are a squashfs image of an application binary and any additional data files that the program needs. Through this I was able to install things such as Super Mario 64, which adds a surprising amount of extra fun. The Super Mario 64 port runs flawlessly and the only complaints I have about it are complaints that I had with the original N64 game.
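If you're curious what an opk actually is under the hood, packing one up is basically a mksquashfs run over a directory containing the binary, its data, and whatever launcher metadata OpenDingux expects. The directory layout here is a hypothetical sketch:


# my-app/ holds the binary, its data files, and the launcher metadata (hypothetical layout)
$ mksquashfs my-app/ my-app.opk -all-root -noappend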

Mara is happy
<Mara> If you are wanting to get into retro handheld devices, seriously check out the RetroGameCorps YouTube channel. It is phenomenal. It has both video and written writeups on how to do simple and advanced things with retro emulation devices and is honestly the kind of quality that we strive for on this blog.

The stock firmware of the RG280M is functional, but it can be a bit odd to use. It's very easy to replace it with a custom image, though, because of how the RG280M stores data. It uses two MicroSD cards: one for your games and the other for the OS and save data.

A picture of the two TF/MicroSD cards

Mara is hacker
<Mara> The "TF" acronym here means TransFlash, which was the original name for MicroSD cards and is notably not under the same kind of trademark protection that MicroSD is. As such, many retro emulation devices like this will use TF as the acronym to avoid either licensing costs or trademark infringement.

This means that you can flash a new firmware image to the system card and go from there. I personally use the Adam Image on my system. It has better RetroArch integration and includes a game of 2048 by default.
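On Linux, flashing an image like that to the system card is the usual dd dance. The image filename and target device below are placeholders; double-check the device with lsblk first, because dd will happily overwrite the wrong disk:


$ lsblk
$ sudo dd if=adam-image.img of=/dev/sdX bs=4M status=progress conv=fsync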

One of my bigger gripes with RetroArch is that I haven't found a way to selectively do screen-size scaling on a per-core basis (Game Boy ROMs kinda need scaling, but I really do not want scaling on SNES or GBA ROMs to avoid distorting the image); however, I'm pretty sure I'm missing something obvious in the giant list of RetroArch settings.

Cadey is coffee
<Cadey> If you know what I'm doing wrong here, please let me know.

Something really refreshing about this system is how darn easy it is to modify it. I can just replace the OS it's running with custom firmware. If I want to upgrade storage, I can pop in a bigger SD card. If I want to tweak things, I can. I can even develop my own software for it and have an easy distribution method for it in the form of OPK files. It's a very refreshing thing compared to the difficulties that I have running things on my iPhone. The device comes with a root shell out of the box and you can connect to it over SSH via a USB cable (remember that this doesn't have a wifi card in it so you need to do networking over USB). Software gets categorized and everything just works out for you with little effort required.

The game I've gotten the most playtime out of is Hyper Metroid, a sort of enhanced and remixed hack of Super Metroid that does some really interesting experimental takes on the Metroid ammo system (Missiles, Super Missiles and Power Bombs all pull from the same ammo pool instead of having separate pools per weapon), and it runs flawlessly on the RG280M. One of the tests I have for d-pads on game controllers is whether you can do wall jumps in Super Metroid, and the 280M passes that test with flying colors. It's a 5-frame window in which you have to do a complete reversal of the d-pad, and some controllers (like the Xbox 360 controller) simply do not give you enough precision to get it done without extraneous inputs that would mess up the walljump timing.

With the default configuration, there is an amazing level of gamefeel on everything I've played. The system is snappy and responsive, so tight platforming in Mario games works amazingly. There's no slowdown or lag when playing anything I can throw at it. It Just Works. I'm able to play games from my childhood on the go without too much configuration or effort. If you are looking for something like this, you can't go wrong with the RG280M. It's about CAD$100 after currency conversion is done (AliExpress wanted me to pay for it in euros for some reason, so it was something like 86 euros in case you want to do the conversion to your currency of choice). It's been well worth the money in my book.

The battery life gets me about 6 hours of playtime, which is more than enough for my needs. It's nowhere near the legendary battery life of the GBA or DS Lite, but it's more than sufficient for what it's doing. It's got better battery life than the Switch, so that's probably good enough for longer road trips.

It also gets a huge thumbs up from me for charging over USB-C. This makes a lot of sense, and it's kind of baffling that this cheapo emulator console from China can do USB-C properly while Apple can't put USB-C on an iPhone. It's one less cable I need to carry in my bag.

Overall I'd rate this device at an 8/10. It's not perfect, there are some very minor things that I bet could be improved on in future iterations (I'd love to see a higher resolution screen and maybe DS emulation support); however it delivers what it sets out to deliver and does it smiling. On-device wifi would be an added bonus (it would be really damn convenient to SFTP games over my Tailnet, or even write something that would listen for files over Taildrop and automagically sort them into the right folders), but I can live without it.

If you want to play DOS games on it, be sure to get this DOSBox port, as it is a lot more performant than the one that comes out of the box. It will turn the 10-ish frames per second gameplay of Cosmo's Cosmic Adventure into a fully playable, vsync-locked experience.

If you are in the market for this kind of device, you really can't go wrong with the Anbernic RG280M. It is a solid little chonker and will do everything it says it can on the box.


I Forgive Me

Permalink - Posted on 2021-08-22 00:00

I took a shower. These words came to me while I was analyzing my life during the shower. I kept them fresh in my heart and built on them while I was taking that shower. I wrote them down here.


I forgive me.

Oftentimes I feel the urge to fight against myself for the things that happen in the world around me. This has created a scenario where I am both more prone to "failure" and deathly afraid of it. By beating myself up so consistently I have created more harm than I was hoping to avoid by doing that in the first place. I was spanked as a child when I did certain kinds of misbehavior. That happened. It's in my past and it can't unhappen. I need to take care to make sure the cycle does not continue by starting it on myself. Even if I feel like things are a "failure". Even if other people report that it is a "failure". I remain.

I forgive me for the things that have happened. The self is shaped and molded by the past that the self experiences, which means that the self can become an avatar of all those who have hurt you and those you have hurt; but at the same time it is also representative of all of those who have loved you and you have loved in return.

I guess the beating up happens because instinctively I am expecting there to be someone to be punished; someone to be hurt; someone to bear the weight of the "failure". But that doesn't need to happen. People don't need to be hurt because of "failure".

My self is the closest link I have to my past. To all the things that have hurt me and all the things that have loved me. In doing what I have been doing, I have created a war within myself that is only serving to sabotage me and I cannot have this continue any longer. This does not serve me and I need to cut it out so the things that do serve me can remain.

I need to be more comfortable with "failure", for "failure" is how we learn. The road to healing trauma is a step one by one down a miles long road, but I will take that first step, and the next; and the next; and the next; and the next; all the way for the rest of my life.

I forgive me for beating up my closest ally. I forgive me for beating up myself.

Going forward, I will love where I hated in the past.


Spaceship Adventure

Permalink - Posted on 2021-08-19 00:00

I made a little interactive fiction story! You can find it here. This was written as a result of a terrible idea I had after watching some QuakeCon announcements.

I wonder if I can get away with using an <iframe> in 2021:

This is adapted from a twitter thread.


Spellblade Plans

Permalink - Posted on 2021-08-16 00:00

Cadey is enby
<Cadey> If you had subscribed to my Patreon, you could have read this a week ago!

Hey, I don't normally do this kind of thing because it impacts my productivity, but I'd like to detail my plans for the novel I've been working on, off and on, for almost a year.

I've wanted to write a novel for a very long time, if only to do it and have done it and point to it on a shelf as something that I created. I have been stuck in story hell for a while, but then inspiration struck about the time that I realized I was nonbinary. Being nonbinary can be an odd thing for people that aren't used to thinking about gender that way, and I wanted to play with that idea as something the main character would live through.

In essence, Spellblade is a story about the main character Alicia coming out as nonbinary in a world that is very polarized by gender, in this case with the schools of spellcraft and bladecraft. Alicia is a spellblade, or a person that stands directly in the middle of the two diametrically opposed halves (they're more of a bladecrafter than a spellcrafter, but either way very much in the middle).

Cadey is coffee
<Cadey> Correction(2021-08-16 16:54): A previous version of this article said that Alicia was the Spellblade, not a spellblade. Spellblades are actually not uncommon in the world of the book; most are just never aware of it being a thing or are very quiet about it due to social taboo. Many people in the book may either be a spellblade or know one without being aware of it (many spellcrafters certainly are, but could chalk it up to luck or cross-functional training without thinking too much about it). My aim is to have a mostly cerebral story about gender (without overtly mentioning gender in the book text), not tell an epic tale like Avatar: The Last Airbender. I messed up.

This book is entirely a riff on gender and is a stepping stone for me to both get better at writing and to try to convey very complicated feelings and moods about being nonbinary in a very binary world. I forget who said this, but someone that was once close to me said that simple phrases can only explain simple ideas, but more complicated things require a whole novel. This is a novel in which I try to explain the feelings and moods of being nonbinary.

That being said, this book is my first novel and as such I do not expect it to be perfect, far from it in fact. I'm pretty horrible at longer form writing like this. I've been learning a lot as I go along though, and I'm certain that if I write another novel like this in the future that it will both take a lot less time to make and likely be a lot better to boot.

As a teaser, here's something from my local drafting folder, the first scene of the book. Enjoy!


Alicia scanned across the clearing. Her cat eyes darted across the field, her ears focused forward and ready for victory. The battlefield was a wide open grassy field without any good spots to take cover. Her team was losing. Badly. Her team's flag was solidly in the hands of the enemy and every attempt to wrestle it free had failed. Alicia looked over the field and got a very terrible idea. She turned towards her friend Tistus and whispered "We're going to try a pincer attack, when you get the flag, run like your life depends on it" into his ear. Tistus nodded back and stretched his legs a little. Alicia told the other group the plan via hand signals. Alicia sent the "go" signal and they all took off sprinting.

The two groups managed to pinch the blue team together, their backs against each other and tails interlocking for stability. Zekas, the blue team leader with the red team's flag loomed over red team and readied his wooden sword. His plan almost worked, if only he wasn't struck in the back on his right side by Alicia's matching wooden sword. Zekas dropped his sword and growled at Alicia, baring his shark teeth to her. Tistus managed to squeeze between the other blue team members and yanked the flag free. He held it for dear life in his claws and gave Alicia a signal with his tail.

Alicia took a hit to the side from one of the guards she was struggling against and a small trickle of blood started to run down her left leg. Zekas noticed that his flag was missing and saw Alicia's cut, sending him into a rage. "You witch, I'll kill you!" Zekas knocked Alicia to the ground with an uppercut and Tistus sprinted off for home base while blue team surrounded Alicia. Zekas tried to grab what he thought was the flag and got Alicia's blood on his hand. Zekas growled again and looked around, seeing Tistus fleeing like his life depended on it.

"AFTER THEM!"

Blue team took off and tried to catch up with Tistus, but the distraction was long enough that his victory was all but guaranteed. The rest of red team didn't bother to chase down blue team, they knew Tistus was the fastest sprinter at the bladecraft school.

The horn sound resonated throughout the area. Tistus had made it to the base and the exercise was over.

Red team had won.

Puri, one of Alicia's teammates, helped Alicia up and gave her a bandage. Alicia dressed her wound and felt a surge of lightning race up her hand. The exact kind of surge she didn't want to have around other people.

No no no no no no. Focus, calm, let it drain to the lightning rod in my heart. Alicia ran through the exercises her father told her to do and the feeling settled. The surge had stopped, but so had the bleeding. Alicia and the rest of her team started walking back to camp, catching up with Zekas' slow gait.

Zekas growled at Alicia in frustration, "Nice trick with that fake flag. You got me."

"Trick? Oh, the guard to my left cut my leg with his sword. I think you got got."

Zekas facepalmed and looked at Alicia. No real emotion, he just looked at her. Alicia's roughly six foot snep frame. There was a trickle of blood down her left side, but other than that her white fur and gray spots were well-kept, with her chocolate brown hair tied into a battle bun. "Lemme see that wound."

"I'm gonna let the apothecary take a look at it, it's still hurting but I can walk on it."

Alicia and Zekas started walking back to camp with the remnants of their teams.

Zekas laughed, "At least you didn't let that witch beat you." and he led the way towards the teacher's garrison.

Alicia nervously laughed back and followed suit.


Paranoid NixOS on AWS

Permalink - Posted on 2021-08-11 00:00

In the last post we covered a lot of the base groundwork involved in making a paranoid NixOS setup. Today we're gonna throw this into prod by making a base NixOS image with it.

Cadey is coffee
<Cadey> Normally I don't suggest people throw these things into production directly, if only to have some kind of barrier between you and your money generator; however today is different. It's probably not completely unsafe to put this in production, but I really would suggest reading and understanding this article before doing so.

At a high level we are going to do the following:

  • Pin production OS versions using niv
  • Create a script to automatically generate a production-ready NixOS image that you can import into The Cloud
  • Manage all this using your favorite buzzwords (Terraform, Infrastructure-as-Code)
  • Install an nginx server reverse-proxying to the printer facts service

What is an Image?

Before we yolo this all into prod, let's cover what we're actually doing. There are a lot of conflicting buzzwords here, so I'm going to go out of my way to attempt to simplify them down so that we use my arbitrary definitions of buzzwords instead of what other people will imply they mean. You're reading my blog, you get my buzzwords; it's as simple as that.

In this post we are going to create a base system that you can build your production systems on top of. This base system will be crystallized into an image that AWS will use as the initial starting place for servers.

Mara is hmm
<Mara> So you create the system definition for your base system, then turn that into an image and put that image into AWS?

Cadey is enby
<Cadey> Yep! The exact steps are a little more complicated but at a high level that's what we're doing.

Base Setup

I'm publishing my work for this post here, but you can follow along in this post to understand the individual steps.

First, let's set up the environment with lorri and niv. Lorri will handle creating a cached nix-shell environment for us to run things in and niv will handle pinning NixOS to an exact version so you can get a more reproducible production environment.

Set up lorri:


$ lorri init
Aug 11 09:41:50.966 INFO wrote file, path: ./shell.nix
Aug 11 09:41:50.966 INFO wrote file, path: ./.envrc
Aug 11 09:41:50.966 INFO done
direnv: error /home/cadey/code/cadey/paranix-configs/.envrc is blocked. Run `direnv allow` to approve its content
$ direnv allow
direnv: loading ~/code/cadey/paranix-configs/.envrc
Aug 11 09:41:54.581 INFO lorri has not completed an evaluation for this project yet, nix_file: /home/cadey/code/cadey/paranix-configs/shell.nix
direnv: export +IN_NIX_SHELL

Mara is hacker
<Mara> Why are you putting the $ before every command in these examples? It looks extraneous to me.

Cadey is enby
<Cadey> The $ is there for two main reasons. First, it allows there to be a clear delineation between the commands being typed and their output. Secondly it makes it slightly harder to blindly copy this into your shell without either editing the $ out or selecting around it. My hope is that this will make you read the command and carefully consider whether or not you actually want to run it.

Set up niv:


$ niv init
Initializing
  Creating nix/sources.nix
  Creating nix/sources.json
  Importing 'niv' ...
  Adding package niv
    Writing new sources file
  Done: Adding package niv
  Importing 'nixpkgs' ...
  Adding package nixpkgs
    Writing new sources file
  Done: Adding package nixpkgs
Done: Initializing

Mara is hacker
<Mara> If you don't already have niv in your environment, you can hack around that by running all the niv commands before you set up shell.nix like this:
$ nix-shell -p niv --run 'niv blah'

And finally pin nixpkgs to a specific version of NixOS.

Mara is hacker
<Mara> At the time of writing this article, NixOS 21.05 is the stable release, so that is what is used here.


$ niv update nixpkgs -b nixos-21.05
Update nixpkgs
Done: Update nixpkgs
$ 

This will become the foundation of our NixOS systems and production images.

You should then set up your shell.nix to look like this:


let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs { };
in pkgs.mkShell {
  buildInputs = with pkgs; [
    niv
    terraform
    
    bashInteractive
  ];
}

Set Up Unix Accounts

Mara is hacker
<Mara> This step can be omitted if you are grafting this into an existing NixOS configs repository, however it would be good to read through this to understand the directory layout at play here.

It's probably important to be able to have access to production machines. Let's create a NixOS module that will allow you to SSH into the machine. In your paranix-configs folder, run this command to make a common config directory:


$ mkdir common
$ cd common

Now in that common directory, open default.nix in your favorite text editor and copy in this skeleton:


# common/default.nix

{ config, lib, pkgs, ... }:

{
  imports = [ ./users.nix ];
  
  nix.autoOptimiseStore = true;

  users.users.root.openssh.authorizedKeys.keys = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg9gYKVglnO2HQodSJt4z4mNrUSUiyJQ7b+J798bwD9" ];
  
  services.tailscale.enable = true;
  
  # Tell the firewall to implicitly trust packets routed over Tailscale:
  networking.firewall.trustedInterfaces = [ "tailscale0" ];
  
  security.auditd.enable = true;
  security.audit.enable = true;
  security.audit.rules = [
    "-a exit,always -F arch=b64 -S execve"
  ];
  
  security.sudo.execWheelOnly = true;
  environment.defaultPackages = lib.mkForce [];
  
  services.openssh = {
    passwordAuthentication = false;
    allowSFTP = false; # Don't set this if you need sftp
    challengeResponseAuthentication = false;
    extraConfig = ''
      AllowTcpForwarding yes
      X11Forwarding no
      AllowAgentForwarding no
      AllowStreamLocalForwarding no
      AuthenticationMethods publickey
    '';
  };
  
  # PCI compliance
  environment.systemPackages = with pkgs; [ clamav ];
}

Mara is hacker
<Mara> Astute readers will notice that this is less paranoid than the last post. This was pared down after private feedback.

This will create common as a folder that can be imported as a NixOS module with some basic settings and then tells NixOS to try importing users.nix as a module. This module doesn't exist yet, so it will fail when we try to import it. Let's fix that by making users.nix:


# common/users.nix

{ config, lib, pkgs, ... }:

with lib;

let
  # These options will be used for user account defaults in
  # the `mkUser` function.
  xeserv.users = {
    groups = mkOption {
      type = types.listOf types.str;
      default = [ "wheel" ];
      example = ''[ "wheel" "libvirtd" "docker" ]'';
      description =
        "The Unix groups that Xeserv staff users should be assigned to";
    };
    
    shell = mkOption {
      type = types.package;
      default = pkgs.bashInteractive;
      example = "pkgs.powershell";
      description =
        "The default shell that Xeserv staff users will be given by default.";
    };
  };
  
  cfg = config.xeserv.users;

  mkUser = { keys, shell ? cfg.shell, extraGroups ? cfg.groups, ... }: {
    isNormalUser = true;
    inherit extraGroups shell;
    openssh.authorizedKeys = {
      inherit keys;
    };
  };
in {
  options.xeserv.users = xeserv.users;
  
  config.users.users = {
    cadey = mkUser {
      keys = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg9gYKVglnO2HQodSJt4z4mNrUSUiyJQ7b+J798bwD9" ];
    };
    twi = mkUser {
      keys = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPYr9hiLtDHgd6lZDgQMkJzvYeAXmePOrgFaWHAjJvNU" ];
    };
  };
}

Mara is hacker
<Mara> It's worth noting that xeserv in there can be anything you want. It's set to xeserv as we are imagining that this is for the production environment of a company named Xeserv.
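
With those options defined, any host config that imports common can override the defaults; here's a hypothetical fragment (the group list and shell are just examples):


# hypothetical host config fragment

{ pkgs, ... }: {
  xeserv.users = {
    groups = [ "wheel" "docker" ];
    shell = pkgs.zsh;
  };
}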

Paranoid Settings

Next we're going to set up the paranoid settings from the last post into a module named paranoid.nix. First we'll need to grab impermanence into our niv manifest like this:


$ niv add nix-community/impermanence
Adding package impermanence
  Writing new sources file
Done: Adding package impermanence

Then open common/default.nix and change this line:


imports = [ ./users.nix ];

To something like this:


imports = [ ./paranoid.nix ./users.nix ];

Then open ./paranoid.nix in a text editor and paste in the following:


# common/paranoid.nix

{ config, pkgs, lib, ... }:

with lib;

let
  sources = import ../nix/sources.nix;
  impermanence = sources.impermanence;
  cfg = config.xeserv.paranoid;
  
  ifNoexec = if cfg.noexec then [ "noexec" ] else [ ];
in {
  imports = [ "${impermanence}/nixos.nix" ];

  options.xeserv.paranoid = {
    enable = mkEnableOption "enables ephemeral filesystems and limited persistence";
    noexec = mkEnableOption "enables every mount on the system save /nix being marked as noexec (potentially dangerous at a social level)";
  };
  
  config = mkIf cfg.enable {
    fileSystems."/" = mkForce {
      device = "none";
      fsType = "tmpfs";
      options = [ "defaults" "size=2G" "mode=755" ] ++ ifNoexec;
    };
    
    fileSystems."/etc/nixos".options = ifNoexec;
    fileSystems."/srv".options = ifNoexec;
    fileSystems."/var/lib".options = ifNoexec;
    fileSystems."/var/log".options = ifNoexec;
    
    fileSystems."/boot" = {
      device = "/dev/disk/by-label/boot";
      fsType = "vfat";
    };

    fileSystems."/nix" = {
      device = "/dev/disk/by-label/nix";
      autoResize = true;
      fsType = "ext4";
    };

    boot.cleanTmpDir = true;

    environment.persistence."/nix/persist" = {
      directories = [
        "/etc/nixos" # nixos system config files, can be considered optional
        "/srv" # service data
        "/var/lib" # system service persistent data
        "/var/log" # the place that journald dumps it logs to
      ];
    };

    environment.etc."ssh/ssh_host_rsa_key".source =
      "/nix/persist/etc/ssh/ssh_host_rsa_key";
    environment.etc."ssh/ssh_host_rsa_key.pub".source =
      "/nix/persist/etc/ssh/ssh_host_rsa_key.pub";
    environment.etc."ssh/ssh_host_ed25519_key".source =
      "/nix/persist/etc/ssh/ssh_host_ed25519_key";
    environment.etc."ssh/ssh_host_ed25519_key.pub".source =
      "/nix/persist/etc/ssh/ssh_host_ed25519_key.pub";
    environment.etc."machine-id".source = "/nix/persist/etc/machine-id";
  };
}

This should give us the base that we need to build the system image for AWS.
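
We'll flip this on in the image configuration in the next section; for reference, enabling it from any config that imports common looks like this (noexec is optional and off by default):


# anywhere in a config that imports ./common

xeserv.paranoid.enable = true;

# and optionally, if you want the stricter mount flags too:
xeserv.paranoid.noexec = true;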

Building The Image

As I mentioned earlier, we need to build a system closure before we can build the disk image. NixOS normally hides a lot of this magic from you, but we're going to scrape away all that magic and do this by hand. In your paranix-configs folder, create a folder named images. This creatively named folder is where we will store our NixOS image generation scripts.

Copy this code into build.nix. This will tell Nix to create a new system closure from the configuration in images/configuration.nix:


# images/build.nix

let
  sources = import ../nix/sources.nix;
  pkgs = import sources.nixpkgs { };
  sys = (import "${sources.nixpkgs}/nixos/lib/eval-config.nix" {
    system = "x86_64-linux";
    modules = [ ./configuration.nix ];
  });
in sys.config.system.build.toplevel

And in images/configuration.nix add this skeleton config:


# images/configuration.nix

{ config, pkgs, lib, modulesPath, ... }:

{
  imports = [ ../common (modulesPath + "/virtualisation/amazon-image.nix") ];
  
  xeserv.paranoid.enable = true;
}

Mara is hacker
<Mara> You can adapt this to other clouds by changing what module is imported. See the list of available modules here.
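
For example, a Google Cloud variant of the same file would just swap the imported module (the module name here is an assumption based on the nixpkgs virtualisation directory; check the linked list for your cloud of choice):


# images/configuration.nix (GCE variant, sketch)

{ config, pkgs, lib, modulesPath, ... }:

{
  imports = [ ../common (modulesPath + "/virtualisation/google-compute-image.nix") ];

  xeserv.paranoid.enable = true;
}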

Then you can kick off the build with nix-build:


$ nix-build build.nix

It will take a moment to assemble everything together and when you are done you should have an entire functional system closure in ./result:


$ cat ./result/nixos-version
21.05pre-git

Mara is hacker
<Mara> It has pre-git here because we're using a pinned commit of the nixos-21.05 git branch. Release channels don't have that suffix there.
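
If you want to see exactly which commit you're pinned to, it's recorded in nix/sources.json (niv show prints the same information); jq makes it easy to pull out:


$ jq '.nixpkgs | {branch, rev}' nix/sources.json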

From here we need to put this base system closure into a disk image for AWS. This process is a bit more involved, but here are the high level things needed to make a disk image for NixOS (or any Linux system for that matter):

  • A virtual hard drive to install the OS to
  • A partition mapping on the virtual hard drive
  • Essential system files copied over
  • A boot configuration

We can model this using a Nix function. This function would need to take in the system config and some metadata about the kind of image to make, then build the image and return the result. I've made this available here so you can grab it into your config folder like this:


$ wget -O make-image.nix https://tulpa.dev/cadey/paranix-configs/raw/branch/main/images/make-image.nix
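
If you're curious what's inside, the rough shape of a function like that is sketched below. This is illustrative only and leans on nixpkgs' nixos/lib/make-disk-image.nix; it is not the exact contents of the file you just downloaded, so treat the argument names and defaults as assumptions:


# sketch of a make-image.nix-style function (illustrative, not the downloaded file)

{ config, lib, pkgs, format ? "vpc", diskSize ? 8192 }:

import "${pkgs.path}/nixos/lib/make-disk-image.nix" {
  inherit config lib pkgs format diskSize;
  partitionTableType = "legacy";
}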

Then we can edit build.nix to look like this:


# images/build.nix

let
  sources = import ../nix/sources.nix;
  pkgs = import sources.nixpkgs { };
  config = (import "${sources.nixpkgs}/nixos/lib/eval-config.nix" {
    system = "x86_64-linux";
    modules = [ ./configuration.nix ];
  });

in import ./make-image.nix {
  inherit (config) config pkgs;
  inherit (config.pkgs) lib;
  format = "vpc"; # change this for other clouds
}

Then you can build the AWS image with nix-build:


$ nix-build build.nix

This will emit the AWS disk image in ./result:


$ ls ./result/
nixos.vhd

Mara is hacker
<Mara> AWS uses Microsoft Virtual PC hard disk files as the preferred input for their vmimport service. This is probably a legacy thing.

Terraforming

Terraform is not my favorite tool on the planet, however it is quite useful for beating AWS and other clouds into shape. We will be using Terraform to do the following:

  • Create an S3 bucket to use for storing Terraform states in The Cloud
  • Create an S3 bucket for the AMI base images
  • Create an IAM role for importing AMIs
  • Create an IAM role policy for allowing the AMI importer service to work
  • Upload the image to S3
  • Import the image from S3 as an EBS snapshot
  • Create an AMI from that EBS snapshot
  • Create an example t3.micro virtual machine
  • Deploy an example service config for nginx that does nothing

This sounds like a lot, but it's really not as much as it sounds. A lot of this is boilerplate. The cost associated with these steps should be minimal.

In the root of your paranix-configs folder, make a folder called terraform, as this is where our terraform configuration will live:


$ mkdir terraform
$ cd terraform

Then you can proceed to the following steps.

S3 State Bucket

In that folder, make a folder called bootstrap; this configuration will contain the base S3 bucket config for the Terraform state:


$ mkdir bootstrap
$ cd bootstrap

Copy this terraform code into main.tf:


# terraform/bootstrap/main.tf

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "bucket" {
  bucket = "xeserv-tf-state-paranix"
  acl    = "private"

  tags = {
    Name = "Terraform State"
  }
}

Then run terraform init to set up the terraform environment:


$ terraform init

It will download the AWS provider and run a few tests on your config to make sure things are correct. Once this is done, you can run terraform plan:


$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions
are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.bucket will be created
  + resource "aws_s3_bucket" "bucket" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "xeserv-tf-state-paranoid"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Name" = "Terraform State"
        }
      + tags_all                    = {
          + "Name" = "Terraform State"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take
exactly these actions if you run "terraform apply" now.

Terraform is very pedantic about what the state of the world is. In this case nothing in the associated state already exists, so it is saying that it needs to create the S3 bucket that we will use for our Terraform states in the future. We can apply this with terraform apply:


$ terraform apply
<the same thing as the plan>

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

If you want to perform these actions, follow the instructions.


  Enter a value: yes
  
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creation complete after 3s [id=xeserv-tf-state-paranoid]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Now that we have the state bucket, let's use it to create our AMI.

Creating the AMI

In your terraform folder, create a new folder called aws_image. This is where the terraform configuration for uploading our disk image to AWS will live.


$ mkdir aws_image
$ cd aws_image

Mara is hacker
<Mara> This part of the config is modified from the instructions on how to create an AMI from a locally created VM image here.

Make a file called main.tf and we'll add to it as we go through this section.

In main.tf, add the following boilerplate to make the AWS provider use the terraform state bucket we just created:


# terraform/aws_image/main.tf

provider "aws" {
  region = "us-east-1"
}

terraform {
  backend "s3" {
    bucket = "xeserv-tf-state-paranoid"
    key    = "aws_image"
    region = "us-east-1"
  }
}

This will tell the AWS provider to use the S3 bucket we just made, but also to put the terraform state in a key called aws_image. We will reuse this state later for making our printer facts host. After we do this, we should run terraform init to make sure that the state bucket is working:


$ terraform init
Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.53.0...
- Installed hashicorp/aws v3.53.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Now let's create the S3 bucket that we will put our NixOS image in:


# terraform/aws_image/main.tf

resource "aws_s3_bucket" "images" {
  bucket = "xeserv-ami-images"
  acl    = "private"

  tags = {
    Name = "Xeserv AMI Images"
  }
}

Then let's create the IAM role and policy that allows the VM importer service to import objects from S3 into EBS snapshots that we use to create an AMI.

In the aws_image folder, copy this trust policy statement into vmie-trust-policy.json:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "vmie.amazonaws.com" },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals":{
                    "sts:Externalid": "vmimport"
                }
            }
        }
    ]
}

This will be used to give the VM import service permission to act against AWS on your behalf.

In main.tf, add the following role and policy to the configuration:


# terraform/aws_image/main.tf

resource "aws_iam_role" "vmimport" {
  name               = "vmimport"
  assume_role_policy = file("./vmie-trust-policy.json")
}

resource "aws_iam_role_policy" "vmimport_policy" {
  name   = "vmimport"
  role   = aws_iam_role.vmimport.id
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "${aws_s3_bucket.images.arn}",
        "${aws_s3_bucket.images.arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:GetBucketAcl"
      ],
      "Resource": [
        "${aws_s3_bucket.images.arn}",
        "${aws_s3_bucket.images.arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:ModifySnapshotAttribute",
        "ec2:CopySnapshot",
        "ec2:RegisterImage",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

Mara is hmm
<Mara> Why do you define the trust policy in an external file but you have the role policy defined inline?

Cadey is enby
<Cadey> Look at the Resources defined in the Statement list. The S3 bucket in question needs to be defined explicitly by its ARN, and in order to give the vmimport service the minimal possible permissions, we need to template out that policy JSON file, and doing this inline in Terraform is a lot simpler.
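
That said, if you'd rather keep the role policy in its own file too, Terraform's templatefile() can interpolate the bucket ARN for you. This is a sketch using a hypothetical vmimport-role-policy.json.tpl; it is not used anywhere later in this post:


# hypothetical alternative: the same policy rendered from a template file

resource "aws_iam_role_policy" "vmimport_policy" {
  name   = "vmimport"
  role   = aws_iam_role.vmimport.id
  policy = templatefile("./vmimport-role-policy.json.tpl", {
    bucket_arn = aws_s3_bucket.images.arn
  })
}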

And now we should run terraform plan and terraform apply to make sure everything works okay:


$ terraform plan
<omitted>
Plan: 3 to add, 0 to change, 0 to destroy.

$ terraform apply
<omitted>
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Perfect! Now we need to upload the image to S3. You are going to have to build the NixOS image outside of terraform, so run nix-build:


$ nix-build ../../images/build.nix

This should largely be a no-op and will put the correct result symlink in your aws_image folder so terraform can read the image metadata.

Mara is hacker
<Mara> Practically you would want to make a script to run terraform, and in the script for this folder you would probably want to add that nix-build command to that script. However this is trivial and is thus an exercise for the reader.
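
A minimal sketch of what such a wrapper could look like (the filename is hypothetical and the paths assume the folder layout used in this post):


#!/usr/bin/env bash
# terraform/aws_image/apply.sh (hypothetical helper script)
set -euo pipefail

# rebuild the NixOS disk image so ./result points at a fresh nixos.vhd
nix-build ../../images/build.nix

terraform plan -out plan.tfplan
terraform apply plan.tfplan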

In your main.tf file, add this:


# terraform/aws_image/main.tf

resource "aws_s3_bucket_object" "nixos_21_05" {
  bucket = aws_s3_bucket.images.bucket
  key    = "nixos-21.05-paranoid.vhd"
  
  source = "./result/nixos.vhd"
  etag   = filemd5("./result/nixos.vhd")
}

Now we need to create the EBS snapshot. Copy this into your main.tf:


# terraform/aws_image/main.tf

resource "aws_ebs_snapshot_import" "nixos_21_05" {
  disk_container {
    format = "VHD"
    user_bucket {
      s3_bucket = aws_s3_bucket.images.bucket
      s3_key    = aws_s3_bucket_object.nixos_21_05.key
    }
  }

  role_name = aws_iam_role.vmimport.name

  tags = {
    Name = "NixOS-21.05"
  }
}

This step may take a while (more than 5 minutes), so let's run terraform plan and then terraform apply:


$ terraform plan
Plan: 2 to add, 0 to change, 0 to destroy.

$ terraform apply
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Finally you can create the AMI and export the AMI ID like this:


# terraform/aws_image/main.tf

resource "aws_ami" "nixos_21_05" {
  name                = "nixos_21_05"
  architecture        = "x86_64"
  virtualization_type = "hvm"
  root_device_name    = "/dev/xvda"
  ena_support         = true
  sriov_net_support   = "simple"

  ebs_block_device {
    device_name           = "/dev/xvda"
    snapshot_id           = aws_ebs_snapshot_import.nixos_21_05.id
    volume_size           = 40 # you can go as low as 8 GB, but 40 is a nice number
    delete_on_termination = true
    volume_type           = "gp3"
  }
}

output "nixos_21_05_ami" {
  value = aws_ami.nixos_21_05.id
}

Then run terraform plan and terraform apply:


$ terraform plan
Plan: 1 to add, 0 to change, 0 to destroy.

$ terraform apply
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

nixos_21_05_ami = "ami-0f43f74cbbdd1ddef"

Et voila! We have a NixOS base image that we can use for production workloads. Let's use it to create a NixOS server running the printer facts service.

Mara is hacker
<Mara> KEEP IN MIND that this configuration means that every time you rebuild and upload this image you potentially risk breaking production machines. Don't rebuild this config more than once every 6 months (or when you bump to a new release of NixOS) at most.

Using the AMI

Let's make a new folder in the terraform folder called printerfacts. In this folder we're going to set up a new terraform state that imports the AMI state we just made and then we will use that AMI to run the printer facts service.


$ mkdir printerfacts
$ cd printerfacts

In main.tf, copy the following:


# terraform/printerfacts/main.tf

provider "aws" {
  region = "us-east-1"
}

terraform {
  backend "s3" {
    bucket = "xeserv-tf-state-paranoid"
    key    = "printerfacts"
    region = "us-east-1"
  }
}

Now you can terraform init as normal to ensure everything is working as we expect:


$ terraform init
Terraform has been successfully initialized!

Then let's add the aws_image state as a data source. This will let us reference the AMI ID from the remote state file instead of having to build it from scratch every time.


# terraform/printerfacts/main.tf

data "terraform_remote_state" "aws_image" {
  backend = "s3"
  
  config = {
    bucket = "xeserv-tf-state-paranoid"
    key    = "aws_image"
    region = "us-east-1"
  }
}

AWS wants us to create a keypair for the instance, so to make AWS happy we will make a keypair like this:


# terraform/printerfacts/main.tf

resource "tls_private_key" "state_ssh_key" {
  algorithm = "RSA"
}

resource "aws_key_pair" "generated_key" {
  key_name   = "generated-key-${sha256(tls_private_key.state_ssh_key.public_key_openssh)}"
  public_key = tls_private_key.state_ssh_key.public_key_openssh
}

Mara is hacker
<Mara> You will need to terraform init after this step.

Now we need to create a security group for this instance. This security group should do the following:

  • Allow port 22 (ssh) ingress
  • Allow port 80 (http) ingress
  • Allow ICMP (ping) ingress
  • Allow ICMP (ping) egress
  • Allow TCP egress on all ports to everywhere
  • Allow UDP egress on all ports to everywhere

You can do this with this terraform fragment:


# terraform/printerfacts/main.tf

resource "aws_security_group" "printerfacts" {
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Then we can create the AWS instance using our AMI, keypair and security group:


# terraform/printerfacts/main.tf

resource "aws_instance" "printerfacts" {
  ami           = data.terraform_remote_state.aws_image.outputs.nixos_21_05_ami
  instance_type = "t3.micro"
  security_groups = [
    aws_security_group.printerfacts.name,
  ]
  key_name = aws_key_pair.generated_key.key_name

  root_block_device {
    volume_size = 40 # GiB
  }

  tags = {
    Name = "xe-printerfacts"
  }
}

And then we can create a NixOS deploy config with the fantastic deploy_nixos module from Tweag. Copy this into your main.tf:


# terraform/printerfacts/main.tf

module "deploy_printerfacts" {
  source          = "git::https://github.com/Xe/terraform-nixos.git//deploy_nixos?ref=1b49f2c6b4e7537cca6dd6d7b530037ea81e8268"
  nixos_config    = "${path.module}/printerfacts.nix"
  hermetic        = true
  target_user     = "root"
  target_host     = aws_instance.printerfacts.public_ip
  ssh_private_key = tls_private_key.state_ssh_key.private_key_pem
  ssh_agent       = false
  build_on_target = false
}

Mara is hacker
<Mara> You will need to terraform init again after this step.

Now let's make the printerfacts.nix host definition. We're going to start with a simple config. This will start nginx in a mostly broken but still semi-functional state on port 80.


# terraform/printerfacts/printerfacts.nix

let
  sources = import ../../nix/sources.nix;
  pkgs = import sources.nixpkgs { };
  system = "x86_64-linux";

  configuration = { config, lib, pkgs, ... }: {
    imports = [
      ../../common
      "${sources.nixpkgs}/nixos/modules/virtualisation/amazon-image.nix"
    ];

    networking.firewall.allowedTCPPorts = [ 22 80 ];

    xeserv.paranoid.enable = true;

    services.nginx.enable = true;
  };
in import "${sources.nixpkgs}/nixos" { inherit system configuration; }

Mara is hmm
<Mara> What is up with that config? It doesn't look like a normal NixOS module at all.

Cadey is enby
<Cadey> That is a NixOS config that will use the version of nixpkgs pinned with niv in order to build everything. It won't work everywhere; however, the hermetic flag in the deploy_nixos Terraform module will make this work.

Now let's deploy all this and see if it works!


$ terraform init

$ terraform plan

$ terraform apply

Printerfacts Install

Now we can add the printerfacts service to the VM. First, add the printerfacts repo to niv:


$ niv add git -n printerfacts --repo https://tulpa.dev/cadey/printerfacts
Done: Adding package printerfacts

Then create a service definition for it in your common folder. First create the folder common/services:


$ cd ../..
$ cd common
$ mkdir services
$ cd services

Then create a default.nix file with the following contents:


# common/services/default.nix

{ ... }:

{
  imports = [ ./printerfacts.nix ];
}

And create ./printerfacts.nix with this service boilerplate:


# common/services/printerfacts.nix

{ config, pkgs, lib, ... }:

with lib;

let
  sources = import ../../nix/sources.nix;
  pkg = pkgs.callPackage sources.printerfacts { };
  cfg = config.xeserv.services.printerfacts;
in
{
  options.xeserv.services.printerfacts = {
    enable = mkEnableOption "enable Printerfacts";
    useACME = mkEnableOption "enable ACME certs";
    
    domain = mkOption {
      type = types.str;
      default = "printerfacts.akua";
      example = "printerfacts.cetacean.club";
      description =
        "The domain name that nginx should check against for HTTP hostnames";
    };
    
    port = mkOption {
      type = types.int;
      default = 28318;
      example = 9001;
      description =
        "The port number printerfacts should listen on for HTTP traffic";
    };
  };
  
  config = mkIf cfg.enable {
    systemd.services.printerfacts = {
      wantedBy = [ "multi-user.target" ];
      
      script = ''
        export PORT=${toString cfg.port}
        export DOMAIN=${toString cfg.domain}
        export RUST_LOG=info
        exec ${pkg}/bin/printerfacts
      '';

      serviceConfig = {
        Restart = "always";
        RestartSec = "30s";
        WorkingDirectory = "${pkg}";
        RuntimeDirectory = "printerfacts";
        RuntimeDirectoryMode = "0755";
        StateDirectory = "printerfacts";
        StateDirectoryMode = "0750";
        CacheDirectory = "printerfacts";
        CacheDirectoryMode = "0750";
        DynamicUser = "yes";
      };
    };
    
    services.nginx.virtualHosts."${cfg.domain}" = {
      locations."/" = {
        proxyPass = "http://127.0.0.1:${toString cfg.port}";
        proxyWebsockets = true;
      };
      enableACME = cfg.useACME;
    };
  };
}

Then wire up common/default.nix with this:


# common/default.nix

imports = [ ./paranoid.nix ./users.nix ./services ];

Then you can add this to your machine config in the terraform directory:


# terraform/printerfacts/printerfacts.nix

configuration = { config, lib, pkgs, ... }: {
  # ...
 
  xeserv.services.printerfacts = {
    enable = true;
    domain = "3.237.88.228"; # replace this with the IP of your AWS instance
  };
};

Then terraform plan and terraform apply:


$ terraform plan

$ terraform apply

And finally get yourself a hard-earned printer fact:


$ curl http://3.237.88.228/fact
In 1987 printers overtook scanners as the number one pet in America.


We have gone from nothing to a fully production-ready NixOS deployment, including a custom AMI pinned to an exact version of NixOS and an additional service added from its git repo. This setup can be used by multiple people while staying pinned to that exact NixOS release. Terraform does all of the NixOS building and keeps things up to date, meaning that your infrastructure is all configured using the same workflow.

This post outlines boilerplate and templates. I'm sure that you could easily adapt these templates for other things as well. This took at least a week of research, banging my head against the wall, and many failed attempts to implement. Many thanks to Graham Christensen for unblocking me on this and pulling me back from the chasm a few times. If you need to store persistent data, make sure it's being put in /var/lib so that it isn't wiped on reboot.
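
For example, systemd's StateDirectory (as used in the printerfacts unit above) lands under /var/lib, which the impermanence setup persists to /nix/persist, so a hypothetical service fragment like this keeps its data across reboots:


# hypothetical fragment: state under /var/lib survives reboots in this setup

systemd.services.somesvc.serviceConfig = {
  DynamicUser = "yes";
  StateDirectory = "somesvc";      # systemd creates /var/lib/somesvc for the service
  StateDirectoryMode = "0750";
};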

Hope this helps your prod NixOS adventures!