

Henri Bergius

Hacker and an occasional adventurer. Author of Create.js and NoFlo, founder of Flowhub UG. Decoupling software, one piece at a time. This blog tells the story of that.


Cruising sailboat electronics setup with Signal K

Permalink - Posted on 2020-03-27 00:00

I haven’t mentioned this on the blog earlier, but at the end of 2018 we bought a small cruising sailboat. After some looking, we went with a Van de Stadt-designed Oceaan 25, a Dutch pocket cruiser from the early 1980s. S/Y Curiosity is an affordable and comfortable boat for cruising with 2-4 people, but she also needed major maintenance work.

Curiosity sailing on Havel with Royal Louise

The refit has so far included osmosis repair, some fixes to the standing rigging, engine maintenance, and many structural improvements. But this post will focus on the electronics and navigation aspects of the project.

12V power

When we got it, the boat’s electrical setup was quite barebones. There was a small lead-acid battery, charged only when running the outboard. Light control was pretty much all-or-nothing: either we were running both the interior and navigation lights, or none at all. Everything was wired with 80s-spec components, using energy-inefficient lightbulbs.

Looking at the state of the setup, it was also unclear when the electrics had last been used for anything other than starting the engine.

Before going further with the electronics setup, all of this would have to be rebuilt. We made a plan, and scheduled two weekends in summer 2019 for rewiring and upgrading the electricity setup of the boat.

The first step was to test all existing wiring with a multimeter, and to label and document all of it. Surprisingly, there were only a couple of bad connections from the main distribution panel to consumers, so for the most part we decided to reuse that wiring, just with a modern terminal block setup.

All wires labeled and being reconnected

For the most part we used a Dymo label printer, with the labels covered with transparent heat shrink.

We replaced the old main control panel with a modern one with the capability to power different parts of the boat separately, and added some 12V and USB sockets next to it.

New battery charger and voltmeter

All internal lighting was replaced with energy-efficient LEDs, and we added the option of using red lights all through the cabin for preserving night vision. A car charger was added to the system for easier battery charging while in harbour.

Next steps for power

With this, we had a workable lighting and power setup for overnight sailing. The next obvious step will be to increase the range of our boat.

For that, we’re adding a solar panel. We already have most parts for the setup, but are still waiting for the customized NOA mounting hardware to arrive. And of course the current COVID-19 curfews need to lift before we can install it.

Until we have actual data from our Victron MPPT charge controller, I’ve run some simulations using NASA’s insolation data for Berlin to estimate how much the panel ought to increase our cruising range.

Range estimates for Curiosity solar setup
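
The idea behind such a simulation is simple enough to sketch in a few lines of JavaScript. The numbers below are illustrative placeholders, not the actual figures from the NASA data or our power budget:

// Rough back-of-the-envelope estimate; all numbers are assumptions for illustration.
const panelWatts = 100;          // nominal panel rating
const sunHoursPerDay = 4.5;      // insolation-equivalent hours for the month
const systemEfficiency = 0.7;    // losses from panel angle, shading, and charging
const dailyConsumptionWh = 200;  // instruments, lights, Raspberry Pi

const dailyHarvestWh = panelWatts * sunHoursPerDay * systemEfficiency;
const netWhPerDay = dailyHarvestWh - dailyConsumptionWh;
console.log(`Harvest ${dailyHarvestWh} Wh/day, net ${netWhPerDay} Wh/day`);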

Navigation

The basis for boat navigation is still the combination of a clock, a compass, and a paper chart (as well as a sextant on the open ocean). However, most modern cruising boats utilize some electrical tools to aid the process of running the boat. These typically come in the form of a chartplotter and a set of sensors to get things like GPS position, speed, and water depth.

Commercial marine navigation equipment is a bit like computer networking in the 90s - everything is expensive, and you pretty much have to buy the whole kit from a single vendor to make it work. Standards like NMEA 0183 exist, but “embrace and extend” is typical vendor behaviour.

Signal K

Being open source hackerspace people, that was obviously not the way we wanted to do things. Instead of getting locked into an expensive proprietary single-vendor marine instrumentation setup, we decided to roll our own using off-the-shelf IoT components. To serve as the heart of the system, we picked Signal K.

Signal K is first of all a specification on how marine instruments can exchange data. It also has an open source implementation in Node.js. This allows piping in data from all of the relevant marine data buses, as well as setting up custom data providers. Signal K then harmonizes the data, and makes it available both via modern web APIs, and in traditional NMEA formats. This enables instruments like chartplotters also to utilize the Signal K enriched data.
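
As a rough sketch of what the web API side looks like, here is how you could read the boat’s position from the Signal K Node.js server’s REST interface. The host name is an assumption; the port and path follow the server defaults:

// Read the vessel's position from a Signal K server (host name assumed).
const fetch = require('node-fetch');

async function getPosition() {
  const url = 'http://signalk.local:3000/signalk/v1/api/vessels/self/navigation/position';
  const res = await fetch(url);
  const position = await res.json();
  console.log(`Lat ${position.value.latitude}, Lon ${position.value.longitude}`);
}

getPosition().catch((err) => console.error(err));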

We’re running Signal K on a Raspberry Pi 3B+ powered by the boat battery. With a GPS dongle, this was already enough to give some basic navigation capabilities like charts and anchor watch. We also added a WiFi hotspot with an LTE uplink to the boat.

Tracking some basic sailing exercises via Signal K

To make the system robust, installation is automated via Ansible, and easy to reproduce. Our boat GitHub repo also has the needed functionality to run a clone of our boat’s setup on our laptops via Docker, which is great when developing new features.

Signal K has a very active developer community, which has been great for figuring out how to extend the capabilities of our system.


We’re using regular tablets for navigation. The main chartplotter is a cheap old waterproof Samsung Galaxy Tab Active 8.0 tablet that can both show the Freeboard web-based chartplotter with OpenSeaMap charts and run the Navionics Boating app to display commercial charts. Navionics is also able to receive some Signal K data over the boat WiFi to show things like AIS targets, and to utilize the boat GPS.

Samsung T360 with Curiosity logo

As a backup we have our personal smartphones and tablets.

Anchor watch with Freeboard and a tablet

Inside the cabin we also have an e-ink screen showing the primary statistics relevant to the current boat state.

e-ink dashboard showing boat statistics

Environmental sensing

Monitoring air pressure changes is important for dealing with the weather. For this, we added a cheap barometer-temperature-humidity sensor module wired to the Raspberry Pi, driven with the Signal K BME280 plugin. With this we were able to get all of this information from our cabin into Signal K.

However, there was more environmental information we wanted to get. For instance, the outdoor temperature, the humidity in our foul weather gear locker, and the temperature of our icebox. For these we found the Ruuvi tags produced by a Finnish startup. These are small weatherproofed Bluetooth environmental sensors that can run for years with a coin cell battery.

Ruuvi tags for Curiosity with handy pouches

With Ruuvi tags and the Signal K Ruuvi tag plugin we were able to bring a rich set of environmental data from all around the boat into our dashboards.

Anchor watch

Like every cruising boat, we spend quite a lot of nights at anchor. One important safety measure with a shorthanded crew is to run an automated anchor watch. This monitors the boat’s distance to the anchor, and raises an alarm if we start dragging.

For this one, we’re using the Signal K anchor alarm plugin. We added a Bluetooth speaker to make the alarms audible.

To make starting and stopping the anchor watch easier, I utilized a simple Bluetooth remote camera shutter button together with some scripts. This way the person dropping the anchor can also start the anchor watch immediately from the bow.

Camera shutter button for starting anchor watch


AIS

Automatic Identification System (AIS) is a radio protocol used by most bigger vessels to tell others about their course and position, and it can be used for collision avoidance. Having an active transponder on a small boat like Curiosity is a bit expensive, but we decided we’d at least want to see commercial traffic in our chartplotter in order to navigate safely.

For this we bought an RTL-SDR USB stick that can tune into the AIS frequency, and with the rtl_ais software, receive and forward all AIS data into Signal K.

Tracking AIS targets in Freeboard

This setup is still quite new, so we haven’t been able to test it live yet. But it should allow us to see all nearby bigger ships in our chartplotter in realtime, assuming that we have a good-enough antenna.

Putting it all together

Altogether this is quite a lot of hardware. To house all of it, we built a custom backing plate with 3D-printed brackets to hold the various components. The whole setup is called the Voronoi-1 onboard computer, and it should be easy to duplicate on any small sailing vessel.

The Voronoi-1 onboard computer

The total cost so far for the full boat navigation setup has been around 600€, which is less than a commercial chartplotter alone would cost. The system we have is easy both to extend and to fix on the go, and it gives us a set of capabilities that would normally require a whole suite of proprietary parts to put together.

Next steps for navigation setup

We of course have plenty of ideas on what to do next to improve the navigation setup. Here are some projects we’ll likely tackle over the coming year:

  • Adding a timeseries database and some data visualization
  • 9 degrees of freedom sensor to track the compass course, as well as boat heel
  • Instrumenting our outboard motor to get RPMs into Signal K and track the engine running time
  • Wind sensor, either open source or commercial

If you have ideas for suitable components or projects, please get in touch!

Source code

Huge thanks to both the Signal K and Hackerfleet communities and the Curiosity crew for making all this happen.

Now we just wait for the curfews to lift so that we can get back to sailing!

Curiosity Crew Badge

Building c-base @ 35C3 with Flowhub

Permalink - Posted on 2019-01-05 00:00

The 35th Chaos Communication Congress is now over, and it is time to write about how we built the software side of the c-base assembly there.

c-base at 35C3

The Chaos Communication Congress is a major fixture of the European security and free software scene, with thousands of attendees. As always, the “mother of all hackerspaces” had a big presence there, with a custom booth that we spent nearly two weeks constructing.

the c-base assembly at 35C3

This year’s theme was “Refreshing Memories”, and accordingly we brought various elements of the history of the c-base space station to the event. On the hardware side we had things like a scale model of the c-base antenna, as well as vintage arcade machines and various artifacts from over the years.

With software, we utilized the existing IoT infrastructure at c-base to control lights and sound, and to drive videos and other information to a set of information displays. All of it, of course, powered by Flowhub.

This was quite a full-stack development effort, involving microcontroller firmware programming, server-side NoFlo and MsgFlo development, and front-end infoscreen web design. We also did quite a bit of devopsing with Travis CI, Docker, and docker-compose.

Local MsgFlo setup

The first step in bringing c-base’s IoT setup to the Congress was to prepare a “portable” version of the environment: an MQTT broker, MsgFlo, some components, and a graph with any on-premise c-base hardware or service dependencies removed. As this was for a CCC event, we decided to call it c3-flo (compared to the c-flo that we run at c-base).

We already have a quite nice setup where our various systems get built and tested on Travis, and uploaded to Docker Hub’s cbase namespace. Some repositories weren’t yet integrated, and so the first step was to Dockerize them.

To make the local setup simple to manage, we decided to go with a single docker-compose environment that would start all systems needed. This would be easy to run on any x86 machine, and provide us with a quite comprehensive set of features from the IoT parts to NASA’s Open MCT dashboard.

Of course we kept adding to the system throughout 35C3, but in the end the graph looked like the following:

c-base at 35C3 as seen in Flowhub

WiFi setup

To make our setup more portable, we decided to bring a local instance of the “c-base botnet” WiFi to the Congress. This way all of our IoT devices could work at 35C3 with the exact same firmware and networking setup as they do at c-base.

Normally Congress doesn’t recommend running your own access point, but there are guidelines available on how to do it properly if needed. As it happens, out of this year’s 558 unofficial access points, the c-base one was the only one conforming to the guidelines (commentary around the 25 minute mark).

WiFi numbers from 35C3

Info displays

Like any station, c-base has a set of info screens showing various announcements, timelines, and statistics. These are built with Raspberry Pi 3s running Chrome in Kiosk Mode, with a single-page webapp that connects to our MsgFlo infrastructure over WebSockets with msgflo-browser.

Each screen has a customized rotation of pages to show, and via MQTT we can send URLs to announce events like members arriving at c-base or a space launch livestream.
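
Here is a sketch of how such an announcement could be sent with the mqtt.js client. The broker address and topic name are placeholders rather than the actual c-base topics:

// Publish an announcement URL for the info screens (broker and topic are placeholders).
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://mqtt.example.net');
client.on('connect', () => {
  client.publish('infoscreens/announce', 'https://example.net/launch-stream', () => client.end());
});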

c-base info screen showing an announcement

For 35C3 we built a new set of pages tailored for the Congress experience:

  • Tweaked version of the normal c-base events view showing current and upcoming talks
  • Video player to rotate various videos from the history of c-base
  • Photo slideshow with a nice set of pictures from c-base
  • Countdown screen for some event (c-base crash, teardown of the assembly at the end of Congress)

Crashing c-base

The highlight of the whole assembly was a re-enactment of the c-base crash from billions of years ago. Triggered by a dropped bottle of space soda, this was an experience incorporating video, lights, and audio that we ran several times every day of the conference.

Crash Alarm!

The c-base crash animation was managed by a NoFlo graph integrated into our MsgFlo setup with the standard noflo-runtime-msgflo tool. With this we could trigger the “crash” with an MQTT message (sent by a physical button), and run a timed sequence of actions on lights, the sound system, and our info screens.

Button for triggering the crash of c-base

Timeline manager

There were some new components that we had to build for this purpose. The most important was a Timeline component that was upstreamed as part of the noflo-tween animation library.

Timeline component from noflo-tween

With this you can define a multi-tracked timeline as JSON or YAML, with actions triggered on each track on their appropriate second. With MsgFlo this meant we could send timed commands to different devices and create a coordinated experience.
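
To give an idea of the shape of such a definition, here is a sketch of a two-track timeline. The exact schema expected by the noflo-tween Timeline component may differ; the track names and payloads are invented for illustration:

// Hypothetical multi-track timeline definition; check noflo-tween for the real schema.
const timeline = {
  duration: 120, // seconds
  tracks: [
    {
      name: 'infoscreens',
      events: [
        { time: 0, payload: { url: 'https://example.net/crash-video' } },
        { time: 45, payload: { url: 'https://example.net/countdown' } },
      ],
    },
    {
      name: 'lights',
      events: [
        { time: 45, payload: { mode: 'red-alert' } },
        { time: 115, payload: { mode: 'dark' } },
      ],
    },
  ],
};

module.exports = timeline;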

For example, our animation started by showing a short video on all info screens. When the bottle fell in the video, we triggered the appropriate soundtrack, and switched the lights through various animation modes. After the video ended, we switched to a “countdown to crash” screen, and turned all lights to a red alert mode.

After the crash happened, everything went dark for a few seconds, before the c-base assembly was returned into its normal state.

Controlling McLighting

All LED strips we used at 35C3 were run using the McLighting firmware. By default it allows switching between different light modes with a simple WebSocket API.
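
For a rough idea of what talking to McLighting looks like from Node.js, here is a sketch using the ws package. The strip address and the command string are placeholders; check the McLighting documentation for the commands your firmware version actually accepts:

// Send a command to a McLighting strip over WebSocket (address and command are placeholders).
const WebSocket = require('ws');

const strip = new WebSocket('ws://192.168.1.50:81');
strip.on('open', () => {
  strip.send('#ff0000'); // e.g. set a solid colour
  setTimeout(() => strip.close(), 500);
});
strip.on('error', (err) => console.error(err));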

For our requirements, we wanted the capability to send new commands to the lights with minimal latency, and to be able to restore the lights, at the end, to whatever mode they had before the crash started.

noflo-mclighting in action

The component is available in noflo-mclighting. The only things you need are to run the NoFlo graph in the same network as the LED strips, and to send the WebSocket addresses of your LED strips to the component. After that you can control them with normal NoFlo packets.


The whole setup took a couple of days to get right, especially regarding timings and tweaking the light modes. But it was great! You can see a video of it below:

And if you’re interested in experimenting with this stuff, check out the “portable c-base IoT setup” at https://github.com/c-base/c3-flo.

Managing a developer shell with Docker

Permalink - Posted on 2018-04-19 00:00

When I’m not in Flowhub-land, I’m used to developing software in a quite customized command-line-based development environment. Like for many others, the cornerstones of this for me are vim and tmux.

As customization increases, it becomes important to have a way to manage that and distribute it across the different computers. For years, I’ve used a dotfiles repository on GitHub together with GNU Stow for this.

However, this still means I have to install all the software and tools before I can have my environment up and running.

Using Docker

Docker is a tool for building and running software in a containerized fashion. Recently Tiago gave me the inspiration to use Docker not only for distributing production software, but also for actually running my development environment.

Taking ideas from his setup, I built a reusable developer shell container on top of my existing dotfiles.

With this, I only need Docker installed on a machine, and then I’m two commands away from having my normal development environment:

$ docker volume create workstation
$ docker run -v ~/Projects:/projects -v workstation:/root -v ~/.ssh:/keys --name workstation --rm -it bergie/shell

Here’s how it looks in action:

Working on NoFlo inside Docker shell

Once I update my Docker setup (for example to install or upgrade some tool), I can get the latest version on a machine with:

$ docker pull bergie/shell

At least in theory this should give me a fully identical working environment regardless of the host machine. A Linux VPS, a MacBook, or a Windows machine should all be able to run this. And soon, this should also work out of the box on Chromebooks.

Setting this up

The basics are pretty simple. I already had a repository for my dotfiles, so I only needed to write a Dockerfile to install and set up all my software.

To make things even easier, I configured Travis so that every time I push some change to the dotfiles repository, it will create and publish a new container image.

Further development ideas

So far this setup seems to work pretty well. However, here are some ideas for further improvements:

  • ARM build: Sometimes I need to work on Raspberry Pis. It might be nice to cross-compile an ARM version of the same setup
  • Key management: Currently I create new SSH keys for each host machine, and then upload them to the relevant places. With this setup I could use a USB stick, or maybe even a Yubikey to manage them
  • Application authentication: Since the Docker image is public, it doesn’t come with any secrets built in. This means I still need to authenticate with tools like NPM and Travis. It might be interesting to manage these together with my SSH keys
  • SSH host: With some tweaking it might be possible to run the same container on cloud services. Then I’d need a way to get my SSH public keys there and start an SSH server

If you have ideas on how to best implement the above, please get in touch.

MicroFlo and IoT: measuring air quality

Permalink - Posted on 2018-02-26 00:00

Fine particulate matter is a serious issue in many cities around the world. In Europe, it is estimated to cause 400,000 premature deaths per year. The European Union has published standards on the matter, and warned several countries that haven’t been able to reach the safe limits.

Germany saw the highest number of deaths attributable to all air pollution sources, at 80,767. It was followed by the United Kingdom (64,351) and France (63,798). These are also the most populated countries in Europe. (source: DW)

The associated health issues don’t come cheap: 20 billion euros per year on health costs alone.

“To reduce this figure we need member states to comply with the emissions limits which they have agreed to,” Schinas said. “If this is not the case the Commission as guardian of the (founding EU) treaty will have to take appropriate action,” he added. (source: phys.org)

One part of solving this issue is better data. Government-run measurement stations are quite sparse, and — in some countries — their published results can be unreliable. To solve this, Open Knowledge Foundation Germany started the luftdaten.info project to crowdsource air pollution data around the world.

Last Saturday we hosted a luftdaten.info workshop at c-base, and used the opportunity to build and deploy some particulate matter sensors. While luftdaten.info has a great build guide and we used their parts list, we decided to go with a custom firmware built with MicroFlo and integrated with the existing IoT network at c-base.

Building an air quality sensor

MicroFlo on ESP8266

MicroFlo is a flow-based programming runtime targeting microcontrollers. Just like NoFlo graphs run inside a browser or Node.js, the MicroFlo graphs run on an Arduino or other compatible device. The result of a MicroFlo build is a firmware that can be flashed on a microcontroller, and which can be live-programmed using tools like Flowhub.

ESP8266 is an Arduino-compatible microcontroller with an integrated WiFi chip. This means any sensors or actuators on the device can easily connect to other systems, like we already do with lots of different sensors at c-base.

ESP8266 sensor in preparation

MicroFlo recently added a feature where WiFi-enabled MicroFlo devices can automatically connect to an MQTT message queue and expose their in/outports as queues there. This makes MicroFlo on an ESP8266 a fully-qualified MsgFlo participant.

Building the firmware

We wanted to build a firmware that would periodically read both the DHT22 temperature and humidity sensor, and the SDS011 fine particulate sensor, even out the readings with a running median, and then send the values out at a specified interval. MicroFlo’s core library already provided most of the building blocks, but we had to write custom components for dealing with the sensor hardware.

Thankfully Arduino libraries existed for both sensors, so this was just a matter of wrapping them in the MicroFlo component interface.

After the components were done, we could build the firmware as a Flowhub graph:

MicroFlo luftdaten graph

To verify the build we enabled Travis CI, where we build the firmware against both the MicroFlo Arduino and Linux targets. The Arduino one is there to verify that the build works with all the required libraries, and the Linux build we can use for test automation with fbp-spec.

To flash the actual devices you need the Arduino IDE and Node.js. Then use MicroFlo to generate the .ino file, and flash that to the device with the IDE. WiFi and MQTT settings can be tweaked in the secrets.h and config.h files.

Sensor deployment

The recommended weatherproofing solution for these sensors is quite straightforward: place the hardware in a piece of drainage pipe with the ends turned downwards.

Since we had two sensors, we decided to install one in the patio, and the other in the c-base main hall:

Particulate matter sensor in c-base main hall

Working with the sensor data

Once the sensor devices had been flashed, they became available in our MsgFlo setup and could be connected with other systems:

Particulate matter sensor in c-base main hall

In our case, we wanted to do two things with the data:

  • Visualize it in our Open MCT dashboard
  • Forward it to the luftdaten.info database

The first one was just a matter of adding a couple of configuration lines to our OpenMCT server. For the latter, I built a simple Python component.

Our sensors have been tracking the air quality for a couple of days now. The public data can be seen in the madavi service:

Readings from the c-base outdoor sensor

We’ve submitted our sensor for inclusion in the luftdaten.info database, and hopefully soon there will be another covered area in the Berlin air quality map:

luftdaten.info Berlin map

If you’d like to build your own air quality sensor, the instructions on luftdaten.info are pretty comprehensive. Get the parts from your local electronics store or AliExpress, connect them together, flash the firmware, and be part of the public effort to track and improve air quality!

Our MicroFlo firmware is a great alternative if you want to do further analysis of the data yourself, or simply want to get the data on MQTT.

asComponent: turn any JavaScript function into a NoFlo component

Permalink - Posted on 2018-02-23 00:00

Version 1.1 of NoFlo shipped this week with a new convenient way to write components. With the noflo.asComponent helper you can turn any JavaScript function into a well-behaved NoFlo component with minimal boilerplate.

Usage of noflo.asComponent is quite simple:

const noflo = require('noflo');
exports.getComponent = () => noflo.asComponent(Math.random);

In this case we have a function that doesn’t take arguments. We detect this, and produce a component with a single “bang” port for invoking the function:

Math.random as component

You can also amend the component with helpful information like a textual description and an icon:

const noflo = require('noflo');
exports.getComponent = () => noflo.asComponent(Math.random, {
  description: 'Generate a random number',
  icon: 'random',
});

Math.random with custom icon

Multiple inputs

The example above was with a function that does not take any arguments. With functions that accept arguments, each of them becomes an input port.

const noflo = require('noflo');

function findItemsWithId(items, id) {
  return items.filter((item) => item.id === id);
}

exports.getComponent = () => noflo.asComponent(findItemsWithId);

asComponent and multiple inports

The function will be called when both input ports have a packet available.

Output handling

The asComponent helper handles three types of functions:

  • Regular synchronous functions: return value gets sent to out. Thrown errors get sent to error
  • Functions returning a Promise: resolved promises get sent to out, rejected promises to error
  • Functions taking a Node.js style asynchronous callback: err argument to callback gets sent to error, result gets sent to out

With this, it is quite easy to write wrappers for asynchronous operations. For example, to call an external REST API with the Fetch API:

const noflo = require('noflo');

function getFlowhubStats() {
  return fetch('https://api.flowhub.io/stats')
    .then((result) => result.json());
}

exports.getComponent = () => noflo.asComponent(getFlowhubStats);

Now that you have this component, it is quick to build a graph utilizing it (open in Flowhub):

Example graph with asynchronous asComponent

Here we get the BODY element of the browser runtime. When that has been loaded, we trigger the fetch component above. If the request succeeds, we process it through a string template to write a quick report to the page. If it fails, we grab the error message and write that.

Making the components discoverable

The default location for a NoFlo component is components/ComponentName.js inside your project folder. Add your new components to this folder, and NoFlo will be able to run them.

If you’re using Flowhub, you can also write the components in the integrated code editor, and they will be sent to the runtime.

We’ve already updated the hosted NoFlo browser runtime to 1.1, so you can get started with this new component API right away.

Advanced components

In many ways, asComponent is the inverse of the asCallback embedding feature we introduced a year ago: asComponent turns a regular JavaScript function into a NoFlo component; asCallback turns a NoFlo component (or graph) into a regular JavaScript function.

If you need to work with more complex firing patterns, like combining streams or having control ports, you can of course still write regular Process API components.

The regular component API is quite a bit more verbose, but at the same time gives you full access to NoFlo APIs for dealing with manually controlled preconditions, state management, and creating generators.
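
For comparison, here is a minimal Process API component that uppercases a string. It is only a sketch to show the shape of the API, not one of the components mentioned above:

const noflo = require('noflo');

exports.getComponent = () => {
  const c = new noflo.Component({
    description: 'Uppercase a string',
    inPorts: {
      in: { datatype: 'string' },
    },
    outPorts: {
      out: { datatype: 'string' },
    },
  });
  c.process((input, output) => {
    // Precondition: wait until the input port has data
    if (!input.hasData('in')) return;
    const data = input.getData('in');
    output.sendDone({ out: data.toUpperCase() });
  });
  return c;
};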

However, thinking about the hundreds of NoFlo components out there, most of them could be written much more simply with asComponent. This will hopefully make the process of developing NoFlo programs a lot more straightforward.

Read more NoFlo component documentation and asComponent API docs.

Publish your data on the BIG IoT marketplace

Permalink - Posted on 2018-02-12 00:00

When building IoT systems, it is often useful to have access to data from the outside world to amend the information your sensors give you. For example, indoor temperature and energy usage measurements will be a lot more useful if there is information on the outside weather to correlate with.

Thanks to the open data movement, there are many data sets available. However, many of these are hard to discover or available in obscure formats.

The BIG IoT marketplace

BIG IoT is an EU-funded research project to make datasets easier to share and discover between organizations. With it there is a common semantic standard for how datasets are served, and a centralized marketplace for discovering and subscribing to data offerings.

  • For data providers this means they can focus on providing correct information, and let the marketplace handle API tokens, discoverability, and — for commercial datasets — billing
  • For data consumers there is a single place and a single API to access multiple datasets. No need to handle different Terms of Usage or different API conventions

As an example, if you’re building a car navigation application, you can use BIG IoT to get access to multiple providers of routing services, traffic delay information, or parking spots. If a dataset comes online in a new city, it’ll automatically work with your application. No need for contract negotiations, just a query to find matching providers on-demand.

Flowhub and BIG IoT

Last summer Flowhub was one of the companies accepted into the first BIG IoT open call. Through it, we received some funding to make it possible to publish data from Flowhub and NoFlo on the marketplace. In this video I’m talking about the project:

In the project we built three things:

Creating a data provider

While it is easy enough to use the BIG IoT Java library to publish datasets, the Flowhub integration we built makes it even easier. You need your data source available on a message queue, a web API, or maybe a timeseries database. Then you need NoFlo and the flowhub-bigiot-bridge library.

The basic building block is the Provider component. This creates a Node.js application server to serve your datasets, and registers them to the BIG IoT marketplace.

NoFlo BIG IoT Provider

What you need to do is to describe your data offering. For this, you can use the CreateOffering component. You can use IIPs to categorize the data, and then a set of CreateDatatype components to describe the input and output structure your offering uses.

NoFlo BIG IoT Offering config

Finally, the request and response ports of the Provider need to be hooked to your data source. The request outport will send packets with whatever input data your subscribers provided, and you need to send the resulting output data to the response port.

Request-response loop with BIG IoT Provider

For real-world deployment, the Flowhub BIG IoT bridge repository also includes examples on how to test your offerings, and how to build and deploy them with Docker.

Here’s how a full setup with two different parking datasets looks:

NoFlo BIG IoT parking provider

If you’re participating in the Bosch Connected World hackathon in Berlin next week, we’ll be there with the BIG IoT team to help projects to utilize the BIG IoT datasets.

This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 688038.

My blog, the 2017 edition

Permalink - Posted on 2017-12-14 00:00

I guess every five years is a good cadence for blog redesigns. This year’s edition started as a rewrite of the technical implementation, but I ended up also updating the visuals. Here I’ll go through the design goals, and how I met them.

More robust and secure delivery

This year the web has been strongly turning towards encryption. While my site doesn’t contain any interactive elements, using HTTPS still makes it harder for malicious parties to track and modify the contents people read.

For the past five years, my blog has been hosted on GitHub Pages. While otherwise that has been a pretty robust solution, they sadly don’t support SSL for custom domains. A common workaround would be to utilize Cloudflare as a HTTPS proxy, but that only works if you let them manage your domain. Since bergie.iki.fi is a subdomain, that was off the cards.

Instead, what I did was turn towards Amazon Web Services. I used Amazon Certificate Manager with my iki subdomain to get an SSL certificate, and utilized Travis CI to build the Jekyll site and upload to S3.

From there the site updates are served using Amazon CloudFront CDN, routed using Route53.

With this, I only need to push new changes to this site’s GitHub repository, and robots will take care of the rest, from producing the HTML pages to distributing them via a global content delivery network.

And, I get the friendly green lock icon.

SSL certificate for bergie.iki.fi

Easier image rescaling

I moved the site from Midgard CMS to the Jekyll static site generator in 2012. At that point, images were stored in the same GitHub repository alongside the textual contents.

However, the sheer volume of pictures accumulated on this site over the years made the repository quite unwieldy, so I moved them to Amazon S3 a couple of years ago.

This made working with different sizes of images a bit more cumbersome, as I’d have to produce the different variants locally and upload them separately.

Now, with the new redesign I built an Amazon Lambda function to resize images on-demand. My solution is implemented in NoFlo, roughly following the ideas from this tutorial but utilizing the excellent noflo-sharp library.

This is a topic I should write about in more detail, but it turns out NoFlo works really well with Amazon Lambda. You can use any Node.js NoFlo graph there by simply wrapping it using the asCallback embedding API.
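
As a sketch of that approach, a Lambda handler wrapping a NoFlo graph with asCallback could look roughly like this. The graph name is hypothetical, not my actual resizer graph:

// Sketch: expose a NoFlo graph as an AWS Lambda handler via asCallback.
const noflo = require('noflo');

// 'my-project/ResizeImage' is a hypothetical graph name used for illustration
const resize = noflo.asCallback('my-project/ResizeImage', {
  baseDir: __dirname,
});

exports.handler = (event, context, callback) => {
  resize(event, (err, result) => {
    if (err) {
      callback(err);
      return;
    }
    callback(null, result);
  });
};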

The end result is that I only need to upload original size images to S3 using some tool (NoFlo, s3cmd, AWS console, or the nice DropShare app), and I can get different sizes by tweaking the URL.

I could have gone with ImgFlo, but right now I need only rescaling, and running the whole GIMP engine felt like an overkill.

New visuals

After the technical side of the blog revamp was done, I turned towards the design aspects. I wanted more color, and also to benefit from the features of the modern web. This meant that performance-hindering things like Bootstrap, jQuery, and Google Fonts were out, since nowadays you can do pretty nice sites with pure CSS alone.

In addition to the better CDN setup, the redesign improved the site’s PageSpeed score. And I think it looks pretty good.

Here’s the front page:

2017 edition of bergie.iki.fi

For reference, here is how the 2012 edition looked:

2012 edition of bergie.iki.fi

I also spent a bit of time to make sure the site looks nice on both smartphones and tablets, since those are the devices most people use to browse the web these days.

Here is how the site looks on different devices, courtesy of Am I Responsive:

2017 front page

2017 article page

Better content discoverability

This site has over 1000 articles, and it is easy to get lost in those volumes. To make it easier to discover content, I implemented a related posts feature.

I originally wanted to use Jekyll’s Latent Semantic Indexing feature, but with this amount of content that simply blows up.

Instead, I ended up building my own hacky implementation based on categorization and similar keywords in posts using Liquid templates. This makes full site builds a bit slow, but the results seem quite good:

Related posts to the NoFlo 1.0 announcement

Staying up to date

While most people probably discover content now via Twitter or Facebook (both of which I occasionally share my things in, in addition to places like Reddit or Hacker News as needed), RSS is still the underpinning of receiving blog updates.

For this, the site is available both as a traditional RSS/Atom feed and as a JSON Feed.

Feel free to add one of them to the news aggregator of your choice!

I also supply /now page for current activities, inspired by the NowNowNow movement. Here is how Derek Sivers described the idea:

People often ask me what I’m doing now.

Each time I would type out a reply, describing where I’m at, what I’m focused on, and what I’m not.

So earlier this year I added a /now page to my site: https://sivers.org/now

A simple link. Easy to remember. Easy to type.

It’s a nice reminder for myself, when I’m feeling unfocused. A public declaration of priorities.

Previous redesigns

I’ve been running this site since 1997. Here is what I’ve written about some of the previous redesigns:

I hope you enjoy the new design! Let me know what you think.

Get ready for NoFlo 1.0

Permalink - Posted on 2017-11-02 00:00

After six years of work, and a bunch of different projects done with NoFlo, we’re finally ready for the big 1.0. The two primary pull requests for the 1.0.0 cycle landed today, so it is time to talk about how to prepare for it.

tl;dr If your project runs with NoFlo 0.8 without deprecation warnings, you should be ready for NoFlo 1.0

ES6 first

The primary difference between NoFlo 0.8 and 1.0 is that now we’re shipping it as ES6 code utilizing features like classes and arrow functions.

Now that all modern browsers support ES6 out of the box, and Node.js 8 is the long-term supported release, it should be generally safe to use ES6 as-is.

If you need to support older browsers, Node.js versions, or maybe PhantomJS, it is of course possible to compile the NoFlo codebase into ES5 using Babel.

We recommend new components to be written in ES6 instead of CoffeeScript.

Easier webpack builds

It has been possible to build NoFlo projects for browsers since 2013. Last year we switched to webpack as the module bundler.

However, at that stage there was still quite a lot of configuration magic happening inside grunt-noflo-browser. This turned out to be sub-optimal since it made integrating NoFlo into existing project build setups difficult.

Last week we extracted the difficult parts out of the Grunt plugin, and released the noflo-component-loader webpack loader. With this, you can generate a configured NoFlo component loader in any webpack build. See this example.

In addition to generating the component loader, your NoFlo browser project may also need two other loaders, depending how your NoFlo graphs are built: json-loader for JSON graphs, and fbp-loader for graphs defined in the .fbp DSL.

Removed APIs

There were several old NoFlo APIs that we marked as deprecated in NoFlo 0.8. In that series, usage of those APIs logged warnings. Now in 1.0 the deprecated APIs are completely removed, giving us a lighter, smaller codebase to maintain.

Here is a list of the primary API removals and the suggested migration strategy:

  • noflo.AsyncComponent class: use WirePattern or Process API instead
  • noflo.ArrayPort class: use InPort/OutPort with addressable: true instead
  • noflo.Port class: use InPort/OutPort instead
  • noflo.helpers.MapComponent function: use WirePattern or Process API instead
  • noflo.helpers.WirePattern legacy mode: now WirePattern always uses Process API internally
  • noflo.helpers.WirePattern synchronous mode: use async: true and callback
  • noflo.helpers.MultiError function: send errors via callback or error port
  • noflo.InPort process callback: use Process API
  • noflo.InPort handle callback: use Process API
  • noflo.InPort receive method: use Process API getX methods
  • noflo.InPort contains method: use Process API hasX methods
  • Subgraph EXPORTS mechanism: disambiguate with INPORT/OUTPORT

The easiest way to verify whether your project is compatible is to run it with NoFlo 0.8.

You can also make usage of deprecated APIs throw errors instead of just logging them by setting the NOFLO_FATAL_DEPRECATED environment variable. In browser applications you can set the same flag on window.


Scopes

Scopes are a flow isolation mechanism that was introduced in NoFlo 0.8. With scopes, you can run multiple simultaneous flows through a NoFlo network without a risk of data leaking from one scope to another.

The primary use case for scope isolation is building things like web API servers, where you want to isolate the processing of each HTTP request from each other safely, while reusing a single NoFlo graph.

Scope isolation is handled automatically for you when using Process API or WirePattern. If you want to manipulate scopes, the noflo-packets library provides components for this.

NoFlo in/outports can also be set as scoped: false to support getting out of scopes.
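
A port opting out of scoping can be declared in the port options. Here is a minimal sketch, with the port names and payloads invented for illustration:

const noflo = require('noflo');

exports.getComponent = () => {
  const c = new noflo.Component({
    inPorts: {
      in: { datatype: 'all' },
      config: {
        datatype: 'object',
        scoped: false, // visible to every scope flowing through the network
      },
    },
    outPorts: {
      out: { datatype: 'all' },
    },
  });
  c.process((input, output) => {
    if (!input.hasData('in') || !input.hasData('config')) return;
    const data = input.getData('in');
    const config = input.getData('config');
    output.sendDone({ out: { data, config } });
  });
  return c;
};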

asCallback and async/await

noflo.asCallback provides an easy way to expose NoFlo graphs to normal JavaScript consumers. The produced function uses the standard Node.js callback mechanism, meaning that you can easily make it return promises with Node.js util.promisify or Bluebird. After this your NoFlo graph can be run via normal async/await.
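
A minimal sketch of that pattern, with a hypothetical graph name standing in for a real one:

const noflo = require('noflo');
const { promisify } = require('util');

// Wrap a NoFlo graph as a promise-returning function
const enrichUser = promisify(noflo.asCallback('my-project/EnrichUser', {
  baseDir: __dirname,
}));

async function main() {
  const result = await enrichUser({ id: 42 });
  console.log(result);
}

main().catch((err) => console.error(err));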

Component libraries

There are hundreds of ready-made NoFlo components available on NPM. By now, most of these have been adapted to work with NoFlo 0.8.

Once 1.0 ships, we’ll try to be as quick as possible to update all of them to run with it. In the meanwhile, it is possible to use npm shrinkwrap to force them to depend on NoFlo 1.0.

If you’re relying on a library that uses deprecated APIs, or hasn’t otherwise been updated yet, please file an issue in the GitHub repo of that library.

This pull request for noflo-gravatar is a great example of how to implement all the modernization recommendations below in an existing component library.

Recommendations for new projects

This post has mostly covered how to adapt existing NoFlo projects for 1.0. How about new projects? Here are some recommendations:

  • While NoFlo projects have traditionally been written in CoffeeScript, for new projects we recommend using ES6. In particular, follow the AirBnB ES6 guidelines
  • Use fbp-spec for test automation
  • Use NPM scripts instead of Grunt for building and testing
  • Make browser builds with webpack utilizing noflo-component-loader
  • Use Process API when writing components
  • If you expose any library functionality, provide an index file using noflo.asCallback for non-NoFlo consumers

The BIG IoT Node.js bridge is a recent project that follows these guidelines if you want to see an example in action.

There is also a project tutorial available on the NoFlo website.

Building an IoT dashboard with NASA Open MCT

Permalink - Posted on 2017-10-05 00:00

One important aspect of any Internet of Things setup is being able to collect and visualize data for analysis. Seeing trends in sensor readings over time can be useful for identifying problems, and for coming up with new ways to use the data.

We wanted an easy solution for this for the c-base IoT setup. Since the c-base backstory is that of a crashed space station, using space technology for this made sense.

OpenMCT view on c-base

NASA Open MCT is a framework for building web-based mission control tools and dashboards that they’ve released as open source. It is intended for bringing together tools and both historical and real-time data, as can be seen in their Mars Science Laboratory dashboard demo.

c-beam telemetry server

As a dashboard framework, Open MCT doesn’t really come with batteries included. You get a bunch of widgets and library functionality, but out of the box there is no integration with data sources.

However, they do provide a tutorial project for integrating data sources. We started with that, and built the cbeam-telemetry-server project which gives a very easy way to integrate Open MCT with an existing IoT setup.

With the c-beam telemetry server we combine Open MCT with the InfluxDB timeseries database and the MQTT messaging bus. This gives a “turnkey” setup for persisting and visualizing IoT information.

Getting started

The first step is to install the c-beam telemetry server. If you want to do a manual setup, first install a MQTT broker, InfluxDB and Node.js. Optionally you can also install CouchDB for sharing custom dashboard layouts between users.

Then just clone the c-beam telemetry server repo:

$ git clone https://github.com/c-base/cbeam-telemetry-server.git

Install the dependencies and build Open MCT with:

$ npm install

Now you should be able to start the service with:

$ npm start

Running with Docker

There is also an easier way to get going: we provide pre-built Docker images of the c-beam telemetry server for both x86 and ARM.

There are also docker-compose configuration files for both environments. To install and start the whole service with all its dependencies, grab the docker-compose.yml file (or the Raspberry Pi 3 version) and start with:

$ docker-compose up -d

We’re building these images as part of our continuous integration pipeline (ARM build with this recipe), so they should always be reasonably up-to-date.

Configuring your data

The next step is to create a JavaScript configuration file for your Open MCT. This is where you need to provide a “dictionary” listing all data you want your dashboard to track.

Data sets are configured like the following (configuring a temperature reading tracked for the 2nd floor):

var floor2 = new app.Dictionary('2nd floor', 'floor2');
floor2.addMeasurement('temperature', 'floor2_temperature', [
  {
    units: 'degrees',
    format: 'float'
  }
], {
  topic: 'bitraf/temperature/1'
});

You can have multiple dictionaries in the same Open MCT installation, allowing you to group related data sets. Each measurement needs to have a name and a unit.

Getting data in

In the example above we also supply an MQTT topic to read the measurement from. Now sending data to the dashboard is as easy as writing numbers to that MQTT topic. On the command line that would be done with:

$ mosquitto_pub -t bitraf/temperature/1 -m 27.3

If you were running the telemetry server when you sent that message, you should’ve seen it appear in the appropriate dashboard.

Bitraf temperature graph with Open MCT

There are MQTT libraries available for most programming languages, making it easy to connect existing systems with this dashboard.
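
For example, the mosquitto_pub command above could be replaced with a few lines of Node.js using the mqtt package; the broker address here is assumed to be the local machine:

// Publish a temperature reading to the dashboard's MQTT topic.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://localhost');
client.on('connect', () => {
  client.publish('bitraf/temperature/1', '27.3', () => client.end());
});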

The telemetry server is also compatible with our MsgFlo framework, meaning that you can also configure the connections between your data sources and Open MCT visually in Flowhub.

This makes it possible to utilize the existing MsgFlo libraries for implementing data sources. For example, with msgflo-arduino you can transmit sensor data from Tiva-C or NodeMcu microcontrollers to the dashboard.

Status and how you can help

The c-beam telemetry server is currently in production use in a couple of hackerspaces, and seems to run quite happily.

We’d love to get feedback from other deployments!

If you’d like to help with the project, here are couple of areas that would be great:

  • Adding tests to the project
  • Implementing downsampling of historical data
  • Figuring out ways to control IoT devices via the dashboard (so, to write to MQTT instead of just reading)

Please file issues or make pull requests to the repository.

Flowhub IoT hack weekend at c-base: buttons, sensors, the Big Switch

Permalink - Posted on 2017-07-11 00:00

Last weekend we held the c-base IoT hack weekend, focused on the Flowhub IoT platform. This was a continuation of the workshop we organized at the Bitraf makerspace a week earlier: same tools and technologies, but slightly different focus areas.

c-base is one of the world’s oldest hackerspaces and a crashed space station under Berlin. It is also one of the earliest users of MsgFlo with quite a lot of devices connected via MQTT.

Hack weekend debriefing

Hack weekend

Just like at Bitraf, the workshop aimed to add new IoT capabilities to c-base, as well as to increase the number of members who know how to make the station’s setup do new things. For this, we used three primary tools:

Internet of Things

The workshop started on Friday evening, after a lecture on nuclear pulse propulsion ended in the main hall. We continued all the way to late Sunday evening, with some sleep breaks in between. There is something about c-base that makes you want to work there at night.

Testing a humidity sensor

By Sunday evening, we had built and deployed 15 connected IoT devices, with five additional ones pretty far in development. You can find the source code in the c-flo repository.

Idea wall

Sensor boxes

Quite a lot of c-base was already instrumented when we started the workshop. We had details on electricity consumption, internet traffic, and more. But one thing we didn’t have was information on the physical environment at the station. To solve this, we decided to build a set of sensor boxes that we could deploy in different areas of the hackerspace.

Building sensors

The capabilities shared by all the sensor boxes we deployed were:

  • Temperature
  • Humidity
  • Motion (via passive infrared)

For some areas of interest we provided some additional sensors:

  • Sound level (for the workshop)
  • Light level (for c-lab)
  • Carbon dioxide
  • Door open/closed
  • Gravity

Workshop sensor on a breadboard

We found a set of nice little electrical boxes that provided a convenient housing for these sensor boxes. This way we were able to mount them in proper places quickly. This should also protect them from dust and other elements to some degree.

Installed weltenbaulab sensor

The Big Switch

The lights of the c-base main hall are controllable via MsgFlo, and we have a system called farbgeber to produce pleasing color schemes for any given time.

However, when there are events we need to enable manual control of all lights and sound. To make this “MsgFlo vs. IP lounge” control question clearer, we built a Big Switch to decide which controls the lights:

Big Switch in action

The switch is an old electric mains switch from an office building. It makes a satisfying sound when you turn it, and is big enough that you can see which way it is set from across the room.

To complement the Big Switch we also added a “c-boom” button to trigger the disco mode in the main hall:

c-boom button

Info screens

One part of the IoT setup was to make statistics and announcements about c-base visible in different areas of the station. We did this by rolling out a set of displays with Raspberry Pi 3s connected to the MsgFlo MQTT environment.

Info screens ready for installing

The announcements shown on the screens range from mission critical information like station power consumption or whether the bar is open, to more fictional ones like the NoFlo-powered space station announcements.

Air lock

We also built an Android version of the info display software, which enabled deploying screens using some old donated tablets.

Info screen tablet


This was another successful workshop. Participants got to do new things, and we got lots of new IoT infrastructure installed around c-base. The Flowhub graph is definitely starting to look populated:

c-base is a graph

We also deployed NASA OpenMCT so that we get a nice overview of the station status. Our telemetry server provides MsgFlo participants that receive data via MQTT, store it in InfluxDB, and then visualize it on the dashboard:

OpenMCT view on c-base

All the c-base IoT software is available on GitHub:

If you’d like to have a similar IoT workshop at your company, we’re happy to organize one. Get in touch!