Henri Bergius

Hacker and an occasional adventurer. Author of Create.js and NoFlo, founder of Flowhub UG. Decoupling software, one piece at a time. This blog tells the story of that.



Managing a developer shell with Docker

Permalink - Posted on 2018-04-19 00:00

When I’m not in Flowhub-land, I’m used to developing software in a heavily customized command-line development environment. Like for many others, the cornerstones of this for me are vim and tmux.

As customization increases, it becomes important to have a way to manage it and distribute it across different computers. For years, I’ve used a dotfiles repository on GitHub together with GNU Stow for this.

However, this still means I have to install all the software and tools before I can have my environment up and running.

Using Docker

Docker is a tool for building and running software in a containerized fashion. Recently Tiago gave me the inspiration to use Docker not only for distributing production software, but also for actually running my development environment.

Taking ideas from his setup, I built upon my existing dotfiles to create a reusable developer shell container.

With this, I only need Docker installed on a machine, and then I’m two commands away from having my normal development environment:

$ docker volume create workstation
$ docker run -v ~/Projects:/projects -v workstation:/root -v ~/.ssh:/keys --name workstation --rm -it bergie/shell

Here’s how it looks in action:

Working on NoFlo inside Docker shell

Once I update my Docker setup (for example to install or upgrade some tool), I can get the latest version on a machine with:

$ docker pull bergie/shell

At least in theory this should give me a fully identical working environment regardless of the host machine. A Linux VPS, a MacBook, or a Windows machine should all be able to run this. And soon, this should also work out of the box on Chromebooks.

Setting this up

The basics are pretty simple. I already had a repository for my dotfiles, so I only needed to write a Dockerfile to install and set up all my software.

To make things even easier, I configured Travis so that every time I push a change to the dotfiles repository, it will build and publish a new container image.

Further development ideas

So far this setup seems to work pretty well. However, here are some ideas for further improvements:

  • ARM build: Sometimes I need to work on Raspberry Pis. It might be nice to cross-compile an ARM version of the same setup
  • Key management: Currently I create new SSH keys for each host machine, and then upload them to the relevant places. With this setup I could use a USB stick, or maybe even a Yubikey to manage them
  • Application authentication: Since the Docker image is public, it doesn’t come with any secrets built in. This means I still need to authenticate with tools like NPM and Travis. It might be interesting to manage these together with my SSH keys
  • SSH host: With some tweaking it might be possible to run the same container on cloud services. Then I’d need a way to get my SSH public keys there and start an SSH server

If you have ideas on how to best implement the above, please get in touch.


MicroFlo and IoT: measuring air quality

Permalink - Posted on 2018-02-26 00:00

Fine particulate matter is a serious issue in many cities around the world. In Europe, it is estimated to cause 400,000 premature deaths per year. The European Union has published standards on the matter, and has warned several countries that haven’t been able to stay within the safe limits.

Germany saw the highest number of deaths attributable to all air pollution sources, at 80,767. It was followed by the United Kingdom (64,351) and France (63,798). These are also the most populated countries in Europe. (source: DW)

The associated health issues don’t come cheap: 20 billion euros per year on health costs alone.

“To reduce this figure we need member states to comply with the emissions limits which they have agreed to,” Schinas said. “If this is not the case the Commission as guardian of the (founding EU) treaty will have to take appropriate action,” he added. (source: phys.org)

One part of solving this issue is better data. Government-run measurement stations are quite sparse, and — in some countries — their published results can be unreliable. To solve this, Open Knowledge Foundation Germany started the luftdaten.info project to crowdsource air pollution data around the world.

Last Saturday we hosted a luftdaten.info workshop at c-base, and used the opportunity to build and deploy some particulate matter sensors. While luftdaten.info has a great build guide and we used their parts list, we decided to go with a custom firmware built with MicroFlo and integrated with the existing IoT network at c-base.

Building an air quality sensor

MicroFlo on ESP8266

MicroFlo is a flow-based programming runtime targeting microcontrollers. Just like NoFlo graphs run inside a browser or Node.js, the MicroFlo graphs run on an Arduino or other compatible device. The result of a MicroFlo build is a firmware that can be flashed on a microcontroller, and which can be live-programmed using tools like Flowhub.

ESP8266 is an Arduino-compatible microcontroller with an integrated WiFi chip. This means any sensors or actuators on the device can easily connect to other systems, like we already do with lots of different sensors at c-base.

ESP8266 sensor in preparation

MicroFlo recently added a feature where WiFi-enabled MicroFlo devices can automatically connect to an MQTT message queue and expose their in/outports as queues there. This makes MicroFlo on an ESP8266 a fully-qualified MsgFlo participant.

Building the firmware

We wanted to build a firmware that would periodically read both the DHT22 temperature and humidity sensor, and the SDS011 fine particulate sensor, even out the readings with a running median, and then send the values out at a specified interval. MicroFlo’s core library already provided most of the building blocks, but we had to write custom components for dealing with the sensor hardware.

Thankfully Arduino libraries existed for both sensors, so this was just a matter of wrapping them in the MicroFlo component interface.

After the components were done, we could build the firmware as a Flowhub graph:

MicroFlo luftdaten graph

To verify the build, we enabled Travis CI to build the firmware against both the MicroFlo Arduino and Linux targets. The Arduino build verifies that everything compiles with all the required libraries, and the Linux build we use for test automation with fbp-spec.

To flash the actual devices you need the Arduino IDE and Node.js. Then use MicroFlo to generate the .ino file, and flash that to the device with the IDE. WiFi and MQTT settings can be tweaked in the secrets.h and config.h files.

Sensor deployment

The recommended weatherproofing solution for these sensors is quite straightforward: place the hardware in a piece of drainage pipe with the ends turned downwards.

Since we had two sensors, we decided to install one on the patio, and the other in the c-base main hall:

Particulate matter sensor in c-base main hall

Working with the sensor data

Once the sensor devices had been flashed, they became available in our MsgFlo setup and could be connected with other systems:

Sensor devices connected in the MsgFlo setup

In our case, we wanted to do two things with the data: visualize it on our OpenMCT dashboard, and forward it to the luftdaten.info community database.

The first one was just a matter of adding a couple of configuration lines to our OpenMCT server. For the latter, I built a simple Python component.

Our sensors have been collecting data for a couple of days now. The public data can be seen in the madavi service:

Readings from the c-base outdoor sensor

We’ve submitted our sensor for inclusion in the luftdaten.info database, and hopefully soon there will be another covered area in the Berlin air quality map:

luftdaten.info Berlin map

If you’d like to build your own air quality sensor, the instructions on luftdaten.info are pretty comprehensive. Get the parts from your local electronics store or AliExpress, connect them together, flash the firmware, and be part of the public effort to track and improve air quality!

Our MicroFlo firmware is a great alternative if you want to do further analysis of the data yourself, or simply want to get the data on MQTT.


asComponent: turn any JavaScript function into a NoFlo component

Permalink - Posted on 2018-02-23 00:00

Version 1.1 of NoFlo shipped this week with a convenient new way to write components. With the noflo.asComponent helper you can turn any JavaScript function into a well-behaved NoFlo component with minimal boilerplate.

Usage of noflo.asComponent is quite simple:

const noflo = require('noflo');
exports.getComponent = () => noflo.asComponent(Math.random);

In this case we have a function that doesn’t take arguments. We detect this, and produce a component with a single “bang” port for invoking the function:

Math.random as component

You can also amend the component with helpful information like a textual description and an icon:

const noflo = require('noflo');
exports.getComponent = () => noflo.asComponent(Math.random, {
  description: 'Generate a random number',
  icon: 'random',
});

Math.random with custom icon

Multiple inputs

The example above was with a function that does not take any arguments. With functions that accept arguments, each of them becomes an input port.

const noflo = require('noflo');

function findItemsWithId(items, id) {
  return items.filter((item) => item.id === id);
}

exports.getComponent = () => noflo.asComponent(findItemsWithId);

asComponent and multiple inports

The function will be called when both input ports have a packet available.

Output handling

The asComponent helper handles three types of functions:

  • Regular synchronous functions: return value gets sent to out. Thrown errors get sent to error
  • Functions returning a Promise: resolved promises get sent to out, rejected promises to error
  • Functions taking a Node.js style asynchronous callback: the err argument to the callback gets sent to error, the result to out (see the sketch below)
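
For the callback case, here is a minimal sketch (the component is hypothetical; any function whose last argument is named callback works the same way):

const noflo = require('noflo');
const dns = require('dns');

// The trailing "callback" argument marks this as a Node.js style
// asynchronous function: err goes to the error port, the resolved
// address to the out port
function resolveHost(hostname, callback) {
  dns.lookup(hostname, (err, address) => callback(err, address));
}

exports.getComponent = () => noflo.asComponent(resolveHost);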

With this, it is quite easy to write wrappers for asynchronous operations. For example, to call an external REST API with the Fetch API:

const noflo = require('noflo');

function getFlowhubStats() {
  return fetch('https://api.flowhub.io/stats')
    .then((result) => result.json());
}

exports.getComponent = () => noflo.asComponent(getFlowhubStats);

Now that you have this component, it is quick to build a graph utilizing it (open in Flowhub):

Example graph with asynchronous asComponent

Here we get the BODY element of the browser runtime. When that has been loaded, we trigger the fetch component above. If the request succeeds, we process it through a string template to write a quick report to the page. If it fails, we grab the error message and write that.

Making the components discoverable

The default location for a NoFlo component is components/ComponentName.js inside your project folder. Add your new components to this folder, and NoFlo will be able to run them.

If you’re using Flowhub, you can also write the components in the integrated code editor, and they will be sent to the runtime.

We’ve already updated the hosted NoFlo browser runtime to 1.1, so you can get started with this new component API right away.

Advanced components

In many ways, asComponent is the inverse of the asCallback embedding feature we introduced a year ago: asComponent turns a regular JavaScript function into a NoFlo component; asCallback turns a NoFlo component (or graph) into a regular JavaScript function.

If you need to work with more complex firing patterns, like combining streams or having control ports, you can of course still write regular Process API components.

The regular component API is quite a bit more verbose, but at the same time gives you full access to NoFlo APIs for dealing with manually controlled preconditions, state management, and creating generators.

However, thinking about the hundreds of NoFlo components out there, most of them could be written much more simply with asComponent. This will hopefully make the process of developing NoFlo programs a lot more straightforward.

Read more in the NoFlo component documentation and the asComponent API docs.


Publish your data on the BIG IoT marketplace

Permalink - Posted on 2018-02-12 00:00

When building IoT systems, it is often useful to have access to data from the outside world to augment the information your sensors give you. For example, indoor temperature and energy usage measurements will be a lot more useful if there is information on the outside weather to correlate with.

Thanks to the open data movement, there are many data sets available. However, many of these are hard to discover or available in obscure formats.

The BIG IoT marketplace

BIG IoT is an EU-funded research project to make datasets easier to share and discover between organizations. It provides a common semantic standard for how datasets are served, and a centralized marketplace for discovering and subscribing to data offerings.

  • For data providers this means they can focus on providing correct information, and let the marketplace handle API tokens, discoverability, and — for commercial datasets — billing
  • For data consumers there is a single place and a single API to access multiple datasets. No need to handle different Terms of Usage or different API conventions

As an example, if you’re building a car navigation application, you can use BIG IoT to get access to multiple providers of routing services, traffic delay information, or parking spots. If a dataset comes online in a new city, it’ll automatically work with your application. No need for contract negotiations, just a query to find matching providers on-demand.

Flowhub and BIG IoT

Last summer Flowhub was one of the companies accepted into the first BIG IoT open call. In it, we received some funding to make it possible to publish data from Flowhub and NoFlo on the marketplace. In this video I’m talking about the project:

In the project we built three things:

Creating a data provider

While it is easy enough to use the BIG IoT Java library to publish datasets, the Flowhub integration we built makes it even easier. You need your data source available on a message queue, a web API, or maybe a timeseries database. And then you need NoFlo and the flowhub-bigiot-bridge library.

The basic building block is the Provider component. This creates a Node.js application server to serve your datasets, and registers them to the BIG IoT marketplace.

NoFlo BIG IoT Provider

What you need to do is describe your data offering. For this, you can use the CreateOffering component. You can use IIPs to categorize the data, and then a set of CreateDatatype components to describe the input and output structure your offering uses.

NoFlo BIG IoT Offering config

Finally, the request and response ports of the Provider need to be hooked to your data source. The request outport will send packets with whatever input data your subscribers provided, and you need to send the resulting output data to the response port.

Request-response loop with BIG IoT Provider

For real-world deployment, the Flowhub BIG IoT bridge repository also includes examples on how to test your offerings, and how to build and deploy them with Docker.

Here’s how a full setup with two different parking datasets looks:

NoFlo BIG IoT parking provider

If you’re participating in the Bosch Connected World hackathon in Berlin next week, we’ll be there with the BIG IoT team to help projects to utilize the BIG IoT datasets.

This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 688038.


My blog, the 2017 edition

Permalink - Posted on 2017-12-14 00:00

I guess every five years is a good cadence for blog redesigns. This year’s edition started as a rewrite of the technical implementation, but I ended up also updating the visuals. Here I’ll go through the design goals, and how I met them.

More robust and secure delivery

This year the web has been strongly turning towards encryption. While my site doesn’t contain any interactive elements, using HTTPS still makes it harder for malicious parties to track and modify the contents people read.

For the past five years, my blog has been hosted on GitHub Pages. While that has otherwise been a pretty robust solution, they sadly don’t support SSL for custom domains. A common workaround would be to use Cloudflare as an HTTPS proxy, but that only works if you let them manage your domain. Since bergie.iki.fi is a subdomain, that was off the cards.

Instead, what I did was turn towards Amazon Web Services. I used Amazon Certificate Manager with my iki subdomain to get an SSL certificate, and utilized Travis CI to build the Jekyll site and upload it to S3.

From there, the site is served using the Amazon CloudFront CDN, with DNS routing handled by Route 53.

With this, I only need to push new changes to this site’s GitHub repository, and robots will take care of the rest, from producing the HTML pages to distributing them via a global content delivery network.

And, I get the friendly green lock icon.

SSL certificate for bergie.iki.fi

Easier image rescaling

I moved the site from Midgard CMS to the Jekyll static site generator in 2012. At that point, images were stored in the same GitHub repository alongside the textual contents.

However, the sheer volume of pictures accumulated on this site over the years made the repository quite unwieldy, and so I moved them to Amazon S3 a couple of years ago.

This made working with different sizes of images more cumbersome, as I’d have to produce the different variants locally and upload them separately.

Now, with the new redesign I built an Amazon Lambda function to resize images on-demand. My solution is implemented in NoFlo, roughly following the ideas from this tutorial but utilizing the excellent noflo-sharp library.

This is a topic I should write about in more detail, but it turns out NoFlo works really well with Amazon Lambda. You can use any Node.js NoFlo graph there by simply wrapping it using the asCallback embedding API.
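
As a rough sketch of the idea (the graph name and payload shape here are hypothetical), a Lambda handler wrapping a NoFlo graph can be as small as:

const noflo = require('noflo');

// Wrap the graph into a normal callback-style function once, outside
// the handler, so it gets reused across invocations
const resize = noflo.asCallback('myproject/ResizeImage', {
  baseDir: __dirname,
});

exports.handler = (event, context, callback) => {
  // The Lambda event becomes the input packet of the graph
  resize({ in: event }, callback);
};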

The end result is that I only need to upload original size images to S3 using some tool (NoFlo, s3cmd, AWS console, or the nice DropShare app), and I can get different sizes by tweaking the URL.

I could have gone with ImgFlo, but right now I only need rescaling, and running the whole GIMP engine felt like overkill.

New visuals

After the technical side of the blog revamp was done, I turned towards the design aspects. I wanted more color, and also to benefit from the features of the modern web. This meant that performance-hindering things like Bootstrap, jQuery, and Google Fonts were out, since nowadays you can do pretty nice sites with pure CSS alone.

In addition to the better CDN setup, the redesign improved the site’s PageSpeed score. And I think it looks pretty good.

Here’s the front page:

2017 edition of bergie.iki.fi

For reference, here is how the 2012 edition looked:

2012 edition of bergie.iki.fi

I also spent a bit of time making sure the site looks nice on both smartphones and tablets, since those are the devices most people use to browse the web these days.

Here is how the site looks on different devices, courtesy of Am I Responsive:

2017 front page

2017 article page

Better content discoverability

This site has over 1000 articles, and it is easy to get lost in that volume. To make it easier to discover content, I implemented a related posts feature.

I originally wanted to use Jekyll’s Latent Semantic Indexing feature, but with this amount of content that simply blows up.

Instead, I ended up building my own hacky implementation based on categorization and similar keywords in posts using Liquid templates. This makes full site builds a bit slow, but the results seem quite good:

Related posts to the NoFlo 1.0 announcement

Staying up to date

While most people probably discover content now via Twitter or Facebook (both of which I occasionally share my posts on, in addition to places like Reddit or Hacker News as needed), RSS is still the underpinning of receiving blog updates.

For this, the site is available as both an RSS feed and a JSON feed.

Feel free to add one of them to the news aggregator of your choice!

I also supply a /now page for current activities, inspired by the NowNowNow movement. Here is how Derek Sivers described the idea:

People often ask me what I’m doing now.

Each time I would type out a reply, describing where I’m at, what I’m focused on, and what I’m not.

So earlier this year I added a /now page to my site: https://sivers.org/now

A simple link. Easy to remember. Easy to type.

It’s a nice reminder for myself, when I’m feeling unfocused. A public declaration of priorities.

Previous redesigns

I’ve been running this site since 1997. Here is what I’ve written about some of the previous redesigns:

I hope you enjoy the new design! Let me know what you think.


Get ready for NoFlo 1.0

Permalink - Posted on 2017-11-02 00:00

After six years of work, and a bunch of different projects done with NoFlo, we’re finally ready for the big 1.0. The two primary pull requests for the 1.0.0 cycle landed today, so it is time to talk about how to prepare for it.

tl;dr If your project runs with NoFlo 0.8 without deprecation warnings, you should be ready for NoFlo 1.0

ES6 first

The primary difference between NoFlo 0.8 and 1.0 is that now we’re shipping it as ES6 code utilizing features like classes and arrow functions.

Now that all modern browsers support ES6 out of the box, and Node.js 8 is the long-term supported release, it should be generally safe to use ES6 as-is.

If you need to support older browsers, Node.js versions, or maybe PhantomJS, it is of course possible to compile the NoFlo codebase into ES5 using Babel.

We recommend writing new components in ES6 instead of CoffeeScript.

Easier webpack builds

It has been possible to build NoFlo projects for browsers since 2013. Last year we switched to webpack as the module bundler.

However, at that stage there was still quite a lot of configuration magic happening inside grunt-noflo-browser. This turned out to be sub-optimal since it made integrating NoFlo into existing project build setups difficult.

Last week we extracted the difficult parts out of the Grunt plugin, and released the noflo-component-loader webpack loader. With this, you can generate a configured NoFlo component loader in any webpack build. See this example.

In addition to generating the component loader, your NoFlo browser project may also need two other loaders, depending on how your NoFlo graphs are built: json-loader for JSON graphs, and fbp-loader for graphs defined in the .fbp DSL.
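
As a sketch of how that part of a webpack configuration might look (the exact rules will vary by project; check each loader’s README):

// webpack.config.js (partial sketch)
module.exports = {
  module: {
    rules: [
      // NoFlo graphs stored as JSON
      { test: /\.json$/, use: ['json-loader'] },
      // Graphs written in the .fbp DSL
      { test: /\.fbp$/, use: ['fbp-loader'] },
    ],
  },
};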

Removed APIs

There were several old NoFlo APIs that we marked as deprecated in NoFlo 0.8. In that series, usage of those APIs logged warnings. Now in 1.0 the deprecated APIs are completely removed, giving us a lighter, smaller codebase to maintain.

Here is a list of the primary API removals and the suggested migration strategy:

  • noflo.AsyncComponent class: use WirePattern or Process API instead
  • noflo.ArrayPort class: use InPort/OutPort with addressable: true instead
  • noflo.Port class: use InPort/OutPort instead
  • noflo.helpers.MapComponent function: use WirePattern or Process API instead
  • noflo.helpers.WirePattern legacy mode: now WirePattern always uses Process API internally
  • noflo.helpers.WirePattern synchronous mode: use async: true and callback
  • noflo.helpers.MultiError function: send errors via callback or error port
  • noflo.InPort process callback: use Process API
  • noflo.InPort handle callback: use Process API
  • noflo.InPort receive method: use Process API getX methods
  • noflo.InPort contains method: use Process API hasX methods
  • Subgraph EXPORTS mechanism: disambiguate with INPORT/OUTPORT

The easiest way to verify whether your project is compatible is to run it with NoFlo 0.8.

You can also make usage of deprecated APIs throw errors instead of just logging warnings by setting the NOFLO_FATAL_DEPRECATED environment variable. In browser applications you can set the same flag on window.
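
For example (a minimal sketch), in a Node.js test setup you could set the flag before NoFlo is loaded:

// Make deprecated API usage throw instead of logging a warning
process.env.NOFLO_FATAL_DEPRECATED = 'true';
const noflo = require('noflo');

// In a browser application, the equivalent would be:
// window.NOFLO_FATAL_DEPRECATED = true;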

Scopes

Scopes are a flow isolation mechanism that was introduced in NoFlo 0.8. With scopes, you can run multiple simultaneous flows through a NoFlo network without a risk of data leaking from one scope to another.

The primary use case for scope isolation is building things like web API servers, where you want to isolate the processing of each HTTP request from each other safely, while reusing a single NoFlo graph.

Scope isolation is handled automatically for you when using Process API or WirePattern. If you want to manipulate scopes, the noflo-packets library provides components for this.

NoFlo in/outports can also be set as scoped: false to let packets cross scope boundaries.
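
Here is a minimal sketch of a component declaring an unscoped port (port names and datatypes are illustrative):

const noflo = require('noflo');

exports.getComponent = () => {
  const c = new noflo.Component();
  // Configuration applies to all flows, so opt out of scope isolation
  c.inPorts.add('config', { datatype: 'object', scoped: false });
  c.inPorts.add('in', { datatype: 'all' });
  c.outPorts.add('out', { datatype: 'all' });
  c.process((input, output) => {
    if (!input.hasData('config', 'in')) { return; }
    const [config, data] = input.getData('config', 'in');
    output.sendDone({ out: { config, data } });
  });
  return c;
};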

asCallback and async/await

noflo.asCallback provides an easy way to expose NoFlo graphs to normal JavaScript consumers. The produced function uses the standard Node.js callback mechanism, meaning that you can easily make it return promises with Node.js util.promisify or Bluebird. After this your NoFlo graph can be run via normal async/await.
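
For instance (a sketch; the graph name is hypothetical), on Node.js 8 this can look like:

const noflo = require('noflo');
const { promisify } = require('util');

// asCallback produces a Node.js-style function, which promisify
// then turns into a Promise-returning one
const runGraph = promisify(noflo.asCallback('myproject/Pipeline'));

async function main() {
  const out = await runGraph({ in: 'some data' });
  console.log(out);
}

main();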

Component libraries

There are hundreds of ready-made NoFlo components available on NPM. By now, most of these have been adapted to work with NoFlo 0.8.

Once 1.0 ships, we’ll try to update all of them to run with it as quickly as possible. In the meantime, it is possible to use npm shrinkwrap to force them to depend on NoFlo 1.0.

If you’re relying on a library that uses deprecated APIs, or hasn’t otherwise been updated yet, please file an issue in the GitHub repo of that library.

This pull request for noflo-gravatar is a great example of how to implement all the modernization recommendations below in an existing component library.

Recommendations for new projects

This post has mostly covered how to adapt existing NoFlo projects for 1.0. How about new projects? Here are some recommendations:

  • While NoFlo projects have traditionally been written in CoffeeScript, for new projects we recommend using ES6. In particular, follow the AirBnB ES6 guidelines
  • Use fbp-spec for test automation
  • Use NPM scripts instead of Grunt for building and testing
  • Make browser builds with webpack utilizing noflo-component-loader
  • Use Process API when writing components
  • If you expose any library functionality, provide an index file using noflo.asCallback for non-NoFlo consumers

The BIG IoT Node.js bridge is a recent project that follows these guidelines if you want to see an example in action.

There is also a project tutorial available on the NoFlo website.


Building an IoT dashboard with NASA Open MCT

Permalink - Posted on 2017-10-05 00:00

One important aspect of any Internet of Things setup is being able to collect and visualize data for analysis. Seeing trends in sensor readings over time can be useful for identifying problems, and for coming up with new ways to use the data.

We wanted an easy solution for this for the c-base IoT setup. Since the c-base backstory is that of a crashed space station, using space technology for this made sense.

OpenMCT view on c-base

NASA Open MCT is a framework for building web-based mission control tools and dashboards that they’ve released as open source. It is intended for bringing together tools and both historical and real-time data, as can be seen in their Mars Science Laboratory dashboard demo.

c-beam telemetry server

As a dashboard framework, Open MCT doesn’t really come with batteries included. You get a bunch of widgets and library functionality, but out of the box there is no integration with data sources.

However, they do provide a tutorial project for integrating data sources. We started with that, and built the cbeam-telemetry-server project, which gives a very easy way to integrate Open MCT with an existing IoT setup.

With the c-beam telemetry server we combine Open MCT with the InfluxDB timeseries database and the MQTT messaging bus. This gives a “turnkey” setup for persisting and visualizing IoT information.

Getting started

The first step is to install the c-beam telemetry server. If you want to do a manual setup, first install an MQTT broker, InfluxDB, and Node.js. Optionally you can also install CouchDB for sharing custom dashboard layouts between users.

Then just clone the c-beam telemetry server repo:

$ git clone https://github.com/c-base/cbeam-telemetry-server.git

Install the dependencies and build Open MCT with:

$ npm install

Now you should be able to start the service with:

$ npm start

Running with Docker

There is also an easier way to get going: we provide pre-built Docker images of the c-beam telemetry server for both x86 and ARM.

There are also docker-compose configuration files for both environments. To install and start the whole service with all its dependencies, grab the docker-compose.yml file (or the Raspberry Pi 3 version) and start with:

$ docker-compose up -d

We’re building these images as part of our continuous integration pipeline (ARM build with this recipe), so they should always be reasonably up-to-date.

Configuring your data

The next step is to create a JavaScript configuration file for your Open MCT. This is where you need to provide a “dictionary” listing all data you want your dashboard to track.

Data sets are configured like the following (configuring a temperature reading tracked for the 2nd floor):

var floor2 = new app.Dictionary('2nd floor', 'floor2');
floor2.addMeasurement('temperature', 'floor2_temperature', [
  {
    units: 'degrees',
    format: 'float'
  }
], {
  topic: 'bitraf/temperature/1'
});

You can have multiple dictionaries in the same Open MCT installation, allowing you to group related data sets. Each measurement needs to have a name and a unit.

Getting data in

In the example above we also supply an MQTT topic to read the measurement from. Now sending data to the dashboard is as easy as writing numbers to that MQTT topic. On the command line that would be done with:

$ mosquitto_pub -t bitraf/temperature/1 -m 27.3

If you were running the telemetry server when you sent that message, you should’ve seen it appear in the appropriate dashboard.

Bitraf temperature graph with Open MCT

There are MQTT libraries available for most programming languages, making it easy to connect existing systems with this dashboard.
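
For instance, with the Node.js mqtt package the same reading could be sent programmatically (a minimal sketch, assuming a broker on localhost):

const mqtt = require('mqtt');

// Publish one temperature reading to the topic configured
// in the dictionary above
const client = mqtt.connect('mqtt://localhost');
client.on('connect', () => {
  client.publish('bitraf/temperature/1', '27.3', () => client.end());
});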

The telemetry server is also compatible with our MsgFlo framework, meaning that you can also configure the connections between your data sources and Open MCT visually in Flowhub.

This makes it possible to utilize the existing MsgFlo libraries for implementing data sources. For example, with msgflo-arduino you can transmit sensor data from Tiva-C or NodeMCU microcontrollers to the dashboard.

Status and how you can help

The c-beam telemetry server is currently in production use in a couple of hackerspaces, and seems to run quite happily.

We’d love to get feedback from other deployments!

If you’d like to help with the project, here are a couple of areas that would be great:

  • Adding tests to the project
  • Implementing downsampling of historical data
  • Figuring out ways to control IoT devices via the dashboard (so, to write to MQTT instead of just reading)

Please file issues or make pull requests to the repository.


Flowhub IoT hack weekend at c-base: buttons, sensors, the Big Switch

Permalink - Posted on 2017-07-11 00:00

Last weekend we held the c-base IoT hack weekend, focused on the Flowhub IoT platform. This was a continuation of the workshop we organized at the Bitraf makerspace a week earlier. Same tools and technologies, but slightly different focus areas.

c-base is one of the world’s oldest hackerspaces and a crashed space station under Berlin. It is also one of the earliest users of MsgFlo with quite a lot of devices connected via MQTT.

Hack weekend debriefing

Hack weekend

Just like at Bitraf, the workshop aimed to add new IoT capabilities to c-base, as well as to increase the number of members who know how to make the station’s setup do new things. For this, we used three primary tools:

Internet of Things

The workshop started on Friday evening, after a lecture on nuclear pulse propulsion ended in the main hall. We continued all the way to late Sunday evening, with some sleep breaks in between. There is something about c-base that makes you want to work there at night.

Testing a humidity sensor

By Sunday evening, we had built and deployed 15 connected IoT devices, with five additional ones pretty far in development. You can find the source code in the c-flo repository.

Idea wall

Sensor boxes

Quite a lot of c-base was already instrumented when we started the workshop. We had details on electricity consumption, internet traffic, and more. But one thing we didn’t have was information on the physical environment at the station. To solve this, we decided to build a set of sensor boxes that we could deploy in different areas of the hackerspace.

Building sensors

The capabilities shared by all the sensor boxes we deployed were:

  • Temperature
  • Humidity
  • Motion (via passive infrared)

For some areas of interest we provided some additional sensors:

  • Sound level (for the workshop)
  • Light level (for c-lab)
  • Carbon dioxide
  • Door open/closed
  • Gravity

Workshop sensor on a breadboard

We found a set of nice little electrical boxes that provided a convenient housing for these sensor boxes. This way we were able to mount them in proper places quickly. This should also protect them from dust and other elements to some degree.

Installed weltenbaulab sensor

The Big Switch

The lights of the c-base main hall are controllable via MsgFlo, and we have a system called farbgeber to produce pleasing color schemes for any given time.

However, when there are events we need to enable manual control of all lights and sound. To make this “MsgFlo vs. IP lounge” control question clearer, we built a Big Switch to decide which controls the lights:

Big Switch in action

The switch is an old electric mains switch from an office building. It makes a satisfying sound when you turn it, and is big enough that you can see which way the setting is from across the room.

To complement the Big Switch we also added a “c-boom” button to trigger the disco mode in the main hall:

c-boom button

Info screens

One part of the IoT setup was to make statistics and announcements about c-base visible in different areas of the station. We did this by rolling out a set of displays with Raspberry Pi 3s connected to the MsgFlo MQTT environment.

Info screens ready for installing

The announcements shown on the screens range from mission critical information like station power consumption or whether the bar is open, to more fictional ones like the NoFlo-powered space station announcements.

Air lock

We also built an Android version of the info display software, which enabled deploying screens using some old donated tablets.

Info screen tablet

Conclusions

This was another successful workshop. Participants got to do new things, and we got lots of new IoT infrastructure installed around c-base. The Flowhub graph is definitely starting to look populated:

c-base is a graph

We also deployed NASA OpenMCT so that we get a nice overview on the station status. Our telemetry server provides MsgFlo participants that receive data via MQTT, store it in InfluxDB, and then visualize it on the dashboard:

OpenMCT view on c-base

All the c-base IoT software is available on GitHub:


If you’d like to have a similar IoT workshop at your company, we’re happy to organize one. Get in touch!


Flowhub IoT workshop at Bitraf: sensors, access control, and more

Permalink - Posted on 2017-07-04 00:00

I just got back to Berlin from the Bitraf IoT hackathon we organized in Oslo, Norway. This hackathon was the first of two IoT workshops around MsgFlo and Flowhub IoT. The second will be held at c-base in Berlin this coming weekend.

Bitraf and the existing IoT setup

Bitraf is a large non-profit makerspace in the center of Oslo. It provides co-working facilities, as well as labs and a large selection of computer controlled tools for building things. Members have 24/7 access to the space, and are provided with everything needed for CNC milling, laser cutting, 3D-printing and more.

The space uses the Flowhub IoT stack of MsgFlo and Mosquitto for business-critical things like the door locks that members can open with their smartphone.

Bitraf lock system

In addition to access control, they also had various environmental sensors available on the MQTT network.

With the workshop, our aim was to utilize these existing things more, as well as to add new IoT capabilities. And of course to increase the number of Bitraf members with the knowledge to work with the MsgFlo IoT setup.

Preparations

Being a makerspace, Bitraf already had everything needed for the physical side of the workshop — tons of sensors, WiFi-enabled microcontrollers, tools for building cases and mounting solutions. So the workshop preparations mostly focused on the software side of things.

The primary tools for the workshop were:

To help visualize the data coming from the sensors people were building, I integrated the NASA OpenMCT dashboard with MsgFlo and the InfluxDB time series database. This setup is available as the cbeam-telemetry-server project.

OpenMCT at Bitraf

This gave us a way to send data from any interesting sensors in the IoT network to a dashboard and visualize it. Down the line the persisted data can also be interesting for further analysis or machine learning.

Kick-off session

We started the workshop with a quick intro session about Flowhub, MsgFlo, and MQTT development. There is unfortunately no video, but the slides are available:

After the intro, we did a round of all attendees to see what skills people already had, and what they were interested in learning. Then we started collecting ideas on what to work on.

Bitraf IoT ideas

People picked their ideas, and the project work started.

Idea session at Bitraf IoT

I’d like to highlight a couple of the projects.

New sensors for the makerspace

Teams at work

Building new sensors was a major part of the workshop. There were several projects, all built on top of msgflo-arduino and the ESP8266 microcontroller:

Working on a motion sensor

There was also a project to automatically open and close windows, but this one didn’t get completed over the weekend. You can follow the progress in the altF4 GitHub repo.

Tool locking

All hackerspaces have the problem that people borrow tools and then don’t return them when finished. This means that the next person needing the tool will have to spend time searching for it.

To solve this, the team designed a system that enables tools to be locked to a wall, with a web interface where members can “check out” a tool they want to use. This way the system constantly knows which tools are in their right places, which are in use, and by whom.

You can see the tool lock system in action in this demo video:

Source code and schematics: https://github.com/einsmein/bitraf-thelock.

After the hackathon

Before my flight out, Jon and I sat down to review how things went. In general, I think it is clear the event was a success — people got to learn and try new things, and all projects except one were completed during the two days.

Our unofficial goal was to double the number of nodes in the Bitraf Flowhub graph, and I think we succeeded in this:

Bitraf as a graph

Here are a couple of comments from the attendees:

Really fun and informative. The development pipeline also seems complete. Made it a lot easier for beginner to get started.

this was a very fantastic hackathon! Lots of interesting things to learn, very enthusiastic participants, great stewardship and we actually got quite a few projects finished. Well done everbody.

In general the development tools we provided worked well. Everybody was able to run the full Flowhub IoT environment on their own machines using the Docker setup we provided. And apart from a couple of corner cases, msgflo-arduino was easy to get going on the NodeMCUs.

With these two, everybody could easily wire up some sensors and see their data in both Flowhub and the OpenMCT dashboard. Going from the local setup to production was just a matter of switching the MQTT broker configuration.


If you’d like to have a similar IoT workshop at your company, we’re happy to organize one. Get in touch!


Two hackathons in a week: thoughts on NoFlo and MsgFlo

Permalink - Posted on 2017-06-19 00:00

Last week I participated in two hackathons, events where a group of strangers form a team for two or three days and build a product prototype. In the end all teams pitch their prototypes, and the best ones are given prizes.

Hackathons are typically organized to get feedback from developers on some new API or platform. Sometimes they’re also organized as a recruitment opportunity.

Apart from the free beer and camaraderie, I like going to hackathons since they’re a great way to battle-test the developer tools I build. The time from idea to running prototype is short, and people are used to different ways of working and different toolkits.

If our tools and flow-based programming work as intended, they should be ideal for these kinds of situations.

Minds + Machines hackathon and Electrocute

Minds + Machines hackathon was held on a boat and focused on decarbonizing power and manufacturing industries. The main platform to work with was Predix, GE’s PaaS service.

Team Electrocute

Our project was Electrocute, a machine learning system for forecasting power consumption in a changing climate.

1.5°C is the global warming target set by the Paris Agreement. How will this affect energy consumption? What kind of generator assets should utilities deploy to meet these targets? When and how much renewable energy can be utilized?

The changing climate poses many questions to utilities. With Electrocute’s forecasting suite power companies can have accurate answers, on-demand.

Electrocute forecasts

The system was built with a NoFlo web API server talking over MsgFlo with a Python machine learning backend. We also built a frontend where users could see the energy usage forecasts on a heatmap.

NoFlo-Xpress in action

Unfortunately we didn’t win this one.

Recoding Aviation and Skillport

Recoding Aviation was held at hub:raum and focused on improving the air travel experience through usage of open APIs offered by the various participating airports.

Team Skillport

Skillport was our project to make long layovers more bearable by connecting people who’re stuck at the airport at the same time.

Long layovers suck. But there is ONE thing amazing about them: You are surrounded by highly skilled people with interesting stories from all over the world. It sometimes happens that you meet someone randomly - we all have a story like that. But usually we are too shy and lazy to communicate and see how we could create a valuable interaction. You never know if the other person feels the same.

We built a mobile app that turns airports into a networking, cultural exchange and knowledge sharing hub. Users tell each other through the app that they are available to meet and what value they can bring to an interaction.

The app connected with a J2EE API service that then communicated over MsgFlo with NoFlo microservices doing all the interactions with social and airport APIs. We also did some data enrichment in NoFlo to make smart recommendations on meeting venues.

MsgFlo in action

This time our project went well with the judges and we were selected as the winner of the Life in between airports challenge. I’m looking forward to the helicopter ride over Berlin!

Category winners

Skillport also won a space at hub:raum, so this might not be the last you’ll hear of the project…

Lessons learned

Benefits of a message queue architecture

I’ve written before on why to use message queues for microservices, but that post focused more on the benefits for real-life production usage.

The problems and tasks for a system architecture in a hackathon are different. Since the time is short, you want to enable people to work in parallel as much as possible without stepping on each other’s toes. Since people in the team come from different backgrounds, you want to enable a heterogeneous, polyglot architecture where each developer can use the tools they’re most productive with.

MsgFlo is by its nature very suitable for this. Components can be written in any language that supports the message queue used, and we have convenience libraries for many of them. The discovery mechanism makes new microservices appear on the Flowhub graph as soon as they start, enabling services to be wired together quickly.

Mock early, mock often

Mocks are a useful way to provide a microservice to the other team members even before the real implementation is ready.

For example in the GE Predix hackathon, we knew the machine learning team would need quite a bit of time to build their model. Until that point we ran their microservice with a simple msgflo-python component that just gave random() as the forecast.

This way everybody else was able to work with the real interface from the get-go. When the learning model was ready we just replaced that Python service, and everything was live.
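
As an illustration of how small such a mock can be, here is a sketch with hypothetical topic names (not the actual hackathon code), written with the Node.js mqtt package:

const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://localhost');
client.on('connect', () => client.subscribe('forecast/requests'));

// Answer every forecast request with a random value; consumers talk
// to the same queue interface the real model will use later
client.on('message', (topic, message) => {
  client.publish('forecast/responses', JSON.stringify(Math.random()));
});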

Mocks can also be useful in situations where you have a misbehaving third-party API.

Don’t forget tests

While shooting for full test coverage is probably not realistic within the time constraints of a hackathon, it still makes sense to have at least some “happy path” tests. When multiple developers are each building a different part of the service, interface tests serve a dual purpose:

  • They show the other team members how to use your service
  • They verify that your service actually does what it is supposed to

And if you’re using a continuous integration tool like Travis, the tests will help you catch any breakages quickly, and also ensure the services work on a clean installation.

For a message queue architecture, fbp-spec is a great tool for writing and running these interface tests.

Talk with the API providers

The reason API and platform providers organize these events is to get feedback. As a developer that works with tons of different APIs, this is a great opportunity to make sure your ideas for improvement are heard.

On the flip side, this usually also means the APIs are at a pretty early stage, and you may be the first one using them in a real-world project. When the inevitable bugs arise, it is good to have a channel of communication open with the API provider on site so you can get them resolved or worked around quickly.

Room for improvement

The downside of the NoFlo and MsgFlo stack is that there is still quite a bit of a learning curve. NoFlo documentation is now in a reasonable place, but with Flowhub and MsgFlo we have tons of work ahead on improving the onboarding experience.

Right now it is easy to work with if somebody sets it up properly first, but getting there is a bit tricky. Fixing this will be crucial for enabling others to benefit from these tools as well.