
Hi, I’m Nicolas

... and this is my weblog.

A feed by Nicolas Perriault



Managerial grab bag™

Permalink - Posted on 2019-09-06 00:00

A bit fed up with Twitter, its ephemeral squabbles, the ambient negativity and the white noise it has been generating these past few months, I decided to bring my RSS aggregator back into service in order to diversify my daily sources of information.

What a slap in the face! Tons of interesting content had been passing me by for years, drowned under the horde of food photos, one-off outrage, cheap spectacle, snap judgments and not-always-funny gifs of the anxious blue network. You tweet well, sure. But you tweet TOO MUCH.

Incidentally, I’ve completely given up on the idea of sharing my insignificant personal readings and musings over there. Adding noise to the noise? Do unto others, and all that.

I’m taking the opportunity to resurrect this blog, and I’ll try to force myself to publish here a bit more regularly, in particular to share a few entries from my personal reading list that I have humbly deemed interesting, as I used to do… damn, ten years ago already.

The creosote, that high-performing manager destroying your company

Obviously the ideal case is a high-performing manager who fits the culture. The other easy case is the one who is neither high-performing nor aligned with the culture. The one who is aligned but not high-performing can be trained and encouraged.

The one who, on the other hand, is high-performing but not aligned poses a real dilemma.

The creosote, that high-performing manager destroying your company

(via mathieu)

To tell you the truth, I’ve never been comfortable with the very notion of management, a word behind which, depending on who you’re talking to and their values, get conflated the productivist handling of human material and the support for organisation, collaboration, documentation and communication (sometimes also called facilitation, which I rather like).

In that respect, the creosote most likely isn’t making anyone’s job easier.

Why is it that, so often, nobody wants to set their organisation in motion?

Pablo treats us to a series of three posts on the roots of resistance to change.

If you work at a company and are running into organisational, alignment or change-management problems (yes, I’m casting a wide net here), it’s a nourishing read.

It’s far too dense and rich for me to pull out a single illustrative quote. All right, just this one as a sneaky teaser:

It wasn’t supposed to happen.

It wasn’t supposed to happen.

That’s what the Neanderthals must have said, and the Mayas, the Indians, the traders in 1929 or 2008, the Kodak or Yahoo teams, the French TV channels facing Netflix, my bank with N26, the taxis with Uber, my baker about his new Chinese neighbour who makes bread without an oven, that woman when that man left her, or vice versa.

Why is it that, so often, nobody wants to set their organisation in motion?

tumbleweed


Service announcement

Permalink - Posted on 2019-09-01 00:00

I’ve just migrated this blog to Jekyll and the very pretty Hydeout theme; I hope I haven’t broken too much.

Then again, it’s not as if this abandoned corner of the web were under close surveillance anyway, right?


Stateful components in Elm

Permalink - Posted on 2019-07-16 00:00

It’s often claimed that Elm developers should avoid thinking of their views as stateful components. While this is indeed sound general design advice, sometimes you may want to make your views reusable (e.g. across pages or projects), and if they come with state… you end up copying and pasting a lot of things.

We recently published elm-daterange-picker, a date range picker written in Elm. It was the perfect occasion to investigate what a reasonable API for a reusable stateful view component would look like.

app demo

Many component/widget-oriented Elm packages feature a rather raw Elm Architecture (TEA) API, directly exposing Model, Msg(..), init, update and view, so you can basically import what defines an actual application and embed it within your own application.

funny meme

With these, you usually end up writing things like this:

import Counter
import Html exposing (Html, div, text)


type alias Model =
    { counter : Counter.Model
    , value : Maybe Int
    }


type Msg
    = CounterMsg Counter.Msg


init : () -> ( Model, Cmd Msg )
init _ =
    ( { counter = Counter.init, value = Nothing }
    , Cmd.none
    )


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        CounterMsg counterMsg ->
            let
                ( newCounterModel, newCounterCommands ) =
                    Counter.update counterMsg model.counter
            in
            ( { model
                | counter = newCounterModel
                , value =
                    case counterMsg of
                        Counter.Apply value ->
                            Just value

                        _ ->
                            Nothing
              }
            , newCounterCommands |> Cmd.map CounterMsg
            )


view : Model -> Html Msg
view model =
    div []
        [ Counter.view model.counter
            |> Html.map CounterMsg
        , text (model.value |> Maybe.map String.fromInt |> Maybe.withDefault "")
        ]

This certainly works, but let’s be frank for a minute and admit this is super verbose and not very developer friendly:

  • You need to Cmd.map and Html.map here and there
  • You need to pattern match Counter.Msg to intercept whatever event interests you…
  • … meaning Counter exposes all Msgs, which are implementation details you now rely on.

There’s another way, which Evan explained in his now deprecated elm-sortable-table package. Among the many good points he makes, one idea struck me as brilliantly simple yet effective for simplifying the API design of such stateful view components:

State updates can be managed right from event handlers!

Let’s imagine a simple counter; what if, when clicking the increment button, instead of calling onClick with some Increment message, we called a user-provided one with the new counter state updated accordingly?

-- Counter.elm
view : (Int -> msg) -> Int -> Html msg
view toMsg counter =
    button [ onClick (toMsg (counter + 1)) ]
        [ text "increment" ]

Or if you want to use an opaque type, which is an excellent idea for maintaining the smallest API surface area:

-- Counter.elm
type State
    = State Int

view : (State -> msg) -> State -> Html msg
view toMsg (State value) =
    button [ onClick (toMsg (State (value + 1))) ]
        [ text "increment" ]

Note that as we’re dealing with a counter state, we didn’t bother using anything other than a simple Int to represent it. But you could of course use a record or anything you want, as sketched below.
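
For example, here is a hypothetical sketch (not the actual elm-daterange-picker API; the count and step fields are made up) of the same pattern with a record wrapped inside the opaque type:

-- Sketch: opaque type wrapping a record (hypothetical fields)
import Html exposing (Html, button, text)
import Html.Events exposing (onClick)

type State
    = State { count : Int, step : Int }

view : (State -> msg) -> State -> Html msg
view toMsg (State s) =
    button [ onClick (toMsg (State { s | count = s.count + s.step })) ]
        [ text "increment" ]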

Handling internal state updates can be as simple as creating internal, unexposed Msg and update functions:

-- Counter.elm
type State
    = State Int

type Msg
    = Dec
    | Inc

update : Msg -> Int -> Int
update msg value =
    case msg of
        Dec ->
            value - 1

        Inc ->
            value + 1

view : (State -> msg) -> State -> Html msg
view toMsg (State value) =
    div []
        [ button [ onClick (toMsg (State (update Dec value))) ]
            [ text "decrement" ]
        , button [ onClick (toMsg (State (update Inc value))) ]
            [ text "increment" ]
        ]

We should also expose helpers to retrieve (or set) values from the opaque State type:

-- Counter.elm
getValue : State -> Int
getValue (State value) =
    value
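
The consumer example below also calls Counter.init, which the snippets above don’t show; here is a minimal sketch of what the module could additionally expose (the setValue helper is purely hypothetical):

-- Counter.elm (sketch)
init : State
init =
    State 0

-- hypothetical setter, if consumers ever need to overwrite the value
setValue : Int -> State -> State
setValue value _ =
    State value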

So for instance, to use this Counter component in your own application, you just have to write this:

import Counter
import Html exposing (Html, div, text)

type alias Model =
    { counter : Counter.State
    , value : Maybe Int
    }


type Msg
    = CounterChanged Counter.State


init : () -> ( Model, Cmd Msg )
init _ =
    ( { counter = Counter.init, value = Nothing }
    , Cmd.none
    )


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        CounterChanged state ->
            ( { model | counter = state, value = Just (Counter.getValue state) }
            , Cmd.none
            )


view : Model -> Html Msg
view model =
    div []
        [ Counter.view CounterChanged model.counter
        , text (model.value |> Maybe.map String.fromInt |> Maybe.withDefault "")
        ]

Notice how our update function is dramatically simpler to write and to understand. Also, there’s no need to import (and rely on) much from the package module, which makes it both easier to consume and to maintain, thanks to the opaque State type encapsulating implementation details.

Of course a counter alone wouldn’t be worth creating a package for, but it hopefully highlights the concept. Don’t hesitate to read elm-daterange-picker’s source code and demo code to see a real-world application of this design principle.


Chaining HTTP requests in Elm

Permalink - Posted on 2018-02-05 00:00

Preliminary note: in this article we’ll use Elm decoders, tasks, results and leverage the Elm Architecture. If you’re not comfortable with these concepts, you may want to check their respective documentation.

Sometimes in Elm you struggle with the most basic things.

Especially when you come from a JavaScript background, where chaining HTTP requests is relatively easy thanks to Promises. Here’s a real-world example leveraging the Github public API, where we fetch a list of Github events, pick the first one and query some user information from its unique identifier.

The first request uses the https://api.github.com/events endpoint, and the retrieved JSON looks like this:

[
    {
        "id": "987654321",
        "type": "ForkEvent",
        "actor": {
            "id": 1234567,
            "login": "foobar",
        }
    },
]

I’m purposely omitting a lot of other properties from the records here, for brevity.

The second request we need to do is on the https://api.github.com/users/{login} endpoint, and its body looks like this:

{
    "id": 1234567,
    "login": "foobar",
    "name": "Foo Bar",
}

Again, I’m just displaying a few fields from the actual JSON body here.

So we basically want:

  • from a list of events, to pick the first one if any,
  • then pick its actor.login property,
  • query the user details endpoint using this value,
  • extract the user real name for that account.

Using JavaScript, that would look like this:

fetch("https://api.github.com/events")
    .then(responseA => {
        return responseA.json()
    })
    .then(events => {
        if (events.length == 0) {
            throw "No events."
        }
        const { actor : { login } } = events[0]
        return fetch(`https://api.github.com/users/${login}`)
    })
    .then(responseB => {
        return responseB.json()
    })
    .then(user => {
        if (!user.name) {
            console.log("unspecified")
        } else {
            console.log(user.name)
        }
    })
    .catch(err => {
        console.error(err)
    })

It would get a little fancier using async/await:

try {
    const responseA = await fetch("https://api.github.com/events")
    const events = await responseA.json()
    if (events.length == 0) {
        throw "No events."
    }
    const { actor: { login } } = events[0]
    const responseB = await fetch(`https://api.github.com/users/${login}`)
    const user = await responseB.json()
    if (!user.name) {
        console.log("unspecified")
    } else {
        console.log(user.name)
    }
} catch (err) {
    console.error(err)
}

This is already complicated code to read and understand, and it’s tricky to do in Elm as well. Let’s see how to achieve the same thing while understanding exactly what we’re doing (we’ve all blindly copied and pasted code in the past, don’t deny it).

First, let’s write the two requests we need; one for fetching the list of events, the second to obtain a given user’s details from her login:

import Http
import Json.Decode as Decode

eventsRequest : Http.Request (List String)
eventsRequest =
    Http.get "https://api.github.com/events"
        (Decode.list (Decode.at [ "actor", "login" ] Decode.string))

nameRequest : String -> Http.Request String
nameRequest login =
    Http.get ("https://api.github.com/users/" ++ login)
        (Decode.at [ "name" ]
            (Decode.oneOf
                [ Decode.string
                , Decode.null "unspecified"
                ]
            )
        )

These two functions return an Http.Request parametrized with the type of data they’ll retrieve and decode from the JSON body of their respective responses. nameRequest handles the case where Github users haven’t entered their full name yet, so the name field might be null; as in the JavaScript version, we then default to "unspecified".
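
As a quick sanity check (a sketch, assuming the same Json.Decode module used above), here is how that null branch behaves:

import Json.Decode as Decode

nameDecoder : Decode.Decoder String
nameDecoder =
    Decode.at [ "name" ]
        (Decode.oneOf [ Decode.string, Decode.null "unspecified" ])

-- Decode.decodeString nameDecoder """{"name": "Foo Bar"}""" == Ok "Foo Bar"
-- Decode.decodeString nameDecoder """{"name": null}""" == Ok "unspecified"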

That’s good, but now we need to execute and chain these two requests, the second one depending on the result of the first, from which we retrieve the actor.login value of the event object.

Elm is a pure language, meaning you can’t have side effects in your functions (a side effect is when a function alters things outside of its scope and relies on those things: an HTTP request is a huge side effect). So your functions must return something that describes a given side effect instead of executing it within the function scope itself. The Elm runtime is in charge of actually performing the side effect, using a Command.

In Elm, you usually use a Task to describe side effects. Tasks may succeed or fail (like Promises do in JavaScript), but they need to be turned into an Elm command to be actually executed.

To quote this excellent post on Tasks:

I find it helpful to think of tasks as if they were shopping lists. A shopping list contains detailed instructions of what should be fetched from the grocery store, but that doesn’t mean the shopping is done. I need to use the list while at the grocery store in order to get an end result

But why do we need to convert a Task into a command, you may ask? Because a command executes a single thing at a time: if you need to perform multiple side effects at once, you need a single task that represents all of these side effects.

So basically:

  1. We first craft Http.Requests,
  2. We turn them into Tasks we can chain,
  3. We turn the resulting Task into a command,
  4. This command is executed by the runtime, and we get a result

The Http package provides Http.toTask to map an Http.Request into a Task. Let’s use that here:

fetchEvents : Task Http.Error (List String)
fetchEvents =
    eventsRequest |> Http.toTask

fetchName : String -> Task Http.Error String
fetchName login =
    nameRequest login |> Http.toTask

I created these two simple functions mostly to focus on their return types: a Task must define an error type and a result type. For example, fetchEvents being an HTTP task, it will receive an Http.Error when the task fails, and a list of strings when it succeeds.

But dealing with HTTP errors in a granular way is out of the scope of this blog post, so in order to keep things as simple and concise as possible, I’m gonna use Task.mapError to turn complex HTTP errors into their string representations:

toHttpTask : Http.Request a -> Task String a
toHttpTask request =
    request
        |> Http.toTask
        |> Task.mapError toString

fetchEvents : Task String (List String)
fetchEvents =
    toHttpTask eventsRequest

fetchName : String -> Task String String
fetchName login =
    toHttpTask (nameRequest login)

Here, toHttpTask is a helper turning an Http.Request into a Task, transforming the Http.Error complex type into a serialized, purely textual version of it: a String.

We’ll also need a function to extract the very first element of a list, if any, as we did in JavaScript using events[0]. Such a function is built into the List core module as List.head. And let’s make this function a Task too, as that will ease chaining everything together and allow us to expose an error message when the list is empty:

pickFirst : List String -> Task String String
pickFirst logins =
    case List.head logins of
        Just login ->
            Task.succeed login

        Nothing ->
            Task.fail "No events."

Note the use of Task.succeed and Task.fail, which are approximately the Elm equivalents of Promise.resolve and Promise.reject: this is how you create tasks that succeed or fail immediately.

So in order to chain all the pieces we have so far, we obviously need glue. And this glue is the Task.andThen function, which chains our tasks in this fancy way:

fetchEvents
    |> Task.andThen pickFirst
    |> Task.andThen fetchName

Neat. But wait. As we mentioned previously, Tasks are descriptions of side effects, not their actual execution. The Task.attempt function will help us do that, by turning a Task into a Command, provided we define a Msg that will be responsible for dealing with the received result:

type Msg
    = Name (Result String String)

Result String String reflects the result of the HTTP request and uses the same type for both the error (a String) and the value (the user’s full name, a String too). Let’s use this Msg with Task.attempt:

fetchEvents
    |> Task.andThen pickFirst
    |> Task.andThen fetchName
    |> Task.attempt Name

Here:

  • We start by fetching all the events,
  • Then if the Task succeeds, we pick the first event,
  • Then if we have one, we fetch the event’s user full name,
  • And we map the future result of this task to the Name message.

The cool thing here is that if anything fails along the chain, the chain stops and the error is propagated down to the Name handler. No need to check errors for each operation! Yes, that looks a lot like how JavaScript Promises’ .catch works.
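
And if you wanted to recover from a failure somewhere in the chain rather than propagate it, Task.onError is the rough equivalent of .catch; a sketch (not used in the rest of this post, and the fallback value is made up):

fetchNameWithFallback : Task String String
fetchNameWithFallback =
    fetchEvents
        |> Task.andThen pickFirst
        |> Task.andThen fetchName
        |> Task.onError (\_ -> Task.succeed "unknown")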

Now, how are we going to execute the resulting command and process the result? We need to set up the Elm Architecture and its good old update function:

module Main exposing (main)

import Html exposing (..)
import Http
import Json.Decode as Decode
import Task exposing (Task)


type alias Model =
    { name : Maybe String
    , error : String
    }

type Msg
    = Name (Result String String)

eventsRequest : Http.Request (List String)
eventsRequest =
    Http.get "https://api.github.com/events"
        (Decode.list (Decode.at [ "actor", "login" ] Decode.string))

nameRequest : String -> Http.Request String
nameRequest login =
    Http.get ("https://api.github.com/users/" ++ login)
        (Decode.at [ "name" ]
            (Decode.oneOf
                [ Decode.string
                , Decode.null "unspecified"
                ]
            )
        )

toHttpTask : Http.Request a -> Task String a
toHttpTask request =
    request
        |> Http.toTask
        |> Task.mapError toString

fetchEvents : Task String (List String)
fetchEvents =
    toHttpTask eventsRequest

fetchName : String -> Task String String
fetchName login =
    toHttpTask (nameRequest login)

pickFirst : List String -> Task String String
pickFirst events =
    case List.head events of
        Just event ->
            Task.succeed event

        Nothing ->
            Task.fail "No events."

init : ( Model, Cmd Msg )
init =
    { name = Nothing, error = "" }
        ! [ fetchEvents
                |> Task.andThen pickFirst
                |> Task.andThen fetchName
                |> Task.attempt Name
          ]

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Name (Ok name) ->
            { model | name = Just name } ! []

        Name (Err error) ->
            { model | error = error } ! []

view : Model -> Html Msg
view model =
    div []
        [ if model.error /= "" then
            div []
                [ h4 [] [ text "Error encountered" ]
                , pre [] [ text model.error ]
                ]
          else
            text ""
        , p [] [ text <| Maybe.withDefault "Fetching..." model.name ]
        ]

main =
    Html.program
        { init = init
        , update = update
        , subscriptions = always Sub.none
        , view = view
        }

That’s certainly more code than the JavaScript example, but don’t forget that the Elm version renders HTML instead of just logging to the console, and that the JavaScript code could be refactored to look a lot like the Elm version. Also, the Elm version is fully typed and safeguarded against unforeseen problems, which makes a huge difference as your application grows.

As always, an Ellie is publicly available so you can play around with the code.


Reduce and the Ferris wheel metaphor

Permalink - Posted on 2018-01-26 00:00

This is a short anecdote about how I approached teaching things to someone else.

I recently had to introduce some Elm concepts to a coworker who had some experience with React and Redux. One of these concepts was List.foldl, a reduction function which exists in many languages, notably as Array#reduce in JavaScript.

The coworker was struggling to understand the whole concept, so I tried a metaphor; I came up with the idea of a Ferris wheel next to a lake, with someone in one of its baskets holding a bucket, filling the basket with water from the lake every time the basket comes back down to the ground.

Gwydion M. Williams - View of Wuxi from Lake Tai

Yeah, I know.

So, as he was staring at me like I was a crazy person, and as I knew he had used React and Redux in the past, I told him it was like the reducer functions he had probably already used.

We started writing a standard Redux reducer in plain JavaScript:

function reducer(state, action) {
    switch(action.type) {
        case "EMPTY": {
            return init
        }
        case "ADD_WATER": {
            return {...state, water: state.water + 1}
        }
        default: {
            return state
        }
    }
}

He was like “oh yeah, I know that”. Good! We could use that function iteratively:

// Step by step state building
const init = {water: 0}
let state = init
state = reducer(state, {type: "ADD_WATER"})
state = reducer(state, {type: "EMPTY"})
state = reducer(state, {type: "ADD_WATER"})
state = reducer(state, {type: "ADD_WATER"})

console.log(state) // {water: 2}

Or using Array#reduce:

// Using Array#reduce and an array of actions
const actions = [
    {type: "ADD_WATER"},
    {type: "EMPTY"},
    {type: "ADD_WATER"},
    {type: "ADD_WATER"},
]

const init = {water: 0}
const state = actions.reduce(reducer, init)
console.log(state) // {water: 2}

So I could use the Ferris wheel metaphor again:

  • state represents the state of the wheel basket (and the quantity of water in it)
  • init is the initial state of the wheel basket (it contains no water yet)
  • actions are the list of operations to proceed each time the basket reaches the ground again (here, filling the basket with water from the lake, sometimes emptying the basket)

For the record, yes, my coworker was still looking at me rather oddly.

We moved on and decided to reimplement the same thing in Elm, using foldl. Its type signature is:

foldl : (a -> b -> b) -> b -> List a -> b

Wow, that looks complicated, especially when you’re new to Elm.

In Elm, type signatures separate each function argument and the return value with an arrow (->); so let’s decompose the one for foldl:

  • (a -> b -> b), the first argument, means we want a function, taking two arguments typed a and b and returning a b. That sounds a lot like our reducer function in JavaScript! If so, a is an action, and b a state.
  • the next argument, typed as b, is the initial state we start reducing our list of actions from.
  • the next argument, List a, is our list of actions.
  • And all this must return a b, hence a new state. We have the exact definition of what we’re after.

Actually, our own use of foldl would have been much more obvious if we had initially seen it this way, replacing a with Action and b with State:

foldl : (Action -> State -> State) -> State -> List Action -> State

Note: if you’re still struggling with these as and bs, you should probably read a little about Generic Types.
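
Just to see those type variables get filled in with something concrete (a throwaway example, not part of the original lesson), summing a list of integers uses foldl with both a and b being Int:

total : Int
total =
    List.foldl (+) 0 [ 1, 2, 3 ]

-- total == 6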

Our resulting minimalistic implementation was:

import Html exposing (div, text)

type Action
    = AddWater
    | Empty

type alias State =
    { water : Int }

init : State
init =
    { water = 0 }

actions : List Action
actions =
    [ AddWater
    , Empty
    , AddWater
    , AddWater
    ]

reducer : Action -> State -> State
reducer action state =
    case action of
        Empty ->
            init

        AddWater ->
            { state | water = state.water + 1 }

main =
    div []
        [ -- Step by step state building, renders { water = 2 }
          init
            |> reducer AddWater
            |> reducer Empty
            |> reducer AddWater
            |> reducer AddWater
            |> toString >> text

        -- Using List.foldl, renders { water = 2 }
        , List.foldl reducer init actions
            |> toString >> text
        ]

We quickly drafted this on Ellie. It’s not graphically impressive, but it works.

That was it: it became much more obvious how to map things my coworker already knew onto something new to him, when in fact it was exactly the same thing, expressed slightly differently from a syntax perspective.

We also noted that the Elm Architecture and its traditional update function are basically a projection of foldl, with Action usually named Msg and State named Model.
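
In other words (a sketch, leaving commands aside since a real TEA update returns ( Model, Cmd Msg )), replaying a list of messages over an initial model is literally a fold:

-- Sketch: the Elm Architecture update function seen as a reducer
replay : (msg -> model -> model) -> model -> List msg -> model
replay update initialModel msgs =
    List.foldl update initialModel msgs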

The funny thing being, Redux design itself was initially inspired by the Elm Architecture!

In conclusion, here are quick takeaways when facing something difficult to understand:

  • start with finding a metaphor, even a silly one; that helps summarize the problem, express your goal and ensure you get the big picture of it;
  • slice the problem down to the smallest understandable chunks you can, then move to the next larger one when you’re done;
  • always try to map what you’re trying to learn to things you’ve already learned; past experiences are good tools for that.


A short introduction to Elm

Permalink - Posted on 2017-01-25 00:00

A short presentation about Elm I gave at MontpellierJS.

Image courtesy of Fabrice Bentz


From OSX to Ubuntu

Permalink - Posted on 2017-01-08 00:00

A year ago I decided to switch from OSX to Ubuntu, so now is a good time for a little retrospective. TL;DR: Linux now offers a pleasant desktop user experience and there’s no way back for me.

"Thou Shall Migrate", says a funny penguin. Credits: Hamish Irvine

I was a Linux user 10 years ago but moved to being a Mac one, mainly because I was tired of maintaining an often broken system (hello xorg.conf), and Apple had quite an appealing offer at the time: a well-maintained Unix platform matching beautiful hardware, sought-after UX, access to editor apps like Photoshop and MS Office, so best of both worlds.

To be frank, I was a happy Apple user in the early years, then the shine started to fade: messing up your system after upgrades became more frequent, Apple apps grew more and more bloated and intrusive (hello iTunes), the UX started turning Kafkaesque at times, and too often I found myself tweaking and repairing stuff from the terminal…

The trigger was pulled when Apple announced their 2015 MacBook line, with strange connectivity decisions like having a single port for everything and relying on dongles: meh. If even their top-notch hardware was starting to turn weird, it was probably time to look elsewhere. And now that I see their latest MBP line with the Esc key removed (so you can’t escape anymore, haha), I’m rather comforted in my decision.

Meanwhile, since I’ve joined Mozilla and the Storage team, I could see many colleagues happily using Linux, and it didn’t feel like they were struggling with anything particular. Oddly enough, it seemed they were capable of working efficiently, both for professional and personal stuff.

I finally took the plunge and ordered a Lenovo X1 Carbon, then started my journey to being a Linux user again.

Choosing a distro

I didn’t debate this for days, I installed the latest available Ubuntu right away as it was the distribution I was using before moving to OSX (I even contributed to a book on it!). I was used to Debian-based systems and knew Ubuntu was still acclaimed for its ease of use and great hardware support. I wasn’t disappointed as on the X1 everything was recognized and operational right after the installation, including wifi, bluetooth and external display.

I was greeted by the Unity desktop, which was disorienting as I had been a Gnome user back in the day; to the point that I installed the latter, though in its version 3 flavor, which was also new to me.

I like Gnome 3. It’s simple, configurable and made me feel productive fast. Though out of bad luck, or for lack of skills and time to spend investigating, a few things were not working properly: fonts were huge in some apps and normal in others, the external display couldn’t be configured with a different resolution and dpi ratio than my laptop’s, things like that. After a few weeks I switched back to Unity, and I’m still happily using it today, as it has nicely solved all the issues I had with Gnome (which I still like a lot, though).

The pain points when coming from OSX

Let’s be honest, the French layout of the Apple keyboard is utter crap, but as with many things involving muscle memory, once you’re used to it, it’s a pain in the ass to readapt to anything else. I struggled for something like three weeks fighting old habits in this area, then eventually got through.

Lastly, a bunch of OSX apps are not available on Linux, so you have to find their equivalents, when they exist. The good news is, most often they do.

The Web is your App Store

What has also changed in the last ten years is the explosion of the Web as an application platform. While LibreOffice and The Gimp are decent alternatives to MS Office and Photoshop, you now have access to many similarly scoped Web apps like Google Docs and Pixlr, provided you’re connected to the Internet. Just make sure to use a modern Web browser like Firefox, which luckily ships by default in Ubuntu.

For example I use IRCCloud for IRC, as Mozilla has a corporate account there. The cool thing is it acts as a bouncer so it keeps track of messages when you go offline, and has a nice Android app which syncs.

When the Web isn’t enough

There are obviously lots of things Web apps can’t do, like searching your local files or updating your system. And let’s admit that for specific tasks, native apps are sometimes still more efficient and better integrated (by definition) than what the Web has to offer.

I was a hardcore Alfred.app user on OSX. On Linux there’s no strict equivalent, though Unity Dash, Albert or synapse can cover most of its coolness.

Unity Dash in action
synapse in action

If you use the text shortcuts feature of Alfred (or if you use TextExpander), you might be interested in AutoKey as well.

File manager

I couldn’t spot any obvious usability difference between Nautilus and the OSX Finder, but I mostly use their basic features anyway.

Nautilus in action

To emulate Finder’s QuickLook, sushi does a proper job.

Code editors

The switch shouldn’t be too hard as most popular editors are available on Linux: Sublime Text, Atom, VSCode and obviously vim and emacs.

Terminal

I was using iTerm2 on OSX, so I was happy to find out about Terminator, which also supports tiling & split panes.

Task switching, exposé

Unity provides a classic alt+tab switcher and an Exposé-style overview, just like OSX.

Exposé in Unity

Photography

I’ve been a super hardcore Lightroom user and lover, but eventually found Darktable and am perfectly happy with it now. Its ergonomics take a little while to get used to though.

DarkTable in action

If you want to get an idea of what kind of results it can produce, take a look at my NYC gallery on 500px, fwiw all the pictures have been processed using DarkTable.

Sample picture processed with DarkTable

Disclaimer: if you find these pictures boring or ugly, it’s probably me and not DarkTable.

For things like cropping & scaling images, The Gimp does an okay job.

For organizing & managing a gallery, ShotWell seems to be what many people use nowadays, though I’m personally happy just using my file manager somehow.

Games

Ah the good old days when you only had Gnome Solitaire to have a little fun on Linux. Nowadays even Steam is available for Linux, with more and more titles available. That should get you covered for a little while.

If it doesn’t, PlayOnLinux allows running Windows games on Wine. Most of the time, it works just fine.

Battle.net via PlayOnLinux

Music & Sound

I’ve been a Spotify user & customer for years, and am very happy with the Linux version of its client.

The Spotify Linux client

I’m using a Bose Mini SoundLink over bluetooth and never had any issues pairing and using it. To be 100% honest, PulseAudio crashed a few times but the system has most often been able to recover and enable sound again without any specific intervention from me.

By the way, it’s not always easy to switch between audio sources; Sound Switcher Indicator really helps by adding a dedicated menu in the top bar:

The Sound Switcher Indicator in action

Video editing

I’m definitely not an expert in the field, but I sometimes need to quickly craft short movies for friends and family. kdenlive has done the job perfectly for me so far.

Password manager

While studying password managers for work lately, I stumbled upon Enpass, a good equivalent of 1Password, which doesn’t have a Linux version of their app. Enpass has extensions for the most common browsers, and can sync to Dropbox or Owncloud among other cloud services.

Enpass in action

Cloud backup & syncing

I was using Dropbox and CrashPlan on OSX, guess what? I’m using them on Linux too.

A few other niceties

ScreenCloud

ScreenCloud lets you take screenshots, annotate them and export them to different targets like the filesystem or online image hosting providers such as imgur or Dropbox.

ScreenCloud

Clipboard manager

Diodon is a simple yet efficient clipboard manager, exposing a convenient menu in the system top bar.

RedShift

If you know f.lux, RedShift is an alternative to it for Linux. The program will adapt the tint of your displays to the amount of light at this specific time of the day. Recommended.

Caffeine

Caffeine is a status bar application able to temporarily prevent the activation of both the screensaver and the sleep powersaving mode. Most useful when watching movies.

So, is Linux ready for the desktop?

For me, the answer is yes.

Updates

I’ve been asked several questions by email, IRC, twitter and in the HN thread about this post, here are some answers in a random order.

What is the exact model of your laptop?

Lenovo X1 Carbon 3rd Gen.

Do you have issues with acpi/sleep?

No.

How’s battery life?

Obviously worse than a MacBook (where controlled hardware & drivers are heavily optimized for that purpose), but not that bad tbh. I can work for max 5 hours straight, though if I start compiling stuff (hello gecko) it gets really bad.

Does the fingerprint reader work out of the box?

No, I tried to use Fingerprint-GUI but it was so unstable that I removed it. I’m fine typing passphrases anyway.

Did you try Krita? It’s a mix between Photoshop and Paint

That sounds rather ambitious, and I didn’t feel like installing all those KDE/Qt packages just to try it out. From the screenshots I could find online, it looks like a great option though.

There’s a Linux version of f.lux!

Yeah. Also I’ve learned that f.lux was inspired by Redshift and not the other way around. Point taken, thanks.

DarkTable doesn’t do X, Y and Z while Lightroom does

DarkTable is free. Also, its keystone-based perspective correction module is much better than anything I could find for Lightroom.

But yeah, overall Lightroom is way ahead, and if Adobe were kind enough to port it to Linux I’d buy and use it in a heartbeat.

DarkTable can crop and scale images too

Do you often fire up DarkTable to edit a screenshot?

Arch is so much better

Good for you! Diversity is nice.

You said you contributed to a book on Ubuntu, you’re biased towards Apple

Haha, nice try.

What GTK/unity theme are you using?

I’m using Vivacious Dark in its graphite variant.

What side launcher are you using in the screenshots?

It’s the standard Unity one with the icon borders removed.


Kinto, a free alternative to Parse and Firebase

Permalink - Posted on 2016-11-29 00:00

A talk given at Capitole du Libre on November 29, 2016.

At a time when many people are wondering how much control we have over our application data, the Kinto project tries to offer its vision of a generic solution to the problem.

Once upon a time there was your umpteenth idea for a revolutionary app project uberizing personal disruption, with the never-ending questions that come with it:

  • Where to store the data?
  • How to maintain it securely?
  • How to synchronize and replicate it?

Many services exist in the cloud, but few answer all of these questions satisfactorily. Kinto is a self-hostable JSON database with an easy-to-use REST API, allowing data to be administered and synchronized securely.

The video of the talk may magically reappear here one day.


Cancellation, a sucker’s trap

Permalink - Posted on 2013-12-30 00:00

I’ve had this conversation several times now: what are the acceptable limits when it comes to securing a user’s loyalty?

Let’s take the example of a Mediapart subscription.

While you can easily subscribe via a simple online form, to unsubscribe, boom, postal mail is required (it reassures me that I’m not the only embittered one here). So no online cancellation, for an online newspaper. Obviously, this hurdle slows down quite a few procrastinators, who generally console themselves with the thought that they’re contributing to the funding of an independent press outlet for 9 euros a month (otherwise, the Red Cross isn’t bad either, you know).

Some will understand that the strategy at work here is to take advantage of many people’s laziness regarding even the simplest administrative procedures (although associating the postal service with simplicity immediately raises a few legitimate doubts).

(oh, right, okay.)

The last time I had to deal with this kind of practice was with Canal +. These people use the same technique, pushed even further: if you have a Freebox, you can activate a Canal + subscription right from the box’s interface, and presto, Canal + access is immediately up and running, wow. To cancel, though… a registered letter with acknowledgment of receipt, at least two months before the anniversary date of the subscription (you read that right). Once again, procrastinators, the paperwork-averse and the calendar-challenged of all stripes will pay the price.

Often, when I bring up these two examples, some people — including perfectly respectable folks — reply that #people just shouldn’t procrastinate, should get organized, and that in the end it rather serves them right. That it will teach them a lesson. That they shouldn’t have been so weak, so stupid, basically. Darwin for the win.

Others tell me that as long as it’s legal, they see no problem with it, or even a rather clever business model. The victims just had to pay attention and carefully read the 180-page terms and conditions, all that. Weakness as a business model: classy, isn’t it?

And that does sadden me a little. These people are basically telling me that exploiting a user’s weakness, however small, is a perfectly normal approach, especially when it can guarantee comfortable and regular income at little cost. Or even substantial savings on infrastructure costs, since those people will most likely use the services in question only a little, or not at all. Or, finally, that the means used to fund a cause one personally deems just don’t really matter.

So, you think I’m exaggerating here, right? That I’m laying it on thick, that there aren’t that many “victims” of this kind of practice, that it’s anecdotal or even a case for psychiatry?

Take 10 minutes for a fun little exercise: go through your latest bank statements and systematically flag every direct-debit expense. You might surprise yourself.

That’s what happened to me recently, at least, as part of an effort to lower my economic footprint and manage to live decently with less money, in a more reasonable and reasoned way. It’s crazy what nonsense you can subscribe to with a single click… and then forget.

But back to the point: what happens, in the case of Mediapart or Canal +, when I discover the values behind the cancellation process? Whether or not I unsubscribe anyway hardly matters; that’s perhaps the least bad of the collateral effects here… What is serious for these entities is that trust is broken. In the case of Canal +, I admit it was never very high anyway, but in Mediapart’s case, what a slap in the face. And I talk about it. We talk far more readily about the slaps we take with the people around us.

For pity’s sake, if you run a business, whatever it is, try to realize, to understand, that treating users like idiots is not a sustainable long-term growth strategy. Respect them as much as you respect yourself. And if you don’t respect yourself, leave them alone.

Update:

Hadrien de Boisset points out by email:

Under French law, the signature of a paper contract is not required for that contract to exist (for contracts under €1500). Once the contract is concluded, it binds the parties just as the law does. Cancellation, which unilaterally puts an end to a contract, must on the other hand always be requested on paper, whatever the value of the contract; it’s a legal obligation meant to protect providers against possible future lawsuits for breach of contract, by providing them with proof of the customer’s intent to terminate it.

My reaction:

I freely admit I hadn’t looked into the legal framework around commercial contracts under French law; thanks for the clarification! Still, these provisions surprise me a great deal because of the lack of symmetry between subscription and cancellation, and I can’t help seeing in them a form of institutional legitimization of making the act of commitment easier than ending said commitment. Because it seems to me just as risky for a provider to be sued over an abusive or fraudulent subscription process as over the cancellation one.

Could we see in this a way of making selling easier, or even of making the reverse process harder? I understand that growth figures are very dear to our politicians’ hearts, but still!</conspiracy>

In any case, a strict symmetry between the minimum means allowed to subscribe to or cancel a contract would, personally, seem self-evident to me. Even though I suspect that — as is often the case, I get the impression — certain abuses are what led to the current legal framework…


Functional JavaScript for crawling the Web

Permalink - Posted on 2013-12-01 00:00

I’ve been giving JavaScript & CasperJS training sessions lately, and was amazed at how few people are aware of the functional programming capabilities of JavaScript. Many couldn’t see obvious uses for them in Web development, which is a bit of a shame if you ask me.

Let’s take things like map and reduce from the Array prototype:

function square(x) {
  return x * x;
}

function sum(x, y) {
  return x + y;
}

[1, 2, 3].map(square).reduce(sum)
// 14

I’ve been hearing a few times things like:

Well yeah that’s cool, but I don’t do maths, I’m a Web developer.

And each time it makes me a little sad.

Disclaimer

As we’re programming language hipsters, in this article we’ll use the ES6 short function syntax, which landed a few weeks ago in Firefox Nightlies and makes writing code in the functional style a lot easier:

var square = x => x * x;
var sum = (x, y) => x + y;

[1, 2, 3].map(square).reduce(sum)
// 14

We’ll use other ES6 features as well because, you know, today is our future already.

This article’s contents will also probably hurt some people’s feelings, because there’s a lot to hate in there when you come from a pure OOP landscape. Please think of this article as an exercise in thought rather than yet another new JavaScript tutorial™.

Crawling the DOM using FP

Take this DOM fragment featuring a good ol’ data table as an example:

<table>
  <thead>
    <tr>
      <th>Country</th>
      <th>Population (M)</th>
      <th>GNP (B)</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Belgium</td><td>11.162</td><td>419</td></tr>
    <tr><td>France</td><td>63.820</td><td>2246</td></tr>
    <tr><td>Germany</td><td>80.640</td><td>3139</td></tr>
    <tr><td>Greece</td><td>10.758</td><td>298</td></tr>
    <tr><td>Italy</td><td>59.789</td><td>1871</td></tr>
    <tr><td>Netherlands</td><td>16.795</td><td>713</td></tr>
    <tr><td>Poland</td><td>38.548</td><td>782</td></tr>
    <tr><td>Portugal</td><td>10.609</td><td>252</td></tr>
    <tr><td>United Kingdom</td><td>64.231</td><td>2290</td></tr>
    <tr><td>Spain</td><td>46.958</td><td>1432</td></tr>
  </tbody>
</table>

To map the country names to a regular array of strings:

var rows = document.querySelectorAll("tbody tr");
[].map.call(rows, row => row.querySelector("td").textContent);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

It worked: our map operation transformed a list of DOM table row elements into the text value of their very first cell. Still, it feels like we could probably enhance the code ergonomics a bit here.

Note: if you wonder why we use [].map.call instead of just calling map on the element list, that’s because NodeList doesn’t implement the Array interface… Yeah, I know.

As an illustrative exercise, let’s write our own map function that makes any passed iterable expose the Array interface; also, let’s invert the order of the passed arguments to ease further composability (more on this later):

const map = (fn, iterable) => [].map.call(iterable, fn);

Note: we declare map as a constant to avoid any accidental mess. Also, I don’t see obvious reasons for a function to be mutated here.

So we can write:

var rows = document.querySelectorAll("tbody tr");
map(row => row.querySelector("td").textContent, rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

As a side note, this map implementation also works for strings:

map(x => x.toUpperCase(), "foo");
// ["F", "O", "O"]

We can also write a tiny abstraction on top of querySelectorAll, again to ensure further composability:

const nodes = (sel, root) => (root || document).querySelectorAll(sel);

So now we can write:

var rows = nodes("tbody tr");
map(node => nodes("td", node)[0].textContent, rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

Hmm, the operations performed within the function passed to map (finding a first child node, getting an element property value) sound like things we’re likely to do many times while extracting information from the DOM. And we’d probably want better code semantics as well.

For starters, let’s create a first() function for finding the first element out of a collection:

const first = iterable => iterable[0];
// first([1, 2, 3]) => 1

Our example becomes:

map(node => first(nodes("td", node)).textContent, rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

In the same vein, we could use a prop() higher order function — basically a function returning a function — one more time to create a reusable & composable property getter (we’ll get back to this, read on):

const prop = name => object => object[name];
// const getFoo = prop("foo");
// getFoo({foo: "bar"}) => "bar"

If you struggle to understand how this works, this is how we would write prop using the current function syntax:

function prop(name) {
  return function(object) {
    return object[name];
  };
}

Let’s use our new property getter generator:

const getText = prop("textContent");

map(node => getText(first(nodes("td", node))), rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

Now, how about a generic way of finding a node’s child elements from a selector? Let’s do this:

const finder = selector => root => nodes(selector, root);

const findCells = finder("td");
findCells(document.querySelector("table")).length
// 30

Don’t panic, again this is how we’d write it using standard function declaration syntax:

function finder(selector) {
  return function(root) {
    return nodes(selector, root);
  }
}

Let’s use it:

const getText = prop("textContent");
const findCells = finder("td");

map(node => getText(first(findCells(node))), rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

At this point, you may be wondering how this is possibly improving code readability and maintainability… Now is the perfect time to use function composition (you’ve been waiting for it) to aggregate & chain minimal bits of reusable code.

Note: If you’re familiar with the UNIX philosophy, that’s exactly the same approach as when using the pipe operator:

 $ ls -la | awk '{print $2}' | grep pattern | wc -l

Let’s create a sequence function to help composing functions sequentially:

const sequence = function() {
  return [].reduce.call(arguments, function(comp, fn) {
    return function() {
      return comp(fn.apply(null, arguments));
    };
  });
};

This one is a bit complicated; it basically takes all the functions passed as arguments and returns a new function capable of processing them sequentially, passing each one the result of the previous execution:

const squarePlus2 = sequence(x => 2 + x, x => x * x);
squarePlus2(4);
// 4 * 4 + 2 => 18 => Aspirine is in the bathroom

In classic notation, without using sequence, that would be the equivalent of:

function plus2(x) {
    return 2 + x;
}

function square(x) {
    return x * x;
}

function squarePlus2(x) {
    return plus2(square(x));
}

squarePlus2(4);
// 18

By the way, sequence is a very good place to use ES6 Rest Arguments which have also landed recently in Gecko; let’s rewrite it accordingly:

const sequence = function(...fns) {
  return fns.reduce(function(comp, fn) {
    return (...args) => comp(fn.apply(null, args));
  });
};

Let’s use it in our little DOM crawling example:

const getText = prop("textContent");
const findCells = finder("td");

map(sequence(getText, first, findCells), rows)
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

What I like the most about the FP style is that it actually describes fairly well what’s going to happen; you can almost read the code as you’d read plain English (caveat: don’t do this at family dinners).

Also, you may want to have the functions passed in the opposite order, à la UNIX pipes, which usually enhances legibility a bit for seasoned functional programmers; let’s create a compose function for doing just that:

const compose = (...fns) => sequence.apply(null, fns.reverse());

map(compose(findCells, first, getText), rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

Wait, is this really better?

As a side note, one may argue that:

map(sequence(getText, first, findCells), rows);

Is not much really better than:

map(row => getText(first(findCells(row))), rows);

Though the composed approach is probably more likely to scale when adding many more functions to the stack:

a(b(c(d(e(f(g(h(foo))))))));
sequence(a, b, c, d, e, f, g, h)(foo);

Last, a composed function is itself composable by essence, and that’s probably a killer feature:

map(sequence(getText, sequence(first, findCells)), rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]

Which something like this:

var crawler = new Crawler("table");
crawler.findCells("tbody tr").first().getText();

Is hardly likely to offer.

A few more examples

To compute the total population of listed countries:

const reduce = (fn, init, iterable) => [].reduce.call(iterable, fn, init);
const second = (iterable) => iterable[1];
const sum = (x, y) => x + y;

var populations = map(compose(findCells, second, getText, parseFloat),
                      rows);
reduce(sum, 0, populations);
// 403.31000000000006

To generate a JSON export of the whole table data:

const partial = (fn, ...r) => (...a) => fn.apply(null, r.concat(a))
const nth = n => (iterable) => iterable[n - 1];
const third = nth(3);
const getTexts = partial(map, getText);
const asObject = (data) => ({
  name:       first(data),
  population: parseFloat(second(data)),
  gnp:        parseFloat(third(data))
});

var countries = map(compose(findCells, getTexts, asObject), rows);
JSON.stringify(countries);
// "[{"name":"Belgium","population":11.162,"gnp":419}, …

To compute the global average GNP per capita for these countries:

const perCapita = c => ({name: c.name, perCapita: c.gnp / c.population});

var gnpPerCapita = map(perCapita, countries);
JSON.stringify(gnpPerCapita);
// "[{"name":"Belgium","perCapita":37.5380756136893}, …

To filter countries having more than n€ of GNP per capita, sort them by descending order and export the result as JSON:

const select = (fn, iterable) => [].filter.call(iterable, fn)
const sort = (fn, iterable) => [].sort.call(iterable, fn);

const sortDesc = partial(sort, (a, b) => a.perCapita > b.perCapita ? -1 : 1);
const healthy = partial(select, c => c.perCapita > 38);

const healthyCountries = compose(healthy, sortDesc);
JSON.stringify(healthyCountries(gnpPerCapita));
// "[{"name":"Netherlands","perCapita":42.45311104495385}, …

I could probably go on and on, but you get the picture. This post doesn’t claim that the FP approach is the best of all in JavaScript, just that it certainly has its advantages. Feel free to play with these concepts for a while to make up your own mind, eventually :)

If you’re interested in Functional JavaScript, I suggest the following resources:

  • Pure, functional JavaScript, an inspiring talk from Christian Johansen;
  • JavaScript Allongé, an online book which covers most of its aspects in a very comprehensive style (you should buy it);
  • List Out of Lambda, a blog post from Steve Losh where he reinvents lists purely using functions in JavaScript (!);
  • If you’re hooked on FP (yay!), have a look at Clojure and its port targeting the JavaScript platform, ClojureScript.

If you’re interested in ECMAScript 6, here are some good links to read about: