
Maxime Vaillancourt



Automatically labeling GitHub notification emails with Gmail filters

Posted on 2020-10-15 00:00

A fair share of my waking hours involves communicating with other people on GitHub to make sure we’re solving the right problems, and that we’re solving them the right way.

As a result, I receive many email notifications about various things that happen on there: direct requests to review a particular piece of code, feedback on pull requests I’ve opened, pull requests merged by their authors, people directly mentioning my username in a comment, issues closed by their authors, etc. I receive hundreds of emails every single week.

Now here’s the thing: some of these emails are more time-sensitive and/or actionable than others. I should probably address direct requests to review a particular piece of code first, since someone is explicitly asking for my attention, and not responding promptly would likely prevent them from shipping something swiftly. Conversely, I’ll want to delete (or at least deprioritize) email notifications communicating that a given pull request was merged by its author - there’s nothing actionable to derive from it, so in the bin it goes.

However, by default, all these email notifications arrive in my inbox with the same perceived level of importance, which makes it difficult to identify what I should address next.

This post presents a solution to this problem: using Gmail filters, we can automatically add labels to GitHub notification emails based on their content. This solution takes less than 10 minutes to implement, and the long-term return on investment is quite appreciable.

Here’s what my inbox looks like with automatic labeling set up:

Pretty neat, right? Thanks to these labels, I’m able to quickly parse through emails and reach inbox zero every day.

Let’s see how to implement this solution with Gmail.

1. Download the XML filters template

Start by saving the following XML filters template to a file on your device:

<?xml version='1.0' encoding='UTF-8'?>
<feed xmlns='http://www.w3.org/2005/Atom' xmlns:apps='http://schemas.google.com/apps/2006'>
  <title>GitHub filters</title>
  <entry>
    <category term='filter'></category>
    <content></content>
    <apps:property name='hasTheWord' value='Merged into'/>
    <apps:property name='label' value='Merged'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
  <entry>
    <category term='filter'></category>
    <content></content>
    <apps:property name='hasTheWord' value='&quot;@<YOUR_GITHUB_USERNAME_HERE>&quot;'/>
    <apps:property name='label' value='Mention'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
  <entry>
    <category term='filter'></category>
    <content></content>
    <apps:property name='hasTheWord' value='because you authored the thread'/>
    <apps:property name='label' value='Author'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
  <entry>
    <category term='filter'></category>
    <content></content>
    <apps:property name='hasTheWord' value='modified the open/close state'/>
    <apps:property name='label' value='Reopened'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
  <entry>
    <category term='filter'></category>
    <content></content>
    <apps:property name='hasTheWord' value='&quot;requested review from @<YOUR_GITHUB_TEAM_NAME_HERE>&quot;'/>
    <apps:property name='label' value='Team review request'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
  <entry>
    <category term='filter'></category>
    <content></content>
    <apps:property name='hasTheWord' value='&quot;because you are on a team that was mentioned&quot; AND &quot;You can view, comment on, or merge this pull request online at&quot;'/>
    <apps:property name='label' value='Team review request'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
  <entry>
    <category term='filter'></category>
    <content></content>
    <apps:property name='from' value='github'/>
    <apps:property name='hasTheWord' value='&quot;Closed \#&quot; &quot;You are receiving this because&quot;'/>
    <apps:property name='doesNotHaveTheWord' value='dependabot'/>
    <apps:property name='label' value='Closed'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
  <entry>
    <category term='filter'></category>
    <content></content>
    <apps:property name='hasTheWord' value='&quot;requested your review on&quot;'/>
    <apps:property name='label' value='Direct review request'/>
    <apps:property name='sizeOperator' value='s_sl'/>
    <apps:property name='sizeUnit' value='s_smb'/>
  </entry>
</feed>

2. Edit the template with your information

In the template, replace the following string:

  • @<YOUR_GITHUB_USERNAME_HERE> (in my case that would be @maximevaillancourt)

If you’re part of a GitHub team that other people mention in issues and pull requests, also replace the following string:

  • @<YOUR_GITHUB_TEAM_NAME_HERE> (for example, @my-organization/best-team-ever)

If you’re not part of a GitHub team, remove the <entry> node that looks for this pattern.

Finally, save the XML file.
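If you’d rather script the substitution than edit the file by hand, a few lines of Ruby will do it (the constants below are example values; replace them with your own):

```ruby
# Example values -- substitute your own GitHub username and team name.
USERNAME = "maximevaillancourt"
TEAM = "my-organization/best-team-ever"

# Replaces both placeholders in the XML template string.
def fill_template(xml)
  xml.gsub("<YOUR_GITHUB_USERNAME_HERE>", USERNAME)
     .gsub("<YOUR_GITHUB_TEAM_NAME_HERE>", TEAM)
end

# Usage (file name is an example):
#   File.write("github_filters.xml", fill_template(File.read("github_filters.xml")))
```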

3. Import the XML filters in Gmail

Now that your template is ready, it’s time to import it in your Gmail account.

Open up your Gmail settings, and click on the “Filters” tab. Once you’re there, find the “Import filters” link near the bottom of the pane. Select the XML file you just modified and upload it. Review the filters and confirm the import.

That’s it - you’re done. At this point, your emails should be labeled automatically. Feel free to change the colors of the labels via the “Labels” tab in your Gmail settings. I find it helpful to make direct review requests red as this means “urgent and important” to me, and the label for merged pull requests is green because it means “success, everything’s good” to me.

Experiment to find what works best for you.

If you found this helpful, consider sharing it with your coworkers and friends. I’m always available on Twitter (@vaillancourtmax) to discuss. ✌️


How Shopify reduced storefront response times with a rewrite

Posted on 2020-09-03 00:00

This post originally appeared on the Shopify Engineering blog on August 20, 2020.

In January 2019, we set out to rewrite the critical software that powers all online storefronts on Shopify’s platform to offer the fastest online shopping experience possible, entirely from scratch and without downtime.

The Storefront Renderer is a server-side application that loads a Shopify merchant's storefront Liquid theme, along with the data required to serve the request (for example product data, collection data, inventory information, and images), and returns the HTML response back to your browser. Shaving milliseconds off response time leads to big results for merchants on the platform as buyers increasingly expect pages to load quickly, and failing to deliver on performance can hinder sales, not to mention other important signals like SEO.

The previous storefront implementation, whose development started over 15 years ago when Tobi launched Snowdevil, lived within Shopify’s Ruby on Rails monolith. Over the years, we realized that the “storefront” part of Shopify is quite different from the other parts of the monolith: it has much stricter performance requirements and can accept more complexity implementation-wise to improve performance, whereas other components (such as payment processing) need to favour correctness and readability.

In addition to this difference in paradigm, storefront requests progressively became slower to compute as we saw more storefront traffic on the platform. This performance decline led to a direct impact on our merchant storefronts’ performance, where time-to-first-byte metrics from Shopify servers slowly crept up as time went on.

Here’s how the previous architecture looked:

Old Storefront Implementation

Before, the Rails monolith handled almost all kinds of traffic: checkout, admin, APIs, and storefront.

With the new implementation, traffic routing looks like this:

New Storefront Implementation

The Rails monolith still handles checkout, admin, and API traffic, but storefront traffic is handled by the new implementation.

Designing the new storefront implementation from the ground up allowed us to think about the guarantees we could provide: we took the opportunity of this evergreen project to set us up on strong primitives that can be extended in the future, which would have been much more difficult to retrofit in the legacy implementation. An example of these foundations is the decision to design the new implementation on top of an active-active replication setup. As a result, the new implementation always reads from dedicated read replicas, improving performance and reducing load on the primary writers.

Similarly, by rebuilding and extracting the storefront-related code in a dedicated application, we took the opportunity to think about building the best developer experience possible: great debugging tools, simple onboarding setup, welcoming documentation, and so on.

Finally, with improving performance as a priority, we work to increase resilience and capacity in high load scenarios (think flash sales: events where a large number of buyers suddenly start shopping on a specific online storefront), and invest in the future of storefront development at Shopify. The end result is a fast, resilient, single-purpose application that serves high-throughput online storefront traffic for merchants on the Shopify platform as quickly as possible.

Defining our success criteria

Once we had clearly outlined the problem we were trying to solve and scoped out the project, we defined three main success criteria:

  • Establishing feature parity: for a given input, both implementations generate the same output.
  • Improving performance: the new implementation runs on active-active replication setup and minimizes server response times.
  • Improving resilience and capacity: in high-load scenarios, the new implementation generally sustains traffic without causing errors.

Building a verifier mechanism

Before building the new implementation, we needed a way to make sure that whatever we built would behave the same way as the existing implementation. So, we built a verifier mechanism that compares the output of both implementations and returns a positive or negative result depending on the outcome of the comparison.

This verification mechanism runs on storefront traffic in production, and it keeps track of verification results so we can identify differences in output that need fixing. Running the verifier mechanism on production traffic (in addition to comparing the implementations locally through a formal specification and a test suite) lets us identify the most impactful areas to work on when fixing issues, and keeps us focused on the prize: reaching feature parity as quickly as possible. This incremental approach is desirable for multiple reasons:

  • giving us an idea of progress and spreading the risk over a large amount of time
  • shortening the period of time that developers at Shopify work with two concurrent implementations at once
  • providing value to Shopify merchants as soon as possible.

There are two parts to the entire verifier mechanism implementation:

  1. A verifier service (implemented in Ruby) compares the two responses we provide and returns a positive or negative result depending on the verification outcome. Similar to a `diff` tool, it lets us identify differences between the new and legacy implementations.
  2. A custom nginx routing module (implemented in Lua on top of OpenResty) sends a sample of production traffic to the verifier service for verification. This module acts as a router depending on the result of the verifications for subsequent requests.

The following diagram shows how each part interacts with the rest of the architecture:

Legacy implementation and new implementation at the same conceptual layer

The legacy implementation (the Rails monolith) still exists, and the new implementation (including the Verifier service) is introduced at the same conceptual layer. Both implementations are placed behind a custom routing module that decides where to route traffic based on the request attributes and the verification data for this request type. Let’s look at an example.

When a buyer’s device sends an initial request for a given storefront page (for example, a product page from shop XYZ), the request is sent to Shopify’s infrastructure, at which point an nginx instance handles it. The routing module considers the request attributes to determine if other shop XYZ product page requests have previously passed verification.

First request routed to Legacy implementation

Since this is the first request of this kind in our example, the routing module sends the request to the legacy implementation to get a baseline reference that it will use for subsequent shop XYZ product page requests.

Routing module sends original request and legacy implementation’s response to the new implementation

Once the response comes back from the legacy implementation, the Lua routing module sends that response to the buyer. In the background, the Lua routing module also sends both the original request and the legacy implementation’s response to the new implementation. The new implementation computes a response to the original request and feeds both its response and the forwarded legacy implementation’s response to the verifier service. This is done asynchronously to make sure we’re not adding latency to responses we send to buyers, who don’t notice anything different.

At this point, the verifier service has received the responses from both the legacy and new implementations and is ready to compare them. Of course, the legacy implementation is assumed to be correct, as it’s been running in production for years (it acts as our reference point). We keep track of differences between the two implementations’ responses so we can debug and fix them later. The verifier service looks at both responses’ status code, headers, and body, ensuring they’re equivalent. This lets us identify any differences in the responses so we can make sure our new implementation behaves like the legacy one.

Time-related and randomness-related exceptions make it impossible to have exactly byte-equal responses, so we ignore certain patterns in the verifier service to relax the equivalence criteria. The verifier service uses a fixed time value during the comparison process and sets any random values to a known value so we reliably compare the outputs containing time-based and randomness-based differences.
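As a rough sketch of this idea (not Shopify’s actual verifier code; the hash shape and the timestamp pattern are assumptions for illustration), the comparison might look like this:

```ruby
# Pattern for ISO-8601-style timestamps; an illustrative stand-in for the
# time-based noise the real verifier normalizes away.
TIMESTAMP = /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/

# Replace volatile, time-based content with a fixed token before comparing.
def normalize(body)
  body.gsub(TIMESTAMP, "<timestamp>")
end

# Two responses are equivalent when status, headers, and normalized bodies match.
def equivalent?(legacy, candidate)
  legacy[:status] == candidate[:status] &&
    legacy[:headers] == candidate[:headers] &&
    normalize(legacy[:body]) == normalize(candidate[:body])
end
```

The key design point is that equivalence is relaxed deliberately: byte-equality would flag every timestamp and random token as a failure.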

The verifier service sends the comparison result back to the Lua module

The verifier service sends the outcome of the comparison back to the Lua module, which keeps track of that comparison outcome for subsequent requests of the same kind.

Dynamically routing requests to the new implementation

Once we had verified our new approach, we tested rendering a page using the new implementation instead of the legacy one. We iterated upon our verification mechanism to allow us to route traffic to the new implementation after a given number of successful verifications. Here’s how it works.

Just like when we only verified traffic, a request arrives from a client device and hits Shopify’s architecture. The request is sent to both implementations, and both outputs are forwarded to the verifier service for comparison. The comparison result is sent back to the Lua routing module, which keeps track of it for future requests.

When a subsequent storefront request arrives from a buyer and reaches the Lua routing module, the module decides where to send it based on the previous verification results for requests similar to the current one (based on the request attributes).

For subsequent storefront requests, the Lua routing module decides where to send it

If the request was verified multiple times in the past, and nearly all outcomes from the verifier service were “Pass”, then we consider the request safe to be served by the new implementation.

If nearly all verifier service results are “Pass”, then it uses the new implementation

If, on the other hand, some verifications failed for this kind of request, we’ll play it safe and send the request to the legacy implementation.

If most verifier service results are “Fail”, then it uses the old implementation
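The routing decision described above can be sketched as follows (the real module is written in Lua inside nginx; the thresholds and names here are made-up illustrations):

```ruby
# Illustrative thresholds only -- the real values are tuned in production.
PASS_RATE_THRESHOLD = 0.99
MIN_VERIFICATIONS = 100

# Given pass/fail counts for a request type, pick an implementation.
# Too few samples, or too many failures: play it safe with the legacy path.
def route_for(stats)
  total = stats[:passes] + stats[:fails]
  return :legacy if total < MIN_VERIFICATIONS
  stats[:passes].fdiv(total) >= PASS_RATE_THRESHOLD ? :new_implementation : :legacy
end
```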

Successfully Rendering In Production

With the verifier mechanism and the dynamic router in place, our first goal was to render one of the simplest storefront pages that exists on the Shopify platform: the password page that protects a storefront before the merchant makes it available to the public.

Once we reached full parity for a single shop’s password page, we tested our implementation in production (for the first time) by routing traffic for this password page to the new implementation for a couple of minutes to test it out.

Success! The new implementation worked in production. It was time to start implementing everything else.

Increasing feature parity

After our success with the password page, we tackled the most frequently accessed storefront pages on the platform (product pages, collection pages, etc). Diff by diff, endpoint by endpoint, we slowly increased the parity rate between the legacy and new implementations.

Having both implementations running at the same time gave us a safety net: if we introduced a regression, requests would simply be routed to the legacy implementation instead. Conversely, whenever we shipped a change to the new implementation that fixed a gap in feature parity, the verifier service would start to report verification successes, and our custom routing module in nginx would automatically start sending traffic to the new implementation after a predetermined threshold was reached.

Defining “good” performance with Apdex scores

We collected Apdex (Application Performance Index) scores on server-side processing time for both the new and legacy implementations to compare them.

To calculate Apdex scores, we defined a parameter for a satisfactory threshold response time (this is the Apdex’s “T” parameter). Our threshold response time to define a frustrating experience would then be “above 4T” (defined by Apdex).

We defined our “T” parameter as 200ms, which lines up with Google’s PageSpeed Insights recommendation for server response times. We consider server processing time below 200ms as satisfying and a server processing time of 800ms or more as frustrating. Anything in between is tolerated.

From there, calculating the Apdex score for a given implementation consists of setting a time frame, and counting three values:

  • N, the total number of responses in the defined time frame
  • S, the number of satisfying responses (faster than 200ms) in the time frame
  • T, the number of tolerated responses (between 200ms and 800ms) in the time frame

Then, we calculate the Apdex score:

$$\frac{S\ +\ T/2}{N}$$

By calculating Apdex scores for both the legacy and new implementations using the same T parameter, we had common ground to compare their performance.
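The calculation above fits in a few lines of Ruby, with the thresholds hard-coded to the T = 200ms used in this post:

```ruby
SATISFIED_MS = 200   # below this: satisfying
FRUSTRATED_MS = 800  # 4 * T; at or above this: frustrating

# Apdex = (S + T/2) / N over a set of response times in milliseconds.
def apdex(response_times_ms)
  n = response_times_ms.size.to_f
  s = response_times_ms.count { |ms| ms < SATISFIED_MS }
  t = response_times_ms.count { |ms| ms >= SATISFIED_MS && ms < FRUSTRATED_MS }
  (s + t / 2.0) / n
end
```

For example, with responses of 100ms, 150ms, 300ms, and 900ms, S = 2, T = 1, N = 4, giving an Apdex score of 0.625.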

Methods to improve server-side storefront performance

We want all Shopify storefronts to be fast, and this new implementation aims to speed up what a performance-conscious theme developer can’t by optimizing data access patterns, reducing memory allocations, and implementing efficient caching layers.

Optimizing data access patterns

The new implementation uses optimized, handcrafted SQL multi-select statements maximizing the amount of data transferred in a single round trip. We carefully vet what we eager-load depending on the type of request and we optimize towards reducing instances of N+1 queries.
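To see why this matters, here’s a toy illustration (the stub database and the queries are made up) counting round trips for an N+1 pattern versus a single batched multi-select:

```ruby
# A stub "database" that only counts how many round trips it receives.
class CountingDb
  attr_reader :round_trips

  def initialize
    @round_trips = 0
  end

  def query(_sql, *_binds)
    @round_trips += 1
    []
  end
end

product_ids = [1, 2, 3]

# N+1 pattern: one query for the products, then one more per product.
naive = CountingDb.new
naive.query("SELECT * FROM products WHERE id IN (?)", product_ids)
product_ids.each { |id| naive.query("SELECT * FROM images WHERE product_id = ?", id) }

# Batched pattern: a single multi-select fetches everything in one round trip.
batched = CountingDb.new
batched.query("SELECT ... FROM products LEFT JOIN images ON ...", product_ids)
```

With only three products the naive version already makes four round trips to the batched version’s one, and the gap grows linearly with the number of records.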

Reducing memory allocations

We reduce the number of memory allocations as much as possible so Ruby spends less time in garbage collection. We use methods that apply modifications in place (such as #map!) rather than those that allocate more memory space (like #map). This kind of performance-oriented Ruby paradigm sometimes leads to code that’s not as simple as idiomatic Ruby, but paired with proper testing and verification, this tradeoff provides big performance gains. It may not seem like much, but those memory allocations add up quickly, and considering the amount of storefront traffic Shopify handles, every optimization counts.
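A tiny example of the tradeoff described above: #map allocates a brand new array on every call, while #map! rewrites the receiver in place (example data is made up):

```ruby
titles = ["Widget", "Gadget"]
original_object = titles.object_id

copy = titles.map(&:upcase)   # allocates a new array
titles.map!(&:upcase)         # mutates the receiver's own storage

# `titles` is still the same object; `copy` is a separate allocation.
```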

Implementing efficient caching layers

We implemented various layers of caching throughout the application to reduce expensive calls. Frequent database queries are partitioned and cached to optimize for subsequent reads in a key-value store, and in the case of extremely frequent queries, those are cached directly in application memory to reduce I/O latency. Finally, the results of full page renders are cached too, so we can simply serve a full HTTP response directly from cache if possible.
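A hedged sketch of such a layered read-through lookup (not Shopify’s actual code; the class and the store interface are illustrative):

```ruby
# Check in-process memory first, then a shared key-value store (think
# memcached or Redis), then fall back to computing the value.
class LayeredCache
  def initialize(kv_store)
    @memory = {}       # per-process cache, fastest layer
    @kv = kv_store     # shared store; here, any Hash-like object works
  end

  def fetch(key)
    return @memory[key] if @memory.key?(key)

    value = @kv[key]
    if value.nil?
      value = yield    # expensive computation (e.g. a full page render)
      @kv[key] = value
    end
    @memory[key] = value
  end
end
```

A second `fetch` for the same key never re-runs the expensive block, and warms the in-memory layer from the shared store.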

Measuring performance improvement successes

Once we could measure the performance of both implementations and reach a high enough level of verified feature parity, we started migrating merchant shops. Here are some of the improvements we’re seeing with our new implementation:

  • Across all shops, average server response times for requests served by the new implementation are 4x to 6x faster than the legacy implementation. This is huge!
  • When migrating a storefront to the new implementation, we see that the Apdex score for server-side processing time improves by +0.11 on average.
  • When only considering cache misses (requests that can’t be served directly from the cache and need to be computed from scratch), the new implementation increases the Apdex score for server-side processing time by a full +0.20 on average compared to the previous implementation.
  • We heard back from merchants mentioning a 500ms improvement in time-to-first-byte metrics when the new implementation was rolled out to their storefront.

So another success! We improved store performance in production.

Now how do we make sure this translates to our third success criterion?

Improving resilience and capacity

While working on the new implementation, the Verifier service identified potential parity gaps, which helped tremendously. However, a few times we shipped code to production that broke in exceedingly rare edge cases that it couldn’t catch.

As a safety mechanism, we made it so that whenever the new implementation would fail to successfully render a given request, we’d fall back to the legacy implementation. The response would be slower, but at least it was working properly. We used circuit breakers in our custom nginx routing module so that we’d open the circuit and start sending traffic to the legacy implementation if the new implementation was having trouble responding successfully. Read more on tuning circuit breakers in this blog post by my teammate Damian Polan.

Increase capacity in high-load scenarios

To ensure that the new implementation responds well to flash sales, we implemented and tweaked two mechanisms. The first is an automatic scaling mechanism that adds or removes computing capacity in response to the amount of load on the current swarm of computers serving traffic. If load increases as a result of a spike in traffic, the autoscaler detects the increase and starts provisioning more compute capacity to handle it.

Additionally, we introduced an in-memory cache to reduce load on external data stores for storefronts that put a lot of pressure on the platform’s resources. This provides a buffer that reduces load on very-high traffic shops.

Failing fast

When an external data store isn’t available, we don’t want to serve buyers an error page. If possible, we’ll try to gracefully fall back to a safe way to serve the request. It may not be as fast, or as complete as a normal, healthy response, but it’s definitely better than serving a sad error page.

We implemented circuit breakers on external datastores using Semian, a Shopify-developed Ruby gem that controls access to slow or unresponsive external services, avoiding cascading failures and making the new implementation more resilient to failure.

Similarly, if a cache store isn’t available, we’ll quickly consider the timeout as a cache miss, so instead of failing the entire request because the cache store wasn’t available, we’ll simply fetch the data from the canonical data store instead. It may take longer, but at least there’s a successful response to serve back to the buyer.
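That fallback logic can be sketched like this (the CacheTimeout class and method names are assumptions, standing in for whatever error the real cache client raises):

```ruby
# Hypothetical error raised by a cache client when the store times out.
class CacheTimeout < StandardError; end

# Read from the cache; on timeout, behave exactly as if the key were missing
# and fall back to the canonical data store (the block).
def fetch_with_fallback(cache, key)
  cached = begin
    cache.read(key)
  rescue CacheTimeout
    nil  # a timed-out cache store is treated the same as a miss
  end
  cached || yield
end
```

The request is slower on the fallback path, but it still succeeds, which is the whole point of failing fast here.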

Testing failure scenarios and the limits of the new implementation

Finally, as a way to identify potential resilience issues, the new implementation uses Toxiproxy to generate test cases where various resources are made available or not, on demand, to generate problematic scenarios.

As we put these resilience and capacity mechanisms in place, we regularly ran load tests using internal tooling to see how the new implementation behaves in the face of a large amount of traffic. As time went on, we increased the new implementation’s resilience and capacity significantly, removing errors and exceptions almost completely even in high-load scenarios. With BFCM 2020 coming soon (which we consider as an organic, large-scale load test), we’re excited to see how the new implementation behaves.

Where we’re at currently

We’re currently in the process of rolling out the new implementation to all online storefronts on the platform. This process happens automatically, without the need for any intervention from Shopify merchants. While we do this, we’re adding more features to the new implementation to bring it to full parity with the legacy implementation. The new implementation is currently at 90%+ feature parity with the legacy one, and we’re increasing that figure every day with the goal of reaching 100% parity to retire the legacy implementation.

As we roll out the new implementation to storefronts we are continuing to see and measure performance improvements as well. On average, server response times for the new implementation are 4x faster than the legacy implementation. Rhone Apparel, a Shopify Plus merchant, started using the new implementation in April 2020 and saw dramatic improvements in server-side performance over the previous month.

We learned a lot during the process of rewriting this critical piece of software. The strong foundations of this new implementation make it possible to deploy it around the world, closer to buyers everywhere, to reduce network latency involved in cross-continental networking, and we continue to explore ways to make it even faster while providing the best developer experience possible to set us up for the future.


Two Ruby apps, same code, different output

Posted on 2020-08-13 00:00

I noticed something odd today while working on two different Ruby codebases. This simple line of Ruby behaved differently in both applications:

"luck".casecmp("L`Auguste")

Executing "luck".casecmp("L`Auguste") in application A returned -1, while executing it in application B returned 1.

“Did the alphabet change at some point and I didn’t get the memo?”, I thought.

Aside

String#casecmp is a built-in Ruby method that returns -1, 0, or 1 depending on whether the object on which it’s called is less than, equal to, or greater than the method’s argument, comparing in a case-insensitive fashion (it returns nil if the two strings can’t be compared). Here are a few simple examples of how it behaves:

"aBcDeF".casecmp("abcde")     #=> 1
"aBcDeF".casecmp("abcdef")    #=> 0
"aBcDeF".casecmp("abcdefg")   #=> -1
"abcdef".casecmp("ABCDEF")    #=> 0

Looking for monkey patches

Seeing as one of the applications is built on top of Ruby on Rails and the other isn’t, my first thought was that maybe there was a Rails and/or ActiveSupport patch on String#casecmp that would change the behavior of this line in one of the applications. However, I didn’t find anything that pointed to this. I kept digging, hoping to maybe find a patch in the other application that could explain this difference in behavior. Again, I didn’t find anything. 🙈

Different Rubies

Eventually, after exploring a bit more, I realized that both applications ran on different versions of Ruby: application A was on Ruby 2.6, while application B was using Ruby 2.7.

Running the same command on both versions of Ruby indeed gives us different results:

$ ~/.rubies/ruby-2.6.6/bin/ruby -e 'puts "luck".casecmp("L`Auguste")'
-1

$ ~/.rubies/ruby-2.7.0/bin/ruby -e 'puts "luck".casecmp("L`Auguste")'
1

Ah ha! We’re getting closer. While I could have called it a day here and simply updated application A to Ruby 2.7 to resolve the discrepancy, I wanted to understand: what causes it?

Changelogs & binding.pry

I then started to comb through Ruby changelogs, trying to find if anything changed between Ruby 2.6 and Ruby 2.7 for String#casecmp, or anything somehow related to string comparison. I didn’t find anything.

Of course, it would be nice to debug this using binding.pry or other similar Ruby-level debugging tools by stepping into the String#casecmp call to see what’s going on inside. However, this doesn’t get us very far, as trying to use Ruby’s Tracer or binding.pry doesn’t really help.

Running this:

$ ruby -r tracer -e '"luck".casecmp("L`Auguste")'

… returns this output:

#0:-e:1::-: "luck".casecmp("L`Auguste")

… and not much else. That’s because String#casecmp is implemented in C, directly inside MRI’s string.c, so there’s no actual Ruby code underneath String#casecmp that we can step into using Ruby-level debugging tools.

Here comes the GDB part: because we’re essentially dealing with C code at this point, we can use GDB to understand what happens inside the call to String#casecmp. So with that, I fired up GDB for the first time in years (I typically work with Ruby, so GDB is not something I commonly use).

Identifying the root cause using GDB

Let’s see how to use GDB to understand why both Ruby 2.6 and Ruby 2.7 behave differently with the same input to String#casecmp.

I first prepared a simple Ruby file containing the source that replicates the issue:

# ~/casecmp.rb
puts "luck".casecmp("L`Auguste")

Notice that the second character in the input to casecmp is a backtick (`), which has ASCII code 96. This is relevant for paragraphs below.
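Since code 96 falls between the uppercase range (65 to 90) and the lowercase range (97 to 122) in ASCII, the direction of the case conversion determines which side of the backtick a letter lands on. A quick check of the codes involved when comparing the second characters of "luck" and "L`Auguste":

```ruby
backtick = "`".ord  # 96
upper_u  = "U".ord  # 85:  85 < 96, so comparing in uppercase yields -1
lower_u  = "u".ord  # 117: 117 > 96, so comparing in lowercase yields 1
```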

In Ruby 2.7.0

Let’s start by firing up GDB with a self-compiled version of Ruby 2.7.0:

$ sudo gdb /Users/maximevaillancourt/.rubies/ruby-2.7.0/bin/ruby
Reading symbols from /Users/maximevaillancourt/.rubies/ruby-2.7.0/bin/ruby...

Then, we add a breakpoint on the str_casecmp function so execution pauses once we reach it:

(gdb) break str_casecmp
Breakpoint 1 at 0x1001fa766: file string.c, line 3371.

Perfect. We’re now ready to run the casecmp.rb Ruby script from above.

(gdb) run casecmp.rb
Starting program: /Users/maximevaillancourt/.rubies/ruby-2.7.0/bin/ruby casecmp.rb

We eventually hit the breakpoint we just set:

Thread 2 hit Breakpoint 1, str_casecmp (str1=4329352680, str2=4329352640) at string.c:3371
3371	    enc = rb_enc_compatible(str1, str2);

Aside

Internally, str_casecmp is quite simple: it iterates over the characters of both inputs by index, starting from the first character, converting each pair of characters to the same case so that the function behaves case-insensitively, and returning early as soon as the two characters at the current index differ. In doing so, it determines which character is “bigger” than the other using its character code (an ASCII code table is a useful asset to have nearby for the rest of this blog post).

In Ruby 2.7.0, notice that the case conversion converts both inputs to lowercase using TOLOWER:

while (p1 < p1end && p2 < p2end) {
  if (*p1 != *p2) {
    unsigned int c1 = TOLOWER(*p1 & 0xff);
    unsigned int c2 = TOLOWER(*p2 & 0xff);
    if (c1 != c2)
      return INT2FIX(c1 < c2 ? -1 : 1);
  }
  p1++;
  p2++;
}
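For readers who’d rather follow along in Ruby, the 2.7 loop above translates roughly to this sketch (`casecmp_sketch` is a hypothetical name made up for this post; MRI’s real implementation is the C code in string.c):

```ruby
# Rough Ruby equivalent of the Ruby 2.7 str_casecmp loop above.
# TOLOWER(*p & 0xff) becomes byte.chr.downcase.ord.
def casecmp_sketch(a, b)
  bytes1, bytes2 = a.bytes, b.bytes
  [bytes1.length, bytes2.length].min.times do |i|
    c1 = bytes1[i].chr.downcase.ord
    c2 = bytes2[i].chr.downcase.ord
    return c1 < c2 ? -1 : 1 if c1 != c2
  end
  bytes1.length <=> bytes2.length  # on a tie, the shorter string sorts first
end

casecmp_sketch("luck", "L`Auguste")  # => 1 (117 vs 96 on the second character)
```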

After navigating in str_casecmp using next a few times, we enter the loop and arrive at a point where we can print c1 and c2, which are the codes for the characters at the current index for both inputs:

3382	                unsigned int c2 = TOLOWER(*p2 & 0xff);
(gdb) print c1
$11 = 108
(gdb) next
3383	                if (c1 != c2)
(gdb) print c2
$12 = 108

Here’s a visual representation of the buffers:

c1
 ↓
108  ?   ?   ?
 l   u   c   k

108  ?   ?   ?   ?   ?   ?   ?   ?
 l   `   a   u   g   u   s   t   e
 ↑
c2

108 is the decimal ASCII character code representation for the first letter of both inputs: l (lowercase “L”), so the loop continues to the next iteration because c1 and c2 are the same.

On the second iteration of the loop (on the second character of both inputs), we get the following results:

3382	                unsigned int c2 = TOLOWER(*p2 & 0xff);
(gdb) print c1
$14 = 117
(gdb) next
3383	                if (c1 != c2)
(gdb) print c2
$16 = 96

Here’s a visual representation of the buffers:

    c1
     ↓
108 117  ?   ?
 l   u   c   k

108  96  ?   ?   ?   ?   ?   ?   ?
 l   `   a   u   g   u   s   t   e
     ↑
     c2

c1 contains 117, which is the decimal ASCII character code representation for u, while 96 (in c2) is the character code for a backtick (`). We then enter the if (c1 != c2) conditional, and the return value is 1 because c1 > c2 (117 > 96).

Okay. So far so good. This lines up with the initial observation of the issue. How are things different in Ruby 2.6.6?

In Ruby 2.6.6

We do almost the same setup as above (same one-line Ruby script to replicate the issue, same breakpoint on str_casecmp), but we fire up GDB with Ruby 2.6.6:

$ sudo gdb /Users/maximevaillancourt/.rubies/ruby-2.6.6/bin/ruby
Reading symbols from /Users/maximevaillancourt/.rubies/ruby-2.6.6/bin/ruby...

(gdb) break str_casecmp
...

(gdb) run casecmp.rb
Starting program: /Users/maximevaillancourt/.rubies/ruby-2.6.6/bin/ruby casecmp.rb

Thread 2 hit Breakpoint 1, str_casecmp ...

Let’s look at the loop we presented above in Ruby 2.7.0, but in Ruby 2.6.6 this time:

while (p1 < p1end && p2 < p2end) {
  if (*p1 != *p2) {
    unsigned int c1 = TOUPPER(*p1 & 0xff);
    unsigned int c2 = TOUPPER(*p2 & 0xff);
    if (c1 != c2)
      return INT2FIX(c1 < c2 ? -1 : 1);
  }
  p1++;
  p2++;
}

Notice that instead of using TOLOWER as in Ruby 2.7.0, Ruby 2.6.6 uses TOUPPER. Interesting.

Let’s fast-forward to the part where we get to c1 and c2 for the second character in the input:

3414			unsigned int c2 = TOUPPER(*p2 & 0xff);
(gdb) next
3415	                if (c1 != c2)
(gdb) print c1
$5 = 85
(gdb) print c2
$6 = 96

Here’s a visual representation of the buffers:

     c1
     ↓
108  85  ?   ?
 L   U   C   K

108  96  ?   ?   ?   ?   ?   ?   ?
 L   `   A   U   G   U   S   T   E
     ↑
     c2

c1 is 85, which is the character code for U, and c2 is 96 (just like in Ruby 2.7.0), which is the character code for a backtick (`).

This time though, the comparison result is different, because c1 < c2 (85 < 96), so str_casecmp returns -1.

There it is: because Ruby 2.6 uses TOUPPER and Ruby 2.7 uses TOLOWER before comparing the inputs, and because one of the characters to compare is a backtick (`, which can’t be converted to uppercase or lowercase in any way), the other character’s code “moves” differently around the “fixed” backtick character code, affecting the result of the String#casecmp function.


To summarize, the root cause of the issue is that String#casecmp was updated in Ruby 2.7 to lowercase the two inputs before comparing them, while Ruby 2.6 used to uppercase the two inputs before comparing them. This is the commit where this change was introduced.
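The flip is easy to reproduce in plain Ruby, since the backtick’s code (96) sits between the uppercase letters (65-90) and the lowercase letters (97-122):

```ruby
# Ruby 2.6 compared uppercased bytes; Ruby 2.7 compares lowercased bytes.
'u'.upcase.ord <=> '`'.upcase.ord      # => -1 (85 vs 96: the Ruby 2.6 result)
'u'.downcase.ord <=> '`'.downcase.ord  # => 1  (117 vs 96: the Ruby 2.7 result)
```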

Fun debugging session. :)

Found a typo? Think I could clarify something? Reach out on Twitter (@vaillancourtmax).


Three hours of Swift

Permalink - Posted on 2020-08-09 00:00

I took three hours this morning to build the foundation of the iOS app for Freshreader. This was my first time writing Swift and seriously using Xcode (compared to opening it by mistake when trying to launch Visual Studio Code through Spotlight).

To document my learning journey, I thought I’d start taking more notes and share them in public so everyone can benefit from them. Without further ado, here’s what I learned in those three hours of discovering Swift:

SwiftUI is super intuitive

I would go as far as writing that SwiftUI is delightful to use. Building layouts is easy. Tweaking text styling is obvious. Once you understand the building blocks, everything makes sense.

String interpolation syntax

The string interpolation syntax in Swift is "\(variable)". This is the first time I’ve seen the backslash used as the starting token for string interpolation.

Function parameters can have two names

Function parameters can have up to two names: an internal one and an external one. This means that you can use one name to refer to the argument in the function implementation, but callers of that function will supply the argument using the external name. An example:

func increment(by amount: Int) {
    count += amount
}

Notice that in the function body, we use the amount name. However, when calling the function, we’ll use the by name:

increment(by: 5)

Both by and amount refer to the same value here (5), but it’s referred to by different names depending on whether we’re in the context of the function implementation, or outside the function as a caller.

Using two names is not mandatory, but it’s the first time I saw this concept of internal and external names for a single function parameter. Neat!


After just 3 hours of stumbling around and feeling my way through Swift, the iOS app for Freshreader is coming along pretty well already, which is encouraging. So far I’ve only implemented the reading list (which actually talks to the API, so this is real data from a production account):

image

There’s still a lot to do though:

  • Add a login screen (to save the account number)
  • Implement a “Slide to mark item as read” gesture
  • Handle errors
  • Add test coverage for ApiService and general app behavior

I livestreamed these three hours on Twitch this morning, and intend to do so every weekend if possible, while continuing to take notes as I go to document my learnings.

Learning in public is fun!

(normalize failing in public!)


Accelerating software onboarding with code walkthroughs

Permalink - Posted on 2020-07-28 00:00

By the end of this article, you’ll have a new tool in your toolbox to help newcomers quickly discover how your software project works, while reducing the time you explicitly spend introducing new folks to your project.


Ramping up on a new software project is hard, both for the person ramping up and for the team that supports that person’s onboarding. It’s even worse when multiple people start onboarding onto the same project at the same time, which happened to my team recently: our project’s area of influence has grown quickly, and as a result, we’re seeing dozens of developers starting to contribute to the project.

This sudden spike in interest led to what I call “onboarding load” for our core team, as we suddenly faced pressure to help newcomers learn and navigate the codebase while simultaneously trying to keep up with our day-to-day tasks.

Deflecting pressure

To reduce this pressure on our core team, I recently introduced a “code walkthrough” in the project’s codebase. It’s essentially a collection of high-quality, high-context code comments “tagged” with a [docs/walkthrough] line comment. When newcomers come our way and want to learn more about our project, I point them to a GitHub search for that special identifier in the project’s codebase to pull up all those great code comments to learn about the project. (They could also clone the code locally and search for that identifier using their code editor, but a GitHub search is available to everyone.)

Here are some examples of such high-context comments and the different purposes they serve:

Navigation and flow

# [docs/walkthrough]
# This module is responsible for generating the content to insert in the <head>
# HTML tag across all pages.
# [docs/walkthrough]
# This class is the entrypoint to all output generation. From here, content
# flows down into controllers, then into serializers and formatters.

Performance

# [docs/walkthrough]
# This is an example of an implementation that doesn't feel like it's the
# right way to do it, but after looking at profiles and traces, we know
# that this implementation performs better than a SQL-only solution.
#
# There's currently no good index on the `xyz` table to quickly search
# for nested values, so it ends up being faster to filter through all
# values in Ruby than it is to filter those out using a `WHERE`
# clause at the database level. We're looking into remodeling this data.

Trivia

# [docs/walkthrough]
# Even though this class usually handles objects of type X, this one
# is an exception because it can be overridden by people manually creating
# resources of the same name.

Resiliency

# [docs/walkthrough]
# Historically, we kept track of the resource count by incrementing a value in a
# key-value store whenever a resource was created. However, because of scaling
# constraints, we started tracking this value using a proper data ETL process.
# We now simply read the count from that data store and cache it aggressively.

Implementation

# [docs/walkthrough]
# By default, this class (including its subclasses) doesn't expose any method
# to the outside world. To do so, methods must be marked with the "world_public" 
# identifier. This is a safe-by-default way to avoid allocating wrapper objects,
# while still allowing public methods to be created for internal use.

The benefit of this approach is that the documentation is built into the code, so the risk that it becomes stale or outdated is much lower than if that documentation lived outside of the code itself.

One thing you can play around with is moving up and down levels of abstraction throughout the comments. Some of them may explain why a specific function is implemented this way, while others may explain the business domain history behind this legacy module.

Sounds great! How do I get started?

As a starting point, identify existing high-quality comments in your project, and tag them with the special identifier of your choice. Then, look for areas in the codebase that would benefit from having a little bit more context and documentation, and take some time to write clear, helpful comments that you can add that special tag to. Finally, once you have a generous collection of high-quality comments with that special identifier, start pointing newcomers to search for this identifier, and let them soak up all that context.
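If your team prefers a local tool over a GitHub search, pulling those tagged comment blocks out of a file can be sketched in a few lines of Ruby (illustrative only: the `walkthrough_blocks` helper is made up for this post, and a plain grep works just as well):

```ruby
# Minimal sketch: extract comment blocks tagged with [docs/walkthrough].
# Given a file's contents, returns each tagged block as an array of lines.
def walkthrough_blocks(source)
  blocks = []
  current = nil
  source.each_line do |line|
    stripped = line.strip
    if stripped.include?("[docs/walkthrough]")
      current = []          # a tag starts a new block
      blocks << current
    elsif current && stripped.start_with?("#")
      current << stripped.sub(/\A#\s?/, "")  # keep comment text, drop the marker
    else
      current = nil         # a non-comment line ends the block
    end
  end
  blocks
end

sample = <<~RUBY
  # [docs/walkthrough]
  # This class is the entrypoint to all output generation.
  class Generator
  end
RUBY

walkthrough_blocks(sample)
# => [["This class is the entrypoint to all output generation."]]
```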

As the documentation shepherd on your team, always be on the lookout for opportunities to write more of those high-context code comments: you’ll be rewarded for it in the future, either by your colleagues thanking you for writing the comments that helped them get up to speed quickly, or by your boss for reducing the “onboarding load” I mentioned earlier on the rest of the team, who were instead able to focus on the task at hand.

Great software documentation is like a flywheel. It’s hard to write at first, and the return on investment is not immediately visible. But with time, it compounds, and eventually, you’ll wonder how you managed to live without it.

What are some of the ways you’re helping people onboard onto your software development team? Reply on Twitter.


Just Write, part 2

Permalink - Posted on 2020-06-29 00:00

6 years ago, I wrote a piece called “Just Write”, in which a younger version of myself explains how grabbing pen & paper and writing down the stream of thoughts going around in our minds helps clarify these thoughts into connected, organized concepts. Here’s an excerpt:

Grab a pen. Write down everything that’s on your mind.

You don’t really know how to start it off. “Just start”, as the saying goes. Write down the first thing that comes to your mind. Anything, really. Start with the one thing that stresses you out the most. Explain it in detail.

After a dozen lines of disorganized bits of thoughts, something happens: your synapses connect and information rushes through your brain. You understand your thoughts, you spot patterns.

A few more lines later, your mind feels clearer. Everything shapes up. You feel at peace. You keep writing.

Evergreen content. This is still valid, and will likely always be.

Simply putting the mind on the task of writing requires it to organize itself to better convey what it wants to share, thereby helping it silence the noise.

Since publishing this post 6 years ago, I developed the habit of writing things down when I start to feel overwhelmed. This is one of the best habits I put in place since then, as it lets me identify what I need to focus on the most. Combined with a bit of stoic philosophy, writing lets me clear my mind and redirect the noise towards action.


Whenever I feel overwhelmed, my first reflex is to grab a pen and paper (quick! gotta write this down otherwise I won’t remember it all). If I don’t have pen and paper nearby, I’ll go for the closest note-taking device around, which is usually my smartphone with the excellent Standard Notes app. Once I’m ready to write, I listen to the thoughts in my mind, pick the one that screams the loudest, and write down what I hear. I don’t go into much detail about it—I just write until the thought is tired of screaming and eventually silences itself. Then I move on to the next thought in line.

I write every item down one by one, just to get it out of my brain. I don’t even write full sentences, mostly keywords and fragments. The goal of the exercise is just to write down whatever comes to mind to free it from there and stop wasting precious energy.

Eventually, there won’t be any more thoughts waiting in line to scream anything. There won’t be anything else to write down.

At this point, it’s time to triage what was written down, and that’s the fun part. That’s where it all pays off.

First, I identify the items I can’t do anything about: the non-actionable items. My lists often include a few of these, which is a sign I tend to worry about things I cannot control. The good thing about these items is that the actionable is simple: acknowledge and move on promptly. A few examples:

  • Waiting for someone to complete something we need to continue working (they won’t go faster if we worry about it)
  • Worrying about the weather for our next weekend adventure (we really can’t blow those clouds away ourselves)
  • Thinking about something embarrassing we did years ago (nothing we can do about that anymore)

Then, I look at the remaining items (which should all be actionable somehow, even if indirectly so) and figure out how to tackle them individually. Every single one of these items gets its own action step aiming to return to a healthy, stable state of mind (cognitive consonance!). Here are a few examples and their action items:

  • Reach out to someone we forgot to message back
  • Don’t forget to take out the trash tomorrow
  • Clean bikes before road trip
  • Backup computers

At this point, our minds are clear, and we have a concrete list of actionables to tackle. Then, we repeat the exercise when we feel overwhelmed again.

David Perell strongly believes that writing (online) is one of the biggest opportunities in the world today. I think writing (offline) is an equally powerful opportunity for our own mental health. Or as Anne-Laure Le Cunff would write, “mental wealth”. When we combine the two by writing online (to share thoughts, remix ideas, get feedback) and offline (to recharge, reflect, reset), we get a pretty powerful life-building habit.


Setting up your own digital garden with Jekyll

Permalink - Posted on 2020-05-20 00:00

Eager to try the demo of the template? 👉 digital-garden-jekyll-template.netlify.app

Digital gardens and public note-taking spaces are all the rage these days, as they’re a great way to foster an environment where ideas mesh together and from which others can draw inspiration. You can set up a digital garden of your own in a few minutes, and have your own personal corner of the Internet where you’ll seed and grow ideas.

If you’re familiar with Markdown and/or HTML, you’ll be right at home, as that’s how your notes will be formatted.

The end result will look similar to this:

Without further ado, let’s get started!

Instructions

0. Set up prerequisites

For this tutorial, we’ll need a few tools installed on your machine: Git, Ruby, and Bundler (you may have some of these already). Follow the instructions on each tool’s website to install them.

You’ll also need to create accounts on the following services:

  • GitHub (to store your digital garden files)
  • Netlify (to serve your digital garden website to the world)

Once everything is set up, let’s start creating your own digital garden.

1. Create a fork of the template repository

To simplify things, I provide the template shown in the image above to get you started. You can always tweak this template to your taste later.

Visit the GitHub page for my template repository (maximevaillancourt/digital-garden-jekyll-template), and fork it to your account using the Fork button:

Once the forking process is complete, you should have a fork (essentially a copy) of my template in your own GitHub account. On the GitHub page for your repository, click on the green “Clone or download” button, and copy the URL: we’ll need it for the next step.

2. Clone your repository locally

Next, we want to download the files from your GitHub repository onto your local machine. To do this, replace <YOUR_COPIED_URL_HERE> in the command below with the URL you copied in the previous step, then execute this command:

$ git clone <YOUR_COPIED_URL_HERE> my-digital-garden

As a reference point, this is what it looks like for me (the difference is likely just the GitHub username):

$ git clone git@github.com:maximevaillancourt/digital-garden-jekyll-template.git my-digital-garden

Then, navigate into the directory that was just created:

$ cd my-digital-garden

3. Test out the site locally

Sweet! You now have your repository’s source code on your machine. Within the my-digital-garden directory, run the following command to install the necessary dependencies like Jekyll:

$ bundle

Once that’s done, ask Jekyll to start serving the site locally:

$ bundle exec jekyll serve

Then, open up http://localhost:4000 in your browser.

If everything’s done correctly, you should now see the home page of your digital garden. 🎉

Keep in mind that this site is only available locally (notice the localhost part of the URL), so if we want it to be available on the Internet for everyone to enjoy, we need to deploy it to the Internet: we’ll use Netlify for that in the next step.

4. Connect your GitHub repository to Netlify

Netlify lets you automatically deploy your digital garden on to the Internet when you update your GitHub repository. To do this, we need to connect your GitHub repository to Netlify:

  1. Log in to Netlify
  2. Once logged in, click the “New site from Git” button
  3. On the next page, select GitHub as the continuous deployment provider (you may need to authorize the connection, in which case, approve it)
  4. On the next page, select your digital garden repository from the list
  5. On the next page, keep the default settings, and click on “Deploy site”.

That was easy! We’re almost done.

Wait a couple of minutes for the initial deploy to complete.

Once that’s done, your digital garden should be available on the Internet via a generic Netlify URL, which you can change to a custom domain later if you’d like.

Now the cool thing is this: whenever you push an update to your GitHub repository, Netlify will automatically deploy your updates to the Internet.

5. Start tending to your digital garden

At this point, you can start updating the files on your machine (in the my-digital-garden folder) to change your digital garden to your liking: update the copy, add some notes, tweak the layout, customize the colors, etc. Once you have something you’re happy with, push your changes to your GitHub repository with the following commands:

$ git add --all
$ git commit -m 'Update content'
$ git push origin master

If that command succeeds and the rest of the tutorial was done correctly, in a couple of minutes, you should see your changes live on your Netlify website. 🚀

And we’re done! You now have your own digital garden. Take care of your mind and the rest will follow. 🍃


If you’re curious, take a look at my own (tiny) digital garden right here.

Similarly, if you made it this far, you’ll likely want to join the “Digital Gardeners” Telegram group: we’re a likeminded bunch of folks nerding out on this digital garden thing. Join us! 🧠

If you have any feedback regarding this tutorial, please reach out to me on Twitter (@vaillancourtmax): I’ll be happy to help you out!


Why I still use a ThinkPad X220 in 2019

Permalink - Posted on 2019-11-24 00:00

Update (2020-05-18): I’ve since switched to a more powerful desktop computer for my photography business. I still use the X220 as a dedicated machine to connect with Zwift, a social cycling app. 🚴


My personal machine is an 8-year-old Lenovo ThinkPad X220 running Ubuntu 18.04 with i3wm.

It does not have a single USB-C port. It sports a 1366x768 TN display panel (in case you’re wondering, you can see each individual pixel with your bare eyes). The battery life of the 4-cell battery is horrendous (I barely get 2 hours out of this thing). The trackpad is so small, one could even wonder if it’s a trackpad for ants (no need to say that I never use it). The Wi-Fi card only supports 2.4GHz networks, so hopes for blazingly fast Wi-Fi are to be pushed aside. There’s even a bit of gaffer tape on the bottom left corner of the body to hold the cracked plastic together.

Lenovo ThinkPad X220 with lid open
Speak of the devil...

At my day job, I’m lucky enough to work on a top-spec MacBook Pro provided by my employer. It has a glorious Retina display, 16GB of RAM, a modern Core i7 CPU, and a huge trackpad to boot. All in all, it’s a pretty fancy machine, one that many people would love to use as a daily driver.

I mean, let’s just compare the trackpads for a second. It’s almost funny at that point.

Comparison of trackpad size between Lenovo ThinkPad X220 and 15 inch MacBook Pro
Trackpad size comparison. ThinkPad X220 on the left. 15" MacBook Pro on the right.

Surprisingly though, out of the two laptops, my favourite is not the MacBook Pro. When I’m at home, I tuck the aluminium slab away and take out the magnesium brick that is the ThinkPad X220.

It’s not pretty. It’s not particularly fast. But it does everything I need, and it’s always ready for everything I throw at it. Could it be a bit of nostalgia for old school hardware? Maybe.


I strongly believe a Lenovo ThinkPad X220 is still a terrific laptop to use in 2019 and beyond. It’s not for everyone, but the X220 definitely sparks joy. Plus, its accessible price point makes it almost impossible to ignore. I got mine second-hand (third-hand? fourth-hand? I don’t even know) in great condition for less than 200$ in Canada.

In practical terms: the X220 plays 1080p videos from YouTube wonderfully, renders Portal 2 quite happily (albeit with lower graphics quality than what you may usually enjoy on higher-end machines), and is a perfect machine to dual boot Windows on for maximum value. I spend most of my time in a Web browser or CLI tools, so it’s not like I’m running complex simulations, but still.

The X220, just like any other classic ThinkPad, is extensible, sturdy, reliable, and provides everything I could ask for in a laptop.

Extensibility

The classic ThinkPad laptops have “extensibility” written all over them. Most components are user-replaceable. In “ship of Theseus” fashion, if you individually replace every single component of the X220 one at a time, is it still the same X220?

Seriously though, just look at this list:

  • User-replaceable display
  • User-replaceable wireless card
  • User-replaceable keyboard
  • User-replaceable RAM
  • User-replaceable battery
  • User-replaceable 2.5” storage and mSATA

That’s a list many laptop owners can only dream of. Laptops are increasingly shut tight deliberately, preventing users from fixing and/or upgrading their devices themselves. Not with a classic ThinkPad though.

I previously owned a ThinkPad X230, which many consider to be part of the last generation of “classic” ThinkPads. Multiple components of that X230 had been upgraded: IPS display, SSD storage, additional RAM, backlit keyboard in my native language, new 9-cell battery, etc. It was a dream machine, and the X220, just like other classic ThinkPads, offers the same extensibility and user-friendly servicing. I eventually bricked the X230 by spilling water in the underside RAM slot (weird accident, don’t ask).

After bricking the X230, I purchased a second-hand ThinkPad X250 on eBay, only to sell it a few weeks later as it’s a huge step backwards compared to the X220/X230: there’s only one user-accessible RAM slot in the X250 (instead of two). The rest of the RAM is soldered to the board. The keyboard is user-replaceable, but to do so you need to take the entire computer apart (instead of just replacing the keyboard directly as with the X220/X230). Like, what? Who thought that was a good idea?

Compatibility

Being a 2011 laptop, it also features various ports that some modern laptop users may only have heard of. At work, where everyone uses a top-of-the-line MacBook Pro, it’s like some sort of utopia where everything is wireless, and we don’t ever need to use the USB-C ports for anything other than charging the laptops or connecting to a giant 4K display.

In the real world, however, you’d need a handful of adapters with a MacBook Pro to connect to the rest of the world. The X220 provides everything you could practically ask for here:

  • USB-A ports (3 of them!)
  • SD card slot
  • Digital video out (DisplayPort)
  • Analog video out (good old VGA)
  • Ethernet port
  • Kensington lock
Left hand side of ThinkPad X220, showing two USB-A ports, SD card slot, VGA out, DisplayPort, and physical Wi-Fi killswitch
Left side of the X220. A whole world of connectivity awaits.

Plus, there are other goodies about this machine:

  • 7-row keyboard
  • Visual status indicators (Wi-Fi, Bluetooth, battery, storage I/O)
  • Physical Wi-Fi killswitch
  • ThinkLight above the screen for late-night hacking sessions

Reliability

I like to think that a classic ThinkPad is akin to a Toyota Corolla, one of the most (if not the most) reliable production cars ever produced. Give it a good and thorough clean up once a year, change the oil at regular intervals, keep your software up-to-date, and you’ll enjoy this machine for a long time.

Classic ThinkPads just feel like business. They won’t let you down. The /r/thinkpad subreddit is full of classic ThinkPads (some of which I would even call “retro” rather than “classic”), and these things just keep on running, decades after their initial release date.

The laptop’s shell is made out of magnesium instead of plastic, making it extra sturdy. The keyboard feels great. Not your typical cheap keyboard from your run-of-the-mill HP laptop. The display hinges are solid.

Again, going back to the X250 I used for a couple of weeks: it felt cheap compared to the X220/X230. The shell was made out of plastic, the display seemed fragile, and the trackpoint buttons felt flimsy. Not a great experience coming from an X230.

That’s when I knew I’d go for the X220, and stay for a while.


If you’re looking to purchase a second-hand classic ThinkPad, do it. Buy the thing. Slap a GNU/Linux distribution on there (or *BSD, if you’re into that sort of thing), and have fun.


On sleepiness, activation energy, and flow

Permalink - Posted on 2019-09-30 00:00

I noticed something noteworthy a few weeks ago.

It seems that if I sleep a little less than what is usually recommended, say approximately 6 hours instead of the usual 7-9 hours, I find it easier to get started on tasks and keep the ball rolling throughout the day. In other words, sleeping less seems to make me more productive.

To be clear, this observation does not fit in the overall picture painted by modern science, which notes that adequate sleep leads to improved health and productivity.

Now, what I observe is more than likely to be a false impression—a mere feeling that I’m more productive when really I’m just as productive as usual or even less so—but it’s something I’ve been able to reproduce often enough to feel like there must be something going on here, as it’s something I can temporarily leverage and benefit from. It’s also worth pointing out that this very biased, non-scientific experiment on a sample size of 1 should not be taken seriously. I’m simply documenting what I’ve observed.


In chemistry, there’s a concept called “activation energy”, which is the quantity of energy that must be provided to a system in order to generate a reaction. Unless a system is supplied enough energy to pass the threshold required to get things moving, the system won’t budge, and will remain inactive.

Imagine standing at the base of a mountain with a ball at your feet (this is the system). You want to kick the ball to your friend on the other side of the mountain (this is the reaction). If you kick the ball gently, it will roll up the mountain for a few meters, then roll back down to your feet—insufficient activation energy to trigger the reaction. Only if you kick the ball hard enough will it reach the top of the mountain and roll over to the other side.

Drawing a parallel with psychology and human behavior, we can use activation energy as a mental model for motivation and procrastination.

When faced with a given task, the human brain must come up with the required activation energy to get started and get the ball rolling onto the other side of the mountain. Without it, we remain still, unable to get started for a long period of time.


Analysis paralysis. Indecision & inaction. Overthinking and not moving. These are all related.

I sometimes get stuck, paralyzed by overthinking and overanalyzing a system. This indecision leads to inaction, which means I’m stuck at square one, where I continue analyzing the situation, hoping that eventually I’ll have enough information to get started.

In my experience, getting less sleep leads to a natural ability to short the above circuit and get moving immediately, while also reducing the time to reaching a state of flow.

If I dig in a little more, it seems to have to do with a desire to get whatever task is at hand over with, and a drive to move on to the next thing quickly. This, combined with a willingness to be more scrappy and resourceful as time passes, leads to a palpable sense of acceleration that I haven’t been able to replicate in any other way.

If you’ve experienced something similar, I’d love to hear more about it. At any rate, I’m happy to discuss this further over on Twitter (@vaillancourtmax) or Hacker News.


I didn’t know any better

Permalink - Posted on 2019-02-28 00:00

One day, when I was in elementary school (mid-2000s), I decided I would create a personal website to share games, news, and other cool tidbits of my life with my friends and family. I also wanted it to be password protected so that only my friends would see it, and to prevent big bad internet strangers from seeing private information (oh how times have changed).

However, I didn’t know anything about HTTP authentication, HTML forms, databases, or security best practices. I couldn’t have cared less, and figured I would find a way. I was, after all, in elementary school. Like, 10 years old maybe.

So I set out to build that website, and used the one technology I knew could achieve this: Flash.

That’s right, good old .swf files and all.

I first implemented the password protection this way: within the Flash view, there was a login screen with a single field where my friends would type in the password. Naturally it was a hardcoded password, straight in the .swf, because why the hell not, I’m 10 years old, this is fine.

What is this sound I hear? Ah, yes, that’s the sound of security engineers from around the world screaming in unison. Yes. I know. Again, I was 10 years old.

I encountered a problem pretty early though: what if I wanted to show different things to different people? Hmm.

Instead of protecting the website with the same password for everyone, I would need each of my friends and family members to have their own password (because duh, security) and, on top of that, I would need to surface different things to different people. So…

Again, with the infinite creativity of a 10-year-old who has no idea what they’re doing, I found a brilliant solution:

Create a separate .swf file for each of my friends, with a different hardcoded password in every of them, and serve them all as different files on my ISP-provided web server.

YES. I KNOW.

Amazing, isn’t it?

I just didn’t know any better.


Don’t fear asking questions

Permalink - Posted on 2018-11-11 00:00

These days, a few peers of mine and I are working on a software development client project involving a React Native mobile app, an Express.js API, and a React web admin. While I’ve been developing Web applications using JavaScript for some time now, this is their first project using JavaScript.

This implies learning a new programming language and its tooling, as well as adapting to a mindset completely different from their object-oriented background. I have to say, I applaud their enthusiasm and motivation to jump through the hoops of starting a completely new project with a new programming language.

Naturally, this leads to my peers hitting roadblocks that they first try to fix by themselves. After a few attempts and little bit of frustration, they usually turn to me for help, which I’m more than happy to give. This usually happens after ~15 minutes (see Intercom’s 15 Minute Rule).

The other day, I had my earbuds in, and I could just feel that one of my peers was struggling with an issue, so I removed one earbud and gently asked, “What’s up?”. They explained the issue and asked if I could help, and I then explained that I didn’t know how to fix it… yet.

The first thing I do after I’ve gone through my mental catalog of past issues and fixes, is ask Google. I usually just plug in the error message plus a few related keywords in the search bar, and hope for the best. Most of the time, I find something within ~5 minutes. Great. Problem solved. Let’s move on to the next thing.

So after I told them I didn’t know how to fix it (yet), I did what I usually do: I said “let me see what I can do”, and turned to Google.

That’s when they said something that surprised me. It went a little like this:

“You know, when I ask you a question and you drop everything you’re doing to start searching for my issue, I feel like I’m distracting you from whatever you were doing. You can just tell me you don’t know and I’ll keep debugging.”

I then realized that I might have unintentionally communicated that I felt distracted when others asked for help, and that I did not want to help, which would lead them to feel bad about asking. To me, it’s quite the opposite: I explained that when I help others out, it’s a win-win situation—the person facing the issue gets unblocked and can continue working, and both parties get to learn from whatever issue is at hand.

Of course, asking questions every 10 minutes or asking questions before even trying to fix the issue is a step in the wrong direction. It’s all about balance between proactive problem solving and knowing when to seek help.

Key takeaways:

  • When facing an issue, don’t fear asking questions;
  • When helping others out, make sure your vocabulary and body language communicate that you’re willing to help.

Questions ⇒ knowledge.


Challenge your own beliefs and opinions, one tab at a time

Permalink - Posted on 2018-02-24 00:00

TL;DR: I just published my first browser extension, which displays a random popular post from the /r/ChangeMyView subreddit on the New Tab page. Try it now (Firefox, Chrome).

I love the /r/ChangeMyView subreddit. For newcomers, CMV is essentially a forum filled with people stating an opinion they have on a given topic, and asking others to give counter-arguments. Here’s an example thread by /u/filipovskii_off:

CMV: The main purpose of any government is to stay in power.

Here, the original poster is basically asking others to convince him otherwise. One of the answers (by /u/Pinewood74) is quite interesting:

“[Having] power” isn’t synonymous with “attempting to stay in power”. … You can govern, but not actively attempt to stay in power. Let’s say a powerful counter-government party was starting to develop, but you couldn’t do anything about it without violating the bill of rights, so you instead just keep working at things you can do something about like unemployment. That’s an example of governing, but not attempting to stay in power.

Why, isn’t that interesting. Maybe that changed your view, too.

What’s more, the nature of threads in /r/ChangeMyView ranges from deeper subjects (politics, religion, depression) to lighter subjects (ice cream, dinosaurs, even pop-tarts).

Comments on CMV posts are packed with intelligent discussion and respectful debate between users. I’ve learned many things from this subreddit, and have gained a better understanding of our world through the arguments presented in the comments.

To sum it up: /r/ChangeMyView is a great place to see the world through the eyes of other human beings, and it’s a great way to remind oneself that opinions can be changed quite easily once you understand a certain topic a little better.

To get into that growth mindset more often, I created a browser extension that simply displays a random popular post from the CMV subreddit, so you can get your beliefs challenged on the regular.

A perfect way to foster that growth mindset.

Give it a try (Firefox, Chrome), and share your feedback. Also, the extension is open source.
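For the curious, the heart of such an extension is tiny: fetch the subreddit’s top-posts listing from Reddit’s public JSON API and pick one entry at random. Here’s a minimal sketch of that selection step (in Python for brevity; the actual extension is JavaScript, the function name is my own invention, and the listing below is a hand-made fixture mimicking the shape of Reddit’s listing response rather than a live API call):

```python
import random

def pick_random_post(listing: dict) -> dict:
    """Pick one post at random from a Reddit-style listing payload."""
    posts = [child["data"] for child in listing["data"]["children"]]
    return random.choice(posts)

# Hand-made fixture mimicking the shape of the JSON returned by
# https://www.reddit.com/r/changemyview/top.json
listing = {
    "data": {
        "children": [
            {"data": {"title": "CMV: The main purpose of any government is to stay in power.",
                      "permalink": "/r/changemyview/comments/abc123/"}},
            {"data": {"title": "CMV: Pop-tarts are a breakfast food.",
                      "permalink": "/r/changemyview/comments/def456/"}},
        ]
    }
}

post = pick_random_post(listing)
print(post["title"])  # a randomly chosen CMV headline
```

In the real extension, the same selection runs against the live listing on every New Tab load, which is what keeps the stream of challenged beliefs fresh.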


Inspecting a “USB Drop” Attack Using olevba.py

Permalink - Posted on 2017-04-07 00:00

TL;DR: Never plug USB keys you find lying around into your computer. They may contain malware that silently deploys the moment you plug them in, stealing sensitive information and downloading viruses in the background.


For those of you who know me, I’m the kind of guy who runs Linux on his computer full-time (except for the occasional music production or video editing session). This means no Windows vulnerabilities to worry about, no random updates starting while I’m working… and also no Microsoft Office suite. This is important to the story, so keep it in mind.

It all started with a regular old bright orange USB key that was left laying around on a meeting room table.

The key itself wasn’t identified and did not appear to give away any information about its owner, so I thought I would just do my good deed of the day and try to find some sort of IF_FOUND.txt file to identify its owner and give it back to them. So I proceeded to boot up my Linux partition (you can never be too cautious!), then plugged in the key.

At first glance, the key contained seemingly important (and sensitive!) information about the company’s assets and strategic planning. I first wondered why anyone would think it safe to carry such valuable documents on a friggin’ unencrypted USB key (and, worse, to forget it in a meeting room). Then, in my attempt to identify the owner of the key, I opened a random folder and started browsing through the files.

A few seconds in, two oddities made me realize that something wasn’t quite right.

First, all the documents on the key had the .docm extension, which denotes “Office Word Documents with Macros”: Word documents with Visual Basic for Applications (VBA) code baked in. In Excel spreadsheets and Access databases especially, VBA macros enable extra functionality and can automate otherwise mundane sequences of actions.

The other thing that tipped me off was the Date modified attribute of the files on the key: they were all set to the exact same date. Sure, the files could all have been copied together and their Date modified fields reset to the same moment, but still. It just didn’t feel right.

The combination of the way the key was left on the table AND the two oddities I found led me to think about a completely different scenario:

This wasn’t a regular old “normal” USB key. Somebody was trying to fool me, and I needed to delve deeper.


I took the risk of opening one of the Word documents using LibreOffice (with macros disabled, of course), but was left disappointed when I saw that the document was completely empty. I tried opening another file, hoping to see something in there, but alas, it was completely empty as well.

The rabbit hole was deepening yet again.


Thinking about the .docm extension, I quickly understood that the documents probably contained malicious VBA code, but I did not know exactly what kind of stuff I was dealing with. I figured I would just keep digging until I found something. After a quick Google search, I stumbled upon the excellent olevba.py tool, part of the oletools collection, which parses MS Office documents to detect VBA macros and extract their source code.

Starting with MS Office 2007, Word documents are really just zipped OpenXML archives behind the .docx/.docm extensions, so after extracting one of the documents and browsing through the extracted files, I found an interesting binary file, significantly larger than the others, appropriately named vbaProject.bin. I was pretty confident this was the file containing malicious VBA code, probably downloading malware from a remote server and executing it on the victim’s machine (in this case, my computer, though since I was not on Windows, the chances of being attacked were pretty slim).
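(As a quick aside, you can confirm this archive structure yourself with nothing but a standard library. The snippet below is a minimal sketch: it builds a tiny in-memory stand-in for a macro-enabled document, then lists its members to spot the macro container by name. The entries and placeholder bytes are illustrative, not a real document.)

```python
import io
import zipfile

# Build a minimal in-memory stand-in for a .docm file: a ZIP archive with
# a sketch of the OpenXML layout, including the macro container
# word/vbaProject.bin. (A real document would contain many more entries.)
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as z:
    z.writestr("[Content_Types].xml", "<Types/>")
    z.writestr("word/document.xml", "<document/>")
    z.writestr("word/vbaProject.bin", b"\x00" * 64)  # placeholder macro bytes

# Listing the archive's members reveals the macro payload by name alone,
# before any tool like olevba.py even parses its contents.
with zipfile.ZipFile(buffer) as z:
    macro_members = [n for n in z.namelist() if n.endswith("vbaProject.bin")]

print(macro_members)  # → ['word/vbaProject.bin']
```

The same name check works on a real suspicious document: unzip it (or open it with any ZIP reader) and look for word/vbaProject.bin.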

However, extracting the source of the vbaProject.bin file using the olevba.py tool gave me a completely different answer:

Huh. So this was not malware, after all. This Word document was simply sending the hostname and username to a remote server to identify who had plugged in the rogue USB key and opened one of its files. I then realized that this was an infosec effort to identify imprudent employees who used the (intentionally left behind) USB keys (and who should never do that, unless they absolutely know and trust the USB key).

I was pleasantly surprised to see that the subroutine actually used MsgBox to create a popup announcing to the imprudent user that “the file is corrupted and cannot be opened”.

Long story short: only use USB keys you know and trust.


“Who is this KodekPL pushing commits to my GitHub repo?”

Permalink - Posted on 2015-12-22 00:00

That's what I thought when I saw "KodekPL" as the author of a commit in my repo... A commit I had just pushed myself seconds before. But why KodekPL and not someone else? Who is this guy?

I’m willing to bet you play Minecraft. A lot of Minecraft. Enough to warrant creating a server. Perhaps one day you downloaded Spigot. “Wait a second here, why are we talking about Minecraft? I’m talking about my GitHub repo here. Someone is in my repo! What the heck!?”, I hear you say.

Turns out, there’s a link between Spigot and KodekPL on GitHub.

Upon further investigation, “unconfigured@null.spigotmc.org” turns out to be the email address tied to KodekPL’s GitHub account, and Git, likely configured by Spigot’s tooling, picked that address up as yours, hence the wrong author on GitHub. Setting the global variables fixes this.

git config --global user.email your@email.com
git config --global user.name "Your Name"

That’s it. Push something to GitHub and enjoy!


#sherbylove

Permalink - Posted on 2015-11-21 00:00

Just a few months ago, I arrived at my first apartment with my roommates. I was becoming an adult. It was only yesterday, however, that I realized something: Sherbrooke is now what I consider to be home.

August 22nd, 2015. My first real day in Sherbrooke. LG Nexus 5.

Living in an apartment

When you move out of your parents’ house, the first few weeks are slightly disorienting. There are so many new things to experience, it’s a bit overwhelming! A new city and a new university to discover, new people to know, new systems to grasp, et cetera, et cetera!

For the first few nights at my apartment, I had the same feeling you get when you sleep at a friend’s place or at a hotel: “Oh, it’s temporary. I’ll be back in my real bed soon”. Except this time, you’re already in your real bed, at your real new place. It’s a special feeling, but you just remind yourself that one day, you’ll feel at home here. And indeed, one day it hits you.

Sherbrooke is my home.

It hit me right in the face yesterday afternoon, at the end of my database class. It’s my last class of the week, and when it ends at 1500, that’s usually when I start to think about my weekend, whether I’m going back to my parents’, or I have projects to work on, chores to do, or a grocery trip to plan.

But yesterday afternoon, I remembered I was staying in Sherbrooke for the weekend and it made me really happy. Not because I don’t want to see my family. Not because I have to stay in Sherbrooke for a project. Just because I feel good here. Sherbrooke has become what I call home.

After a few weeks living in an apartment, you end up creating a daily routine (for groceries, chores, studies, you name it). In time, everything becomes easier.

Mount Bellevue

If you didn’t know already, I really, really like Mount Bellevue. I’m lucky to live right next to it, and every morning I walk to university through Mount Bellevue Park. It might seem silly, but half an hour of walking in the park is a great way to wake up fast and feel good for the rest of the day!

In summer, deer show up from time to time, and in fall, the colors in the trees are truly beautiful. As for winter, I just don’t know yet. I’ll tell you about it in a few weeks.

November 20th, 2015. Small waterfall on Mount Bellevue, along the way to Université de Sherbrooke. Canon SL1 + 18-55mm.

In addition to being beautiful, the park is packed with facilities for recreational activities. It houses a small alpine ski station, walking trails, and mountain bike trails, and I just learned that there are dozens of geocaches all around the park. All in all, as the official park flyer says, it’s the best way to enjoy nature at the heart of the city of Sherbrooke!

November 5th, 2015. Winter is coming! LG Nexus 5.

There is, however, something I find disappointing: riding a bicycle in the park is prohibited between November and May. Meh. For winter biking lovers (that’s me!), it’s sad. Oh well, at least we can ride our bikes there during the summer, I guess.

Campus life

Université de Sherbrooke’s main campus sits above the rest of the city (altitude-wise). For starters, it’s pretty epic to climb to the top of Sherbrooke to go study, but campus life itself is rad. Thursday happy hours are real fun, and so are the many initiatives of the faculties and of the University itself: I’m thinking of Pedal your smoothie, free hot dogs under the sun, Vert & Or games, technical engineering projects, photo and literature contests, and much more!

September 28th, 2015. At the Administration faculty. LG Nexus 5.

You can definitely feel how the University cares about its students, who in turn proudly represent their University. Whether it’s a sticker on their planner or an internship around the world, USherbrooke students are proud.

Even within the University, there’s a feeling of pride between the Faculties. Sure, inter-faculty rivalries are always fun and friendly. Maybe with the exception of science vs. engineering. But yeah. Maybe I’ll write something about that in the future.

November 4th, 2015. Scholarship awards ceremony at the Science faculty. LG Nexus 5.

Living in Sherbrooke is like being part of a huge family. Welcome home. #sherbylove


Pedaling through Quebec’s winters

Permalink - Posted on 2015-05-30 00:00

21st of January 2015. The beast and a glorious sunset. Fuji X100s.

Well, summer's finally arrived in Quebec. It was about time! I consider the winter biking season over. Here is, then, an overview of two winters' worth of cycling through the snow.


When I was younger, I used to bike around the neighborhood, like most children my age would do, and sometimes I’d explore the forest trails near my place.

But only during summer.

That’s right, because summer’s warm and sunny. No way I’m going to ride around if it’s raining. There’s wind, it’s cold, water splashes everywhere… Nope. Not gonna happen.

Nowadays, I bike regularly, not only during summer or for fun, but also to get around. In fact, I use my bike all year round. “Even during winter!?”, some ask in astonishment. Yes sir, even during winter. “You’re crazy. You must get so cold!” Well, I’ll admit it might seem a bit extreme considering Quebec’s unforgiving winters, but have no fear: I don’t get cold. At all. Quite the opposite! After 2-3 minutes of mashing the pedals between ice patches and brown snow puddles, my body warms up, and the rest of the ride is enjoyable.

In reality, it’s really not that complicated or extraordinary to bike through snow. To ensure that your winter bicycling experience is enjoyable, you need at minimum a safe, working bike, a thin hat you can pop underneath a helmet, a scarf or a neck warmer, and some gloves or mittens. That’s it! You could also add lights for your bike, winter boots, studded tires, a snow blower, and whatever else might help. Seriously though: depending on how far you are from work or school, bike commuting in the winter is a realistic alternative for your daily comings and goings.

It certainly requires a period of adaptation for the first few rides, but trust me, the experience is really unique and worth it. … Maybe I really am crazy somehow. I could ride the bus or even drive around in our family’s car, but nope. I prefer moving around using my very own legs.

7th of March 2015. On my way to the Cégep. Fuji X100s.

You know, there’s something magical about biking around during winter. It’s partly because you know it isn’t usual, almost weird, that it becomes extraordinary. People look at us as if we’d lost our driving licenses, which is pretty funny. Sliding (and sometimes falling) on ice patches is funny as well (it’s all about the mindset!). Getting stuck in 2 feet of snow is a reminder of how powerful Mother Nature really is. Walking next to your bike because winds reach speeds of nearly 70 km/h (44 mph) while the temperature sits at -25 degrees Celsius (that’s -13 degrees Fahrenheit) before the wind factor is pretty memorable, thank you very much.

10th of December 2014. That ride was a good workout, oh yes it was. Fuji X100s.

In February of 2014, I wrote about the experience of a very enjoyable morning ride to school:

This morning’s commute especially was magical. -10°C (14°F), which is comfortable. Snowflakes peacefully falling from the pale blue sky. A layer of light snow covering the streets. The muffled crunching sound of rubber tires on snow. Wind respectfully making room for calmness. Whiteness all around.

The sight was splendid. I slowed down, breathed in the chilly winter air and let vapor out of my nostrils. From speeding into the cold waft, my eyes were covered with water: I tightly shut my eyes together, pushing droplets of water down onto my right cheek.

After a few pedal strokes, the core of my body heated up. Warmth began filling my limbs. I felt tingles spread through my legs, eventually reaching the tip of my toes. My cheeks colored pink. […] Winter bicycling is pure fun.

For the last two years, I’ve been riding my bicycle to get to school for the most part. Major snowstorms and other safety risks got me to ride the bus or drive the car, just to be safe. Now that I’ve graduated from Cégep, I’m off to the Université de Sherbrooke in a few months.

I’m really looking forward to discovering the city on my bike, all year round!


A Better Way to Journal

Permalink - Posted on 2014-06-23 00:00

Hey, I'm back. It's been a while.

In my list of 12 New Year's Resolutions, I planned on keeping a journal during the month of June. I had written a few entries before, but none of them really added value to my life or seemed to make me think as much as I thought they would. I started writing in my journal on June 1st anyway, thinking maybe this time it would be different.

It was not.

Every entry felt like a waste of time. Unlike a blog, a journal is seen by no one but you. "That's boring", I thought. "What's the point of having a journal in the first place?" is the question that led me to a better way to journal.

Introducing the "Short and Sweet if you want it to be" journal. It is mostly inspired by the Five Minute Journal, a simple journal created by two entrepreneurs from Toronto, Canada, who claim that their journal is "the simplest, most effective thing you can do every day to be happier". And to be honest, they're right.

What I especially enjoy about the Five Minute Journal is the philosophy behind it and the science that explains it. Read more about that in their book.

The Five Minute Journal is simple. Every morning, write down:

  • three things you're grateful for
  • three things that would make today amazing
  • your daily affirmation

Then, every evening, write down:

  • three amazing things that happened today
  • what you could have done to make today better

That's it. I can see why they call it the Five Minute Journal.

While I loved this idea, it felt limiting. So I simply replaced my journaling routine with the Five Minute Journal structure. However, nothing stops me from writing longer stretches of thought if the desire arises. All in all, it's a minimum of five minutes a day, but it can be more if I feel like it. Five minutes a day is all it takes to build a strong journaling habit.

It really works. It is the simplest, most effective thing you can do every day to be happier. After only a week, I felt peace and calmness inside. I now appreciate the little things in life much, much more than before I started journaling. I become a better person, I learn, I grow, every single day.


Just Write

Permalink - Posted on 2014-03-11 00:00

Listen to your thoughts.

Are you really reading this? Or are you actually thinking about something else? Do you ever "read" a sentence or a paragraph over and over again without really reading it? You're pointing your eyes at the words, yet your brain doesn't compute them and doesn't create meaning from them.

Stop. Acknowledge your thoughts.

You're thinking about that project you have to hand in tomorrow. Now you're thinking about next week's exam you didn't study for. Now you're wondering what you'll cook for tonight's dinner. Now you're thinking about your boyfriend. Now you're thinking about that pretty girl you saw earlier today. Now you're being jealous of your friends. Now you're fearing of missing out on something.

Breathe. Organize your thoughts.

You're planning your day, or your week to come. Now you're figuring out how you'll get your daughter to her dancing class and your son to his soccer game. Now you're silencing toxic thoughts and focusing on the important things. Now you're pushing anxious feelings aside.


Grab a pen. Write down everything that's on your mind.

You don't really know how to start it off. "Just start", as the saying goes. Write down the first thing that comes to your mind. Anything, really. Start with the one thing that stresses you out the most. Explain it in detail.

After a dozen lines of disorganized bits of thoughts, something happens: your synapses connect and information rushes through your brain. You understand your thoughts, you spot patterns.

A few more lines later, your mind feels clearer. Everything shapes up. You feel at peace. You keep writing.

Half an hour later, you end up with ten pages of long-winded, mostly incoherent sentences that don't make any sense one after the other.

Yet, you place the pen on the table and feel happy. You feel light as air. You're profoundly proud of the text in front of you.


Writing is therapy. When things aren't going as planned, write. When you feel on top of the world, write. When you're alone, write. When you feel down, write. When your mind feels clogged and heavy, ready to explode, write. Let go of the information overdose and write it all down.

Writing really does help. Writing your thoughts down allows you to see the situation from a (until now) completely unknown and unimagined perspective. That's how journals work: write down, for example, how today was a bad day, and positive moments from the day will pop out; soon enough, the day wasn't so bad after all.

It allows for creativity and imagination to bloom. Fiction pieces are also a great way to relieve the stress you keep inside. Perhaps writing a fictional story about that annoying neighbor of yours will alleviate your feelings towards them. Maybe even put a smile on your face.


In conclusion, just write. Anything. Even the most mundane stuff. Turn it into something amazing. Clean up your mind. Free your thoughts.

Write. Even if you only write once. Even if it's just one paragraph. Even if you make mistakes.

Embrace the power of writing. Just write.


Technology Is Not an End (part 2)

Permalink - Posted on 2014-03-05 00:00

Technology and the Internet especially let us work faster and more efficiently than ever before. Thanks to them, we can chat with people from around the world instantly. We can read and learn about anything in a matter of seconds. We can download a full length movie that took years to create... in roughly 10 minutes.

With technology, men walked on the Moon. That same Moon you see every night far up in the sky. With technology, we put a rover on Mars. Man, think about it. There is an unmanned machine doing its thing on a planet some 200 million kilometers away, on which no human has ever set foot.

To summarize, I'll use this post from Chris Pirillo, which explains why we should embrace technology: "Efficiency, connectivity, productivity, and comfortability. Also, gadgets are cool!!!"

However, as noted in part I of "Technology is not an end", it is far easier to embrace technology but use it the wrong way, wasting our time and the time of others, while not creating anything good. That is because technology is always expanding. There will never be an end to what technology can achieve, especially when it comes to the Internet.

...

There will always be a new YouTube video. A new tweet. A new gadget. A new trend to follow. A new operating system. A new scientific advance. A new everything. It is time to stop using technology (and the Internet especially) as an end. Time to turn it into a means to reach goals and become better.

I'm mostly writing this post for myself, as a way to write down the ideal way I would use technology. These new "rules" and goals will guide the way towards a mindful usage of technology. Here it goes.

I'll have to have a valid reason to use my laptop before turning it on (homework issue, daily Duolingo, writing for this blog). When the task is complete and/or the goal is reached, I will turn the computer off.

I will turn my laptop off completely when not needed (instead of leaving it turned on all the time like I currently do). I will then store it somewhere out of sight. I may take inspiration from Leo Babauta's technique of breaking the day down into 30-minute chunks.

I will not use my laptop after 9 o'clock, nor will I take it in bed with me in the morning.

I will not tinker with the looks of my computer. I don't care if it's pretty; I just want it to get things done.

Wikipedia, Duolingo, Codecademy and Khan Academy are my friends.

I will stop surfing sites mindlessly. For example, I'm not even registered on Reddit and still visit the damn site multiple times every day. Same goes for Hacker News, Gmail, Wordpress, and my local news website. There are so many things I could do instead! When I catch myself mindlessly surfing, I will shut everything down and pick up a book or do some homework.

...

This list is not final, and I may very well expand it later on. I'll admit that I could probably have written "The Internet is not an end" instead of "Technology is not an end", but still, some of my rules aren't related to using the Internet.

Further reading:

I will now go and read a book. I invite you to do the same.


Technology Is Not an End

Permalink - Posted on 2014-03-04 00:00

Just a few years ago, you would use a computer to create a document, look up information, email a friend, etc. It was a tool. Today, computers, tablets, (smart)phones and the like have become distractions: they've become ends.

We're having more and more trouble focusing and staying on task for extended periods of time: the average attention span of teenagers in 2000 was 12 seconds. In 2013, it had dropped to 8 seconds. Four fewer seconds in only 13 years. Give it 13 more years and we're down to a 4-second attention span. (!)

I too experience this: oftentimes, when I want (or have) to read a long essay, I read a few sentences and already my mind wanders to completely unrelated subjects. Even when I write on this blog, I (too) frequently stop writing and do something else that has absolutely no relation to the act of writing itself.

...

When we should be working, we mess around Facebook and Reddit instead. When we should chat in person with "friends", we text other "friends" instead. When we should listen to the professor in class, we play Candy Crush instead. When we should do homework at home, we watch YouTube videos instead (I'm guilty of this one).

Besides, what's interesting is when you get hordes of students that whine about that exam they just failed: they suddenly find interest in talking to the teacher (not with the teacher, mind you). It's one of those rare moments where they look up from their phones and use words to express anger as to why it's the prof's fault.

...

It's 2014 and we're allowing technology to become an end. You may say times change. I say, despite my young age and lack of life experience, that we're not going in the right direction. I know people who are failing classes and repeating grades because they can't stop being on their phones. They don't even use the damn things to communicate, which is their sole and original purpose: they use them for the sake of having them in their hands.

How many people do you see on a daily basis staring at their devices, mindlessly scrolling down a list of status updates and tweets, each more narcissistic and useless than the one before? How often do you see a mother/father on their phone instead of playing with their young child?

...

The principle is not restricted to phones, however: one may very well use their computer as an end (I am guilty of this one too). While I could read a book or play music, I decide to tinker around on the computer for no apparent reason instead.

Why so? Why use the computer if I'm not using its features to reach a goal (sending an email, completing a project, researching for a paper)? It's as if I were to buy a guitar just to hold it in my hands, tune the strings, clean the wooden body, and put it on the stand 15 minutes later. Then, maybe, play it only once or twice a month.

Or the same as if I were to buy a car, get a license plate, pay for insurance and a driver's license, fill the car with gas, clean it once a week... and keep it parked in the driveway 23 hours a day.

Oh... wait. That's what actually happens with cars.

...

Let's not use technology as an end.
Let's keep it to being a means that allows us to achieve a goal.
This way, we can focus on the important things.


My 12 New Year’s Resolutions

Permalink - Posted on 2013-11-25 00:00

I've always loved reading the number of the upcoming year. In 2012, I loved reading "2013"; in 2011, I loved reading "2012"; and so on. 2014. What a beautiful number. It just feels so fresh! It's full of change, of progress, of improvements, of "I promise, I'll finally do [insert dream here]". It represents a brighter future!

Or does it?

According to time management firm FranklinCovey, 80% of people who make New Year's resolutions will eventually break them. A third won't even make it to the end of January. Richard Wiseman, professor of psychology at the University of Hertfordshire, estimates that in time, 88% of resolutions fail.

I, my friends, have a plan to escape resolutions failure. You see, there are three reasons why New Year's resolutions fail:

1. They are too vague

This one is a big deal. Saying "I will exercise more in 2014" or "I'll quit smoking" to your aunt at 11:54pm on December 31st simply won't cut it. You must be precise and prepare yourself.

Set goals, such as "I will run a 5K at [insert local event here], which is 6 months away, and will run it in less than 32 minutes". Now that is going to get you off the couch and motivate you. You may ask friends and family to hold you accountable, which will motivate you even more.

2. They are not exciting

It's all fun and games until real challenges come along and block your way to success. On the surface, New Year's resolutions may seem easy to achieve, even fun, while in reality they're just a matter of hard work. No wonder most of them fail.

That's why you have to turn the resolutions into something that's actually fun, like a challenge. Get your friends on board: for example, the person who loses the most weight over a particular time span wins a prize, courtesy of the others. Or another example: the person who (really) reads the most books during the year to come (with supporting evidence) wins a library membership.

3. They don't directly improve our lives

Offering time to volunteer is a popular New Year's resolution. I'm also pretty sure it's one of those that gets abandoned the most, because it doesn't concretely change anything in your very own personal life. However, volunteering just... feels good.

“When we do good deeds,” [bioethicist Stephen G.] Post says, “we’re rewarded by a dopamine pulse. Giving a donation or volunteering in a food bank tweaks the same source of pleasure that lights up when we eat or have sex. It’s clear that helping others, even at low thresholds of several hours of volunteerism a week, creates mood elevation.”

That's just one example. There are many many more, such as learning something new (who knows when you'll need to play the guitar at the bonfire?), getting organized (less clutter, less stress, more time), reading more... Just pick one!

Actually... how about picking many resolutions? How about 12? Yeah, twelve. That seems about right. Oh and look at that, there are twelve months in a year. How convenient!

Focusing on one resolution every month turns each one into a habit that will stick with you for the rest of your life. That, my friends, is how you'll actually stick to your resolutions: by turning them into habits.

My personal list

The following is a list of habits I decided to implement in my life. You may draw inspiration from this list or just downright steal it. It's up to you.

January

Exercising. I already bike to college every day of the week (strong glutes, yo!), so I'll focus on core and upper body exercises. With the doorway pull-up bar I received for my birthday, I'll follow a weekly schedule inspired by Nerd Fitness (brilliant health blog!) and make sure I don't strain the same muscles day after day.

February

“Books are a uniquely portable magic.”
— Stephen King

Reading. I used to read a lot in elementary school: 12-book series, collections, and, sure enough, lots and lots of comics. In high school, the books I read can be counted on two hands, and they were all forced reads. February of 2014 will change that. I have already chosen a collection of books that seems interesting and I will make sure I have a list of individually curated books that I'll set aside time to read. A comfy chair, silence... and a book.

March

Writing. I will write 500 words every day and publish them right here, on this blog, as part of my "Daily 500" challenge. 750 words is a tad too much for me, especially if I want to stick to the habit. I don't know what I will write about, nor the kind of content I'll create. Stay tuned for March 2014!

April

Learning Spanish. I've been learning Spanish since 2010 and continue growing my knowledge of the language with Duolingo, an awesome (and free!) online service on which you can learn French, Spanish, Italian, and German, to name a few. I want to reach and master the "Verbs: Past" skill, which is 10 skills away from where I am now. Join me on Duolingo!

May

Meditating. In the past year, I kept stumbling upon the many benefits of meditation everywhere and anywhere on the web. May is the perfect month for creating this calming habit since college finals are in the last weeks of this month. I'll need to calm my mind and keep stress levels low, I'm pretty sure. Planning is key. Bring it on, finals.

June

Journaling. Just like meditation, journaling has many well-known benefits. From efficient problem-solving to knowing yourself better to seeing the world differently to clarifying your thoughts, daily journaling is a very healthy habit to adopt.

I use a small (and very thin) cardboard Moleskine my mom had but never used. In June, I'll journal my goals, my daily thoughts, my aspirations, and in December, I will add another component.

July

Cold showers. Waking up and feeling groggy doesn't make for a terrific summer. Let's fix that with a cold morning shower, complete with drying under the sun. I'll feel refreshed and ready to begin my day.

August

Decluttering. Every day of August, I'll purge one item I own (or more). I'll have a cleaner room, a cleaner mind and less to worry about.

September

Drinking three liters of water a day. I already drink two liters of water every day (although sometimes I only feel like one and a half). My habit goal for September will be to go from two to three liters of water, every single day. Water brightens the skin, purifies the body and cleanses the kidneys, all while rehydrating you and keeping you alive. Thanks, water.

October

Flossing. Ah, flossing. The only activity we all seem to forget, even after our dentist tells us to floss daily. Time to print a calendar, stick it on the bathroom cabinet door, and check off the days you flossed (this technique is also known as Jerry Seinfeld's "Don't Break The Chain").

November

Mindfulness. Awareness and attention are two prized resources that are constantly being drained by the distractions that surround us. For the eleventh month of 2014, I'll force myself to be mindful in everything I do. I'll wash my bowl. I'll chew slowly on food and appreciate it. I'll listen to my thoughts. I'll walk more slowly and enjoy what's around me.

December

Gratitude. Continuing June's journaling habit, I'll add gratitude in my journal. Every evening, after dinner most probably, I'll write the three things I'm most grateful for that day. Here's an example day: 1) I'm grateful for the reciprocated smile from that kind stranger on the street. 2) I'm grateful for the delicious dinner I had with my family. 3) I'm grateful for my eyes, which allow me to see the world.

...

I invite you to join me in this yearly life-improving challenge. The more, the merrier! Let's become better human beings in 2014... and beyond!

What are your New Year's resolutions for 2014? Did you ever succeed at a resolution? How do you cope with difficulties that come along and block your way?


College, Smartphones, and Becoming Better

Permalink - Posted on 2013-10-22 00:00

Let me be straightforward with you. I’m a college student and I recently started to dislike smartphones. I’m also stepping away (partly) from technology in general. Here’s why.

I think it’s safe to say that most college students own a smartphone. I don’t have exact numbers, but from what I see on campus, I estimate that easily 80% of college students own a smartphone. Not just a cellphone (which 99% of students have), a smart phone. Anywhere on campus, you’re almost sure to see a blueish glow shining on a few faces.

Walking to class has become a dangerous adventure (okay, that may be an exaggeration, but the following events I have seen myself, and they are 100% true). You have to get past:

  • Smartphone users bumping into each other (!) because they try to use their phones and walk at the same time;
  • People standing still in the middle of hallways, unable to keep walking because they're focused on their phones, as if walking suddenly dropped to the bottom of their priorities;
  • People walking very slowly for the reasons above;
  • Crowds of people blocking the hallway, all looking at a single four inch screen, watching not-so-funny Vine videos and Snapchat pictures.

If you’re still alive by now, congratulations. You are now in class, waiting for the professor to arrive. But wait! It’s not over. You are now surrounded by more smartphone users, watching stupid YouTube videos, playing Candy Crush and… texting. Nearly everyone around you is looking down at their phone, completely ignoring what’s around them.

After being in class for an hour, during which everyone got disturbed by someone accidentally activating Siri in their pocket and by another who received a phone call (bzz, bzz), the teacher allows for a ten-minute break. First thing to do? Smartphones out, everyone! Same thing when the class ends. People don’t even bother putting their binders and notebooks in their bags first, they take out their damn phones.

Later you want to have lunch with your friends. Uh-huh, no meaningful conversation for today because we have to talk about the latest Facebook posts! And the best tweets too! “Oh my god, oh my god! You HAVE to see this, it has like, two hundred and twenty likes!”

I want to break away from all this. I, as a human being, was not meant to have my eyes glued to a glowing screen while I could be doing something meaningful. Although social media is all the rage these days, it seems as though people are lonelier and farther from each other than they ever were.

I just find it plain sad to see a couple both staring at their own phone during dinner, barely exchanging a few words over their meals. Same thing goes for people at concerts, looking at the show through a phone screen. Look at the dang show, it’s right there in front of you in full HD retina vision! Experience it, feel it!

Smartphones are distraction generators, making it effortless to procrastinate on that project or assignment. Similarly, mindless phone usage happens when you’re waiting in line at the bank, for example. What do you do? You pull it out of your pocket, fiddle around on Facebook, update Twitter, etc. Same thing goes in your car when stuck in traffic (which, in fact, is you: you are traffic). Louis CK, in an interview with Conan, puts it best:

You need to build an ability to just be yourself and not be doing something. That’s what the phones are taking away, is the ability to just sit there. That’s being a person.

Human beings are meant to connect with each other, to experience life with all it has to offer, and to become better. I am meant to change the world, in any way possible. So are you. Aren’t we all? This is the main reason I feel I need to ditch technology a bit, to kick it out of my life for a while. I’m just trying to get my life back. Enough of being isolated by technology: time to wake up and live.

Here’s how it’s going so far:

  1. I deleted my Facebook account last June and haven’t looked back. I had deleted my first Facebook account about two years ago, but had to come back for team assignments. People can meet me in person, email me, or… call me. Calling, in my opinion, is a hundred times better than texting. You exchange more information in much less time and you actually hear a person’s voice with its emotions and intonations, which is completely different from a text message filled with emoticons.
  2. I sold my Galaxy Nexus and use a Nokia 5130 instead. The battery lasts a full week! It makes calls flawlessly and handles the occasional text message perfectly well.
  3. I switched to a paper planner and a Moleskine journal. Bye bye Google Tasks and Google Calendar.

What’s next from here?

  • No technology in my bedroom, which will replace the mindless Reddit browsing with meditation, creative entertainment (guitar, piano, drawing), and writing.
  • I have a list of new habits to implement in my life, and a list of bad habits to ditch.

Isn’t that what life is all about? Becoming better and touching other people’s lives for the best?