Recent content on Christopher Ng
Permalink - Posted on 2020-07-05 17:45
Just yesterday, I finished my first ever Capture The Flag (CTF) competition: CDDC 2020, organised by DSTA. I wanted to write up a quick post sharing my thoughts on this competition, partly because I can’t sleep thanks to 3am-bedtimes the past two nights.
Before this, I had a little experience with basic reverse engineering and other similar stuff. Taking part in my first real competition in this field, however, proved to be quite a different experience.
Prior to the actual day, there was a “training” phase that lasted around three weeks. I liked this idea, and believe it is helpful for the inexperienced: perhaps DSTA was hoping to “foster interest” in cybersecurity amongst Singaporean youths.
The actual competition itself spanned 48 hours, and took place remotely. Challenges were gated into six tiers, “warp gates” 5–0, each locked behind solving the previous tier. Overall, most of the challenges in the first three tiers (my limit) seemed to be standard CTF stuff: buffer overflow, disassembly, format string exploits, packet sniffing (Wireshark) and steganography, to name a few.
I was excited to apply what I learnt about chosen-plaintext attacks (AES ECB) in one of the challenges, “Hello” (gate 3): solving that gave me quite a sense of accomplishment. Steganography and Wireshark challenges were also new to me, though I had heard of them before, and I felt I learnt a fair bit from this experience.
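The weakness behind such chosen-plaintext attacks is easy to demonstrate: in ECB mode, identical plaintext blocks always encrypt to identical ciphertext blocks, so patterns in the plaintext survive encryption. A minimal sketch using openssl (the key below is a throwaway, nothing from the challenge):

```shell
# Two identical 16-byte plaintext blocks (32 zero bytes)
head -c 32 /dev/zero > /tmp/ecb-pt.bin

# Encrypt with AES-128-ECB under an arbitrary throwaway key
openssl enc -aes-128-ecb -nopad -K 00112233445566778899aabbccddeeff \
    -in /tmp/ecb-pt.bin -out /tmp/ecb-ct.bin

# The two ciphertext blocks come out byte-for-byte identical,
# revealing that the two plaintext blocks were equal
head -c 16 /tmp/ecb-ct.bin | od -An -tx1
tail -c 16 /tmp/ecb-ct.bin | od -An -tx1
```

By feeding chosen plaintexts and watching for repeated ciphertext blocks, an attacker can recover secrets appended to their input one byte at a time.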
I was a little disappointed that there was no “last hurrah” allowing unfettered access to all tiers: I had hoped to explore the “higher-level” challenges, like the ones on Active Directory, even if I would not have been able to solve them.
Generally, I felt most of the challenges I tried were designed alright. The tiers appear to be gated fairly well (or at least the first three). Some challenges, unfortunately, seemed to involve a large degree of guesswork or prior experience, especially those involving esoteric languages: “What Time Is It 2” immediately comes to mind.
Overall, I was happy to have had the chance to participate in an actual CTF competition. I would have liked an official “post-mortem” explaining how to solve the more difficult challenges, and some kind of archive of challenge files (docker images?) so that I can learn and try them out again in my own time. Luckily, there are already some write-ups from other participants, like this post by Justin Ong. Hopefully, more write-ups will be released by others, especially for the more difficult challenges.
Still, I think I learnt a good deal from this experience, and would definitely seek out more CTF competitions in the future. 😁
Permalink - Posted on 2020-05-14 00:00
In recent years, dark mode on displays has become the in-thing. With major OS vendors offering a system-wide dark mode option, and an ever-increasing number of applications and sites choosing to bring dark mode front-and-centre, there is little doubt about its popularity with consumers.
Advocates for dark mode often tout several claims in its favour, chief among them reduced eye strain and lower energy consumption.
Personally, however, I have always been more partial towards light mode: using it wherever offered, except at night, when I set my smartphone to activate night mode automatically. And though it may not have been the “better choice”, I chalked it up to personal preference.
Yet recently, I came across this article by Kev Quirk, sharing some information he discovered regarding the oft-touted benefits of dark mode. I realised that perhaps I am not the only one who believes light mode to be generally superior, and was inspired to try and find out a little more.
It turns out, studies have been conducted on the relative legibility of text on interfaces with positive polarity (black text on white backgrounds) versus negative polarity (white-on-black).
A few minutes of searching online surfaced several articles discussing the dark-vs-light issue. Of note, an article by Adam Engst for TidBITS cites two journal articles, one from 2013 in Ergonomics, and another from 2017 in Applied Ergonomics. As Engst summarised (emphasis mine):
… a dark-on-light (positive polarity) display like a Mac in Light Mode provides better performance in focusing of the eye, identifying letters, transcribing letters, text comprehension, reading speed, and proofreading performance, and at least some older studies suggest that using a positive polarity display results in less visual fatigue and increased visual comfort. The benefits apply to both the young and the old …
In addition, another article by Adamya Sharma for Android Authority raises a different point: for people with astigmatism1, light mode interfaces are easier to read. This is due to the increased light levels lessening the “deformative effects” from astigmatism.
As such, it appears that the popularity of dark mode may not truly be due to the touted benefits. As Lilly Smith writes for Fast Company:
… in regard to usability and legibility, dark mode isn’t actually better. Rather, it may be more indicative of a “bigger minimalism trend,” as Budiu puts it, or simply the increasing personalization of user interfaces overall.
More recently, Nielsen Norman Group has also released an informal literature review that cites the aforementioned two journal articles, and brings in some additional research.
One point raised was the potential long-term effects of sustained reading on light mode interfaces. According to a study published in Nature Research’s Scientific Reports in 2018, prolonged exposure to light mode may be associated with myopia (sample size: 7). Of course, further studies need to be done before a conclusion can be reached, but the research indicates there is such a possibility.
Furthermore, Nielsen Norman Group’s article highlights the results of another study from 1985, by Gordon Legge and colleagues at the University of Minnesota: individuals with cloudy ocular media2 — most commonly caused by cataracts — fared better when reading with dark mode interfaces.
The answer to this depends on the display in question.
Most displays in the market are liquid crystal displays (LCDs). Such displays require the use of a backlight to produce a visible image, as they do not produce light themselves. Many displays marketed as LED screens are, in fact, LED-backlit LCDs. Consequently, whether the content is dominated by white or black, the backlight stays on and continues consuming energy. In fact, a post by Bill Weihl on Google’s official blog suggests that displaying black on flat-panel monitors may actually consume more energy.
Nowadays, however, higher-end phones and displays utilise a newer technology known as Organic LED (OLED). In such screens, each pixel produces both its colour and its brightness: there is no longer any need for a backlight. With such devices, the use of dark mode does lead to a considerable reduction in energy consumed, according to this article by Kevin Purdy for iFixit.
For individuals with certain medical (eye) conditions, the debate between light and dark mode may not be clear cut.
But for the average person with normal vision, light mode appears to beat dark mode: it causes less visual fatigue and yields better overall reading performance.
As for energy consumption, it depends on the display in use: many higher-end phones come with OLED/AMOLED displays, and dark mode is useful for such devices. But on lower-end phones, and most desktop/laptop monitors, there is no difference in energy consumption since LCD/LED displays are used.
At the end of the day, the dark-mode hype appears to mostly be a matter of personal preference, and that is a perfectly valid reason. Personally, I believe I will stick to using light mode, as I am more comfortable with it.
Permalink - Posted on 2020-02-20 14:14
When I was starting on this site, I tried looking for pre-built themes that I could quickly modify to suit my own tastes. In the end, I was not fully satisfied with any that I found, and decided to roll my own. Having only lightly dabbled in HTML+CSS, the Hugo documentation and MDN web docs quickly became my best friends.
Fast-forward a couple of weeks, and my site was probably 80% done. But following that, I fell into the unfortunate trap of obsessing over unnecessary details:
Pursuing these micro-optimisations gave me a sense of achievement, letting me feel as though I were going above and beyond in squeezing out every last drop of performance from my site. But after the fact, I have come to the realisation that these are hardly useful for a small, personal blog.
Premature optimization is the root of all evil — Donald Knuth1
And indeed, the time spent obsessing over these small details was almost definitely not worth the paltry (if any) resultant performance gain. After all, when adding a single JPEG image brings with it ~150 KB of additional payload, saving ~2 KB hardly seems to matter.
Looking back, I feel that a major reason for my premature optimisation is that I lost track of what my site is supposed to be: my personal corner of the Internet. Being a small platform-for-one, this is no top-500 website that sees thousands of visitors a minute, and many of those “SEO best practices” are just not necessary for a simple static blog.
Another reason for my premature optimisation was that I was trying to rationalise delaying going live: by “working on improving” my site, I could tell myself that it was okay to not go public yet. Personally, this stemmed from my uncertainty over whether my site was good enough, though there could be many other reasons one might try to (unnecessarily) delay going live.
Here, I feel that the best strategy is to just carry on: a product will never be perfect, and there will almost definitely be some edge case in the real world that we failed to micro-optimise for.
Finally, I actually enjoyed obsessing over the small details: it made me feel like I was making something really special, something better than what is out there. In my case, I can afford the time spent fussing over the small details, since this is done in my own free time. But for non-hobby work, developers may not be able to afford wasting time chasing down trivial gains.
In the end, I actually do not regret micro-optimising my site. I believe that I have learnt a fair deal through this experience, and that is definitely valuable. Nonetheless, I also believe it is important to recognise when we start worrying about unnecessary optimisations, and to prevent ourselves from falling into the trap.
From Knuth’s book The Art of Computer Programming. ↩︎
Permalink - Posted on 2019-12-09 20:43
When deploying Docker containers, the official docs recommend the use of named volumes over bind mounts. Yet, for many containers, the example setup configurations tend to feature bind mounts. It’s hard to blame them, though: bind mounts are a lot more intuitive than volumes, named or otherwise. In this post, I would like to briefly share my thoughts on some arguments related to choosing between the two.
This is just a really brief recap of what bind mounts and volumes are. If you’re looking for more, it’s best to consult the official docs directly.
Bind mounts are mappings between a manually-specified directory or file on the host, and the guest (container). They are declared in the form /path/on/host:/path/on/guest, with the colon separating the host and guest paths.
Volumes are storage directories created and managed by Docker, e.g. via the docker volume create command. Similar to bind mounts, they can be attached to containers in the form volume-name:/path/on/guest.
There are two types of volumes, anonymous and named. This post primarily makes reference to named volumes.
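To illustrate the two forms side by side, here is a quick sketch (the nginx image and paths below are made up for illustration, not tied to any particular service):

```shell
# Create a named volume, managed by Docker
docker volume create site-data

# First -v is a bind mount (host path : guest path);
# second -v is a named volume (volume name : guest path)
docker run -d \
    -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
    -v site-data:/usr/share/nginx/html \
    nginx
```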
The official Docker docs have this to say regarding the benefits of volumes:
Volumes are easier to back up or migrate than bind mounts.
It then goes on about how you can easily do so: spin up a temporary container that mounts both the volume and a bind-mounted backup directory, then tar the contents of the volume into the bound directory.
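A sketch of that procedure with a throwaway container (the volume name my-volume and the Alpine image here are illustrative):

```shell
# Mount the volume read-only alongside a bind-mounted backup
# directory, then archive the volume's contents into the latter
docker run --rm \
    -v my-volume:/from:ro \
    -v "$(pwd)":/backup \
    alpine tar czf /backup/my-volume.tar.gz -C /from .
```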
In comparison, backing up a bind mount involves a single step: tar the contents of the bind-mounted directory.
In my opinion, it’s clear that backing up bind mounts is a less involved process. And restoring from backups is easier too, as it basically involves the same set of steps, just extracting instead of archiving.
Granted, it isn’t very difficult to spin up a simple Alpine or Ubuntu container for backing up files. Still, I believe it’s probably safe to say that compared to volumes, bind mounts are easier to back up and restore.
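For instance, with a bind mount the whole backup-and-restore cycle is just plain tar on the host, no containers involved (all paths below are illustrative):

```shell
# Set up a sample bind-mounted directory with one config file
mkdir -p /tmp/bindmount-demo/data /tmp/bindmount-demo/restore
echo 'server_port=8080' > /tmp/bindmount-demo/data/app.conf

# Back up: archive the directory's contents
tar czf /tmp/bindmount-demo/backup.tar.gz -C /tmp/bindmount-demo/data .

# Restore: the same step in reverse, extracting instead of archiving
tar xzf /tmp/bindmount-demo/backup.tar.gz -C /tmp/bindmount-demo/restore
cat /tmp/bindmount-demo/restore/app.conf
```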
For the same reasons that bind mounts are easier to back up, files shared with containers are also easier to modify. This makes bind mounts a better fit than volumes for files users may need to edit, e.g. configuration files.
Because volumes are fully controlled by the Docker daemon, some operations become simpler or newly possible. For example, volumes can be managed via the Docker CLI and Docker Engine API, though at the time of writing, only a handful of subcommands are available: docker volume create, inspect, ls, prune and rm.
It’s also possible, on Linux hosts, to specify driver options for the built-in local volume driver. Thus, one can mount NFS shares directly via Docker Engine, without having to first mount them on the host. This can simplify the sharing of volumes across multiple hosts (e.g. in a Swarm configuration).
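Such an NFS-backed volume can be declared in a single command (the server address and export path below are hypothetical):

```shell
# Create a volume backed by an NFS share via the local driver;
# Docker mounts the share when a container attaches the volume
docker volume create \
    --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.10,rw \
    --opt device=:/exports/shared \
    nfs-share
```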
For certain applications, it almost never makes sense to have direct access to the raw files (e.g. databases). In such situations, the isolation Docker enforces on volumes can be a plus: it is able to help reduce the odds of accidental modification, be it by users or other programs.
At heart, volumes are really Docker-controlled bind mounts. By default, they live under /var/lib/docker/volumes, and are not meant to be directly modified or controlled by users. Beyond that, both are equally subject to the host filesystem.
When I was just starting out with Docker, and containers in general, I had difficulties wrapping my head around why anyone would prefer using volumes. And it turns out that for simple use-cases like self-hosting various services for personal (friends and family) use, it doesn’t really matter.
Of course, if you’re deploying containers for business applications, you would definitely want to conduct a proper analysis of your system requirements. Otherwise, if you’re just a “casual sysadmin”, I believe the convenience bind mounts offer makes them a compelling choice. And if the data needs to be accessed by multiple containers across hosts (à la Docker Swarm), consider using volumes.
TL;DR: for small, single-machine setups, it probably doesn’t matter — okay to use bind mounts for their convenience. If deploying across multiple machines (via Swarm and co), volumes are likely more suitable.
Just avoid persisting data directly in a container’s write layer.
Permalink - Posted on 2019-11-30 14:20
Traditionally, every programming guide begins with an introduction to Your First Program™: the “Hello, World!” program. This post is basically the equivalent of that: commemorating my foray into the world of blogging.
To be honest, my Time to Hello World is something like eight months: I first started drafting this site in my head around April 2019, when I got bored while studying for my finals.
Since then, I have worked on this site on and off, tweaking the theme incessantly and obsessing over unnecessary details. In other words: I had too much fun messing with my tech stack, and did not actually post anything (the whole point of a blog).
I am a little nervous about creating a blog, and especially tying it to my identity, but I hope that publishing this first post will “get the ball rolling” and make future posts easier.
Thanks for visiting, and please enjoy your stay. 😄
– Christopher Ng