
Getting Firefox Nightly working nice with GNOME

Permalink - Posted on 2017-10-12 16:26, modified at 16:28

I love Firefox. I use [Firefox Nightly](https://nightly.firefox.org) as my daily driver. Recent changes to how Firefox uses its profiles have meant I’ve now uninstalled the stable release of Firefox I had installed with my package manager. The other motivation to get Firefox Nightly to play nice with GNOME was the new Nightly logo.

![Firefox Nightly logo](https://upload.wikimedia.org/wikipedia/commons/thumb/5/5c/Firefox_Nightly_Logo%2C_2017.png/465px-Firefox_Nightly_Logo%2C_2017.png)

*hnnng*

To install Firefox Nightly I downloaded the latest release and extracted it to `/opt/firefox`. I then made the `firefox` folder owned by my user account so Firefox could update itself. GNOME uses `.desktop` files to populate its launcher and dock. I created this file and saved it to `~/.local/share/applications/firefox.desktop`:

```
[Desktop Entry]
Version=1.0
Name=Firefox
Comment=Browse the World Wide Web
Icon=/opt/firefox/browser/icons/mozicon128.png
Exec=/opt/firefox/firefox %u
Terminal=false
Type=Application
Categories=Network;WebBrowser;
Actions=PrivateMode;SafeMode;ProfileManager;

[Desktop Action PrivateMode]
Name=Private Mode
Exec=/opt/firefox/firefox --private-window %u

[Desktop Action SafeMode]
Name=Safe Mode
Exec=/opt/firefox/firefox --safe-mode

[Desktop Action ProfileManager]
Name=Profile Manager
Exec=/opt/firefox/firefox --ProfileManager
```

Hopefully this is useful to others.
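Roughly, the download-and-extract steps above boil down to something like this (the tarball name is just a placeholder, and I’m assuming the archive unpacks to a `firefox` directory):

```shell
# Placeholder filename; grab the current archive from nightly.firefox.org
$ tar -xjf firefox-nightly.tar.bz2
# Move the extracted folder into place
$ sudo mv firefox /opt/firefox
# Give my own user account ownership so Firefox can update itself
$ sudo chown -R $USER:$USER /opt/firefox
```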


Installing PostgreSQL v10

Permalink - Posted on 2017-10-10 14:29, modified at 14:31

I want to install the latest version of PostgreSQL on my server. The first thing to do is to back up the database from my current installation.

```shell
$ pg_dump -U username database_name > backup.sql
```

From here I can compile and install the latest version of PostgreSQL, then restore my previous data.

## Setup

Let’s get a working directory:

```shell
$ mkdir postgresql && cd postgresql
```

And install the necessary dependencies:

```shell
$ sudo apt install build-essential libreadline6-dev zlib1g-dev libssl-dev libxml2-dev libxslt1-dev libossp-uuid-dev libsystemd-dev libproj-dev libpcre3-dev libjson-c-dev
```

PostgreSQL also needs a dedicated user account:

```shell
$ sudo adduser --system postgres
```

### Download PostgreSQL

```shell
$ curl -O https://ftp.postgresql.org/pub/source/v10.0/postgresql-10.0.tar.bz2
$ curl -O https://ftp.postgresql.org/pub/source/v10.0/postgresql-10.0.tar.bz2.sha256
$ shasum -a 256 postgresql-10.0.tar.bz2
$ cat postgresql-10.0.tar.bz2.sha256 # check values match
```

### Download PostGIS

```shell
$ curl -O http://download.osgeo.org/postgis/source/postgis-2.4.0.tar.gz
```

### Download GEOS

`geos` is a dependency for PostGIS.

```shell
$ curl -O http://download.osgeo.org/geos/geos-3.6.2.tar.bz2
```

### Download GDAL

`gdal` is another dependency for PostGIS.

```shell
$ curl -O http://download.osgeo.org/gdal/2.2.2/gdal-2.2.2.tar.xz
$ curl -O http://download.osgeo.org/gdal/2.2.2/gdal-2.2.2.tar.xz.md5
$ md5sum gdal-2.2.2.tar.xz
$ cat gdal-2.2.2.tar.xz.md5 # check values match
```

## Installing

Now we can install PostgreSQL and its dependencies.

### PostgreSQL

```shell
$ tar -xf postgresql-10.0.tar.bz2
$ cd postgresql-10.0
$ ./configure --with-openssl --with-systemd --with-uuid=ossp --with-libxml --with-libxslt --with-system-tzdata=/usr/share/zoneinfo
$ make
$ sudo make install
$ cd ..
```

### GEOS

```shell
$ tar -xf geos-3.6.2.tar.bz2
$ cd geos-3.6.2
$ ./configure
$ make
$ sudo make install
$ cd ..
```

### GDAL

```shell
$ tar -xf gdal-2.2.2.tar.xz
$ cd gdal-2.2.2/
$ ./configure --with-liblzma=yes --with-pg=/usr/local/pgsql/bin/pg_config
$ make
$ sudo make install
$ cd ..
```

### PostGIS

```shell
$ tar -xf postgis-2.4.0.tar.gz
$ cd postgis-2.4.0
$ ./configure --disable-gtktest --with-pgconfig=/usr/local/pgsql/bin/pg_config
$ make
$ sudo make install
$ sudo ldconfig
$ cd ..
```

## Configuring

The last command above, `sudo ldconfig`, makes sure the `.so` files are linked to and loaded correctly, in particular the PostGIS `.so` files; then when we ask PostgreSQL to load a module, it’ll find the right file.

To initialise PostgreSQL, change to the `postgres` user and run `initdb`; we also need to set the right permissions on the folder used for the data:

```shell
$ sudo mkdir /usr/local/pgsql/data
$ sudo chown -R postgres /usr/local/pgsql/data
$ sudo chmod 700 /usr/local/pgsql/data
$ sudo su postgres
postgres@hostname:/home/user/postgresql$ /usr/local/pgsql/bin/initdb -E UTF8 -D /usr/local/pgsql/data
```

### Startup

Let’s use systemd to start and stop PostgreSQL.
Create a new service file at `/etc/systemd/system/postgresql.service`:

```
[Unit]
Description=PostgreSQL database server
After=network.target

[Service]
Type=forking
TimeoutSec=120
User=postgres
ExecStart=/usr/local/pgsql/bin/pg_ctl -s -D /usr/local/pgsql/data start -w -t 120
ExecReload=/usr/local/pgsql/bin/pg_ctl -s -D /usr/local/pgsql/data reload
ExecStop=/usr/local/pgsql/bin/pg_ctl -s -D /usr/local/pgsql/data stop -m fast

[Install]
WantedBy=multi-user.target
```

Then start and enable the PostgreSQL server:

```shell
$ sudo systemctl start postgresql
$ sudo systemctl enable postgresql
```

Now we set up a new database as the `postgres` user using `psql`:

```shell
$ sudo su postgres
postgres@hostname:/home/user/postgresql$ /usr/local/pgsql/bin/psql
postgres=# CREATE ROLE username WITH LOGIN PASSWORD 'password';
postgres=# CREATE DATABASE database_name WITH OWNER username;
postgres=# \q
```

Then, whilst we are still the `postgres` user, load the backup file:

```shell
postgres@hostname:/home/user/postgresql$ /usr/local/pgsql/bin/psql -d database_name --set ON_ERROR_STOP=on -f backup.sql
```

## Further Updates

When a minor version is released, e.g. from 9.6.2 to 9.6.3, updating is very easy: compile the new source code, stop `postgresql.service`, and run `sudo make install`. Then you can simply start `postgresql.service` again, as the data structure remains compatible.

Major updates are more involved. We need to dump our database as we did at the start of this article, then move the `/usr/local/pgsql` folder to a backup location, and then install the new version as this article describes.
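For a minor bump, that boils down to something like this (the version number here is just an example, and I’m assuming the new tarball has already been downloaded and extracted):

```shell
# Build the new minor release with the same configure flags as before
$ cd postgresql-10.1
$ ./configure --with-openssl --with-systemd --with-uuid=ossp --with-libxml --with-libxslt --with-system-tzdata=/usr/share/zoneinfo
$ make
# Stop the running server, install over the old binaries, start it again
$ sudo systemctl stop postgresql
$ sudo make install
$ sudo systemctl start postgresql
```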


Updating an Eloquent model to use timestamps

Permalink - Posted on 2015-07-08 20:40, modified at 20:43

When I upgraded my website to use Laravel I already had a database with blog posts. As part of the schema one of the columns was called `date_time`, was of type `int(11)`, and contained plain [unix timestamps](https://en.wikipedia.org/wiki/Unix_time). To speed up migrating my code base to Laravel I kept this column and set my articles model to not use Laravel’s timestamps. I’ve now got round to updating my database to use said timestamps. Here’s what I did.

First I created the relevant new columns:

```sql
ALTER TABLE `articles` ADD `created_at` TIMESTAMP DEFAULT 0;
ALTER TABLE `articles` ADD `updated_at` TIMESTAMP;
ALTER TABLE `articles` ADD `deleted_at` TIMESTAMP NULL;
```

I needed to add `DEFAULT 0` when adding the `created_at` column to stop MariaDB setting the default value to `CURRENT_TIMESTAMP`, as well as adding an extra rule of updating the column value on a row update. Then I needed to populate the column values based on the soon-to-be-redundant `date_time` column. I took advantage of the fact the values were timestamps:

```sql
UPDATE `articles` SET `created_at` = FROM_UNIXTIME(`date_time`);
UPDATE `articles` SET `updated_at` = FROM_UNIXTIME(`date_time`);
```

Now I can delete the old `date_time` column:

```sql
ALTER TABLE `articles` DROP `date_time`;
```

Next I had to get Eloquent to work properly. I wanted `/blog` to show all articles, `/blog/{year}` to show all articles from a given year, and `/blog/{year}/{month}` to show all articles from a given month. My `routes.php` handled this as `Route::get('blog/{year?}/{month?}', 'ArticlesController@showArticles');`. I then defined a query scope so I could do `$articles = Articles::date($year, $month)->get()`. Clearly these variables could be null, so my scope was defined as follows:

```php
public function scopeDate($query, $year = null, $month = null)
{
    if ($year == null) {
        return $query;
    }
    $time = $year;
    if ($month !== null) {
        $time .= '-' . $month;
    }
    $time .= '%';

    return $query->where('updated_at', 'like', $time);
}
```

The logic takes advantage of the fact that `$year` can’t be `null` whilst `$month` is not `null`; i.e. when there is no year we just return an unmodified query. And now my blog posts are handled by Laravel properly.
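As an aside, before dropping the old `date_time` column it’s worth spot-checking that the backfill worked, with something like this (a sketch, assuming the MariaDB command-line client and an `id` column on the table):

```shell
# Compare the original unix timestamps against the new TIMESTAMP columns
$ mysql -u username -p database_name \
    -e "SELECT id, date_time, FROM_UNIXTIME(date_time) AS converted, created_at, updated_at FROM articles LIMIT 5;"
```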


Hardening HTTPS with nginx

Permalink - Posted on 2015-04-14 15:35

I’ve improved my HTTPS setup with nginx recently. For a start I’ve organised the files better. For a *TL;DR* I’ve put the pertinent files [on GitHub](https://github.com/jonnybarnes/nginx-conf).

First I have `conf/nginx.conf`, the main configuration file, which defines lots of mundane non-security related things. Then the penultimate directive is `include includes/tls.conf;`. This defines the various TLS rules globally; in particular this allows the session cache to be shared amongst several virtual servers. Let’s take a look at what else is done here:

```nginx
# Let’s only use TLS
ssl_protocols TLSv1.1 TLSv1.2;

# This is sourced from Mozilla’s Server-Side Security – Modern setting.
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;

# Optimize SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive SSL handshakes.
# The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
# By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
# Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
ssl_session_timeout 24h;

# SSL buffer size was added in 1.5.9
ssl_buffer_size 1400; # 1400 bytes to fit in one MTU

# Use a higher keepalive timeout to reduce the need for repeated handshakes
keepalive_timeout 300; # up from 75 secs default

# SPDY header compression (0 for none, 9 for slow/heavy compression). Preferred is 6.
#
# BUT: header compression is flawed and vulnerable in SPDY versions 1 - 3.
# Disable with 0, until using a version of nginx with SPDY 4.
spdy_headers_comp 0;

# Diffie-Hellman parameter for DHE ciphersuites
# `openssl dhparam -out dhparam.pem 4096`
ssl_dhparam includes/dhparam.pem;
```

As you can see, I don’t support any version of SSL. It’s [insecure](http://disablessl3.com). I’ve also dropped support for TLSv1; I’m still undecided on that. Remember you are going to need to generate your own `dhparam.pem` file. This command can take a *long* time.

In each included virtual server I further include two other files, `stapling.conf` and `security-headers.conf`. The first file is very self-explanatory and simply enables [OCSP Stapling](https://en.wikipedia.org/wiki/OCSP_stapling). As far as I can tell for nginx, if you use virtual servers, one of them needs to be designated a `default_server` and at least this one needs stapling enabled in order for stapling to work for any other virtual server. Feedback on this point is welcome. The second file, `security-headers.conf`, is where I improve the security of the sites using several HTTP headers. I’ve been particularly inspired by [securityheaders.io](https://securityheaders.io).
Let’s take a look:

```nginx
# The CSP header allows you to define a whitelist of approved sources of content for your site.
# By restricting the assets that a browser can load for your site, like js and css, CSP can act as an effective countermeasure to XSS attacks.
add_header Content-Security-Policy "default-src https: data: 'unsafe-inline' 'unsafe-eval'" always;

# The X-Frame-Options header, or XFO header, protects your visitors against clickjacking attacks.
add_header X-Frame-Options "SAMEORIGIN" always;

# This header is used to configure the built in reflective XSS protection found in Internet Explorer, Chrome and Safari (Webkit).
# Valid settings for the header are 0, which disables the protection, 1 which enables the protection and 1; mode=block which tells
# the browser to block the response if it detects an attack rather than sanitising the script.
add_header X-Xss-Protection "1; mode=block" always;

# This prevents Google Chrome and Internet Explorer from trying to mime-sniff the content-type of a response away from the one being
# declared by the server. It reduces exposure to drive-by downloads and the risks of user uploaded content that, with clever naming,
# could be treated as a different content-type, like an executable.
add_header X-Content-Type-Options "nosniff" always;
```

This is unashamedly copied from [Scott Helme](https://scotthelme.co.uk/hardening-your-http-response-headers/). There are two more headers I use, but these are used on a site-by-site basis and are thus set in the virtual server files themselves. This is because once we use these headers we can’t really go back to having a non-HTTPS version of the site. You can see them in the `sites-available/https.jonnybarnes.uk` file. They are the HSTS and HPKP headers.

HSTS is easy: it just tells the browser to only use `https://` links for the domain. This is cached by the browser, and can even be [pre-loaded](https://hstspreload.appspot.com/).

HPKP is a little more involved. The idea with HTTP Public Key Pinning is to try and stop your site being the subject of a man-in-the-middle attack. In such an attack a different certificate than yours is presented to the user; in particular, the public key included in that certificate is not the public key associated with my private key. What HPKP does is take a pin, or hash, of the public key and transfer that information in a header. This value is then cached by the user’s browser, and on any subsequent connection the browser checks the provided public key matches this locally cached pin. For fallback purposes a backup pin must also be provided. This backup pin can be derived from a public key contained in a CSR; in particular, this CSR needn’t have been used to get a signed certificate from a CA yet. [Scott Helme](https://scotthelme.co.uk/hpkp-http-public-key-pinning/) has an excellent write-up of this process. Given either your current site certificate or CSRs for future certificates, it’s simply a few `openssl` commands to get the relevant base64 encoded pin, then a [single `add_header`](https://github.com/jonnybarnes/nginx-conf/blob/2f9c5c0fa6f2e6e7958b40beb928e1973cc5c225/sites-available/https.jonnybarnes.uk#L31) directive in nginx.

The end result of all this is a more secure site, hopefully. One issue to note for now is Mozilla Firefox doesn’t support HPKP yet and you’ll get error entries in the console log regarding an invalid Public-Key-Pins header. This should get fixed in time.
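For reference, generating the pins amounts to something like the following (a sketch; the file names are placeholders for your own certificate and backup CSR):

```shell
# Pin derived from the current certificate’s public key
$ openssl x509 -in example.com.crt -pubkey -noout \
    | openssl rsa -pubin -outform der \
    | openssl dgst -sha256 -binary \
    | openssl enc -base64

# Backup pin derived from a CSR that needn’t have been sent to a CA yet
$ openssl req -in backup.csr -pubkey -noout \
    | openssl rsa -pubin -outform der \
    | openssl dgst -sha256 -binary \
    | openssl enc -base64
```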


Getting IPv6 Support

Permalink - Posted on 2014-06-23 14:56

Given the impending [doom of IPv4](https://en.wikipedia.org/wiki/IPv4_address_exhaustion), I thought I’d try and set up my site to be accessible over IPv6. Thankfully my webhost has dual-stack connectivity in their datacenter. They also assign IPv6 addresses for free; in fact they gave me 65,536 addresses.[^1]

Getting `nginx` set up was trivially easy: I re-compiled the software adding the `--with-ipv6` flag, then added the line `listen [::]:80` to my vhost files (or indeed `listen [::]:443`). This was in addition to the usual listen directive.

Getting IPv6 configured correctly on the system took a little more working out. In the end I think I have simplified my configuration even for IPv4. I use Debian 7, which comes with the newer iproute2 package to manage network connections, with the stored settings in `/etc/network/interfaces`. This is mine:

```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# This line makes sure the interface will be brought up during boot
auto eth0
allow-hotplug eth0

# The primary network interface
iface eth0 inet static
    address 85.17.141.27
    netmask 255.255.255.0
    gateway 85.17.141.254
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 85.17.150.123 85.17.96.69 85.17.150.123 62.212.64.122
    dns-search localdomain
    # up commands
    up sysctl -w net.ipv6.conf.eth0.autoconf=0
    up sysctl -w net.ipv6.conf.eth0.accept_ra=0
    up ip addr add 85.17.141.33/24 dev eth0
    up ip -6 addr add 2001:1af8:4100:a00e:4::1/64 dev eth0
    up ip -6 ro add default via 2001:1af8:4100:a00e::1 dev eth0
```

This sets up the default IPv4 address and a default gateway. Then once the interface is brought up at boot time the `ip` command, which is part of the iproute2 package, is invoked to add a second IPv4 address, and then to add an IPv6 address and the default route to use when communicating over IPv6. You’ll notice I also use the `sysctl` command to change some system settings. These stop the system trying to assign itself an IPv6 address and stop it listening to router advertisements; I think these were causing my IPv6 connection to drop. Now my system is set up as so:

```shell
➜  ~  ip addr show eth0
2: eth0: mtu 1500 qdisc mq state UP qlen 1000
    link/ether d4:ae:52:c5:d2:1b brd ff:ff:ff:ff:ff:ff
    inet 85.17.141.27/24 brd 85.17.141.255 scope global eth0
    inet 85.17.141.33/24 scope global secondary eth0
    inet6 2001:1af8:4100:a00e:4::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::d6ae:52ff:fec5:d21b/64 scope link
       valid_lft forever preferred_lft forever
```

and

```shell
➜  ~  ip -6 ro
2001:1af8:4100:a00e::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via 2001:1af8:4100:a00e::1 dev eth0 metric 1024
```

Even though I don’t have IPv6 at home yet, my site should be [connectable over IPv6](http://ready.chair6.net/?url=https://jonnybarnes.net/).

[^1]: I was given the IP addresses `::0000` to `::FFFF`, that’s 2^16 addresses.

*[IP]: Internet Protocol
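To check it’s all working, something like the following should do (a sketch; run the `curl` from any machine that has IPv6 connectivity):

```shell
# Confirm the address and default route are present
$ ip -6 addr show dev eth0
$ ip -6 route show default

# Force an IPv6-only request to the site and just fetch the headers
$ curl -6 -I https://jonnybarnes.net/
```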


The micropub API and security

Permalink - Posted on 2014-02-16 15:56

I’ve talked before about the [IndieWeb](https://jonnybarnes.net/blog/2013/12/indieweb-and-posse) and how it’s important to own your identity online. What is the point, however, if there is no social nature to all this? We need to interact with each other. Webmentions to the rescue: these allow one website to “ping” another, a sort of notification system. This has been extended to the major social network silos by [snarfed](https://snarfed.org) with his excellent [bridgy](https://www.brid.gy) service. This is basically a shim that makes it look like the silos use mf2 + webmentions, and very nicely done too. I *still* need to implement this on my own site. I’m working on it.

What people are also working on is something called the [micropub](http://indiewebcamp.com/micropub) API. This would allow one site or service to post to another. You could log into my site, and post a note to your site. This obviously involves authentication, which is a well discussed problem. The indieweb community, and [Aaron](http://aaronparecki.com/) in particular, have developed a service called [IndieAuth](https://indieauth.com/). This allows you to authenticate as yourself with your own domain, by linking bijectively with various silos.

To summarise the process: when you log into my site with your domain, I go to your domain and look for an `authorisation endpoint`, either in an `HTTP Link` header or in a `<link>` element in the HTML. This endpoint is usually `https://indieauth.com/auth`. You authorise and get redirected back to my site, along with an auth code. I then look for a `token endpoint`, again on your site, and make a request for a token, sending the auth code I received. Your site verifies this code with the `authorisation endpoint` and then generates its own OAuth token, which is sent back to my site. I can then use this token when making API requests to your `micropub endpoint`.

Security is a concern here. The most important step I take is to store your token in an encrypted cookie. By not storing the token in my webapp, if my site becomes compromised your token isn’t automatically compromised as well. The other talking point regarding security is the revocation of tokens. This isn’t an issue for micropub clients; it is an issue for our own sites, the micropub endpoints. We need a way of managing the OAuth tokens we have generated, so we can see and control which micropub clients we’ve authorised. I wonder if this is something that can be incorporated into IndieAuth? Once authorisation has occurred, the endpoint could request that IndieAuth generates an OAuth token, and that token gets sent back to the client. Then when the micropub client makes an API request, the endpoint checks the OAuth token with IndieAuth. Then we could see all our “active” tokens on IndieAuth and revoke those we no longer wish to be active. Further consideration would be needed as to how to implement this, particularly details like which roles a token is valid for, or whether it has an expiration date. How would this information be associated with a token? It could simply be encoded into the token itself, which would probably be the easiest solution to implement initially. It’s how I generate the tokens on my site at the moment. Maybe [Aaron](http://aaronparecki.com/) could chime in.
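As a rough illustration of the discovery step (example.com is just a placeholder), it amounts to checking the HTTP headers and the HTML of the user’s homepage:

```shell
# Look for an authorisation endpoint advertised in the HTTP Link headers
$ curl -sI https://example.com/ | grep -i '^link:'

# ...or advertised via a <link rel="authorization_endpoint"> element in the HTML
$ curl -s https://example.com/ | grep -io '<link[^>]*authorization_endpoint[^>]*>'
```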


IndieWeb and POSSE

Permalink - Posted on 2013-12-19 12:24

I’m trying to adhere to the IndieWeb principles, as my homepage states. The first step was to get the ability to create notes, or micro-blogging, and then syndicate these to other silos. The most popular one by far, and the one I interact with most, is Twitter. So that’s what I’m looking to achieve initially, and then I can integrate support for other silos. Currently things are going well. My code is capable of syndicating notes to Twitter with a permashortcitation. If the note is too long it will elide at the appropriate word boundary and then add a permalink to the note. Further, I can specify the URL of a particular tweet and the syndicated tweet will then be a reply to the original tweet, allowing for threading on Twitter. This is done by combining two pieces of software: the main code that runs my site, which is where the actual interaction with Twitter occurs, and my [POSSE library](https://github.com/jonnybarnes/posse), where the preparation work is done. This is what creates the correctly formatted tweet and works out the reply-to status id. I don’t think it’s ready for other people to use yet though, and there are still some features to be added.


Embedding Google Maps

Permalink - Posted on 2013-10-30 14:30

When you want to embed a map of a location on a webpage, the first place to go is [Google Maps](https://www.google.co.uk/maps). This process is slightly complicated for me as I am using the [Maps Preview](https://www.google.co.uk/maps/preview), which removes the default sharing options. Maybe this is a plan by Google to move all developers onto the Google Maps JS API, though how that benefits normal users who don’t know what JavaScript is, I don’t know. The original method was to generate an `<iframe>` HTML block to put in your site. This was, to me, essentially undecipherable: all the options were cryptic URL parameters. Using a little JavaScript is much simpler. You define all your options and then add your map to the relevant `<div>`. Here is an example:

See the Pen bfdLj by Jonny Barnes (@jonnybarnes) on CodePen


Goodbye DumbQuotes

Permalink - Posted on 2013-10-14 15:59

I’ve decided to stop using my [DumbQuotes](https://github.com/jonnybarnes/dumbquotes) library on my site. I found that there were too many issues, primarily with raw HTML and code blocks: the straight quotes in these sections all needed escaping so my library didn’t mangle anything. So I am now manually typing curly quotes using the appropriate keyboard shortcut. This is much simpler to maintain and keeps my markdown clean. I have also slightly redesigned my site. I was using Skolar and Myriad Pro but found this a little clichéd. I am now using [Prenton](https://typekit.com/fonts/prenton) for titles and [Livory](https://typekit.com/fonts/livory) for my body text. I’m liking the look so far. For readability I’ve also increased the font size slightly.


How should non-profits spend money

Permalink - Posted on 2013-09-25 14:45

Dan Pallotta gives an excellent [TED](http://www.ted.com/) talk about the differences in how the non-profit and for-profit sectors spend money. In summary, he suggests that our current attitude that charities must spend as little on overhead as possible is actually limiting the amount of good they can do. Who cares about the overhead? What matters is how much money actually gets spent on doing good, and in order to grow that number, money will have to be spent. This shouldn’t be stigmatised like it currently is.


HTTPS certificates with StartSSL, a guide by konklone

Permalink - Posted on 2013-09-24 10:48

I have no idea where Eric Mill got his pseudonym of konklone from, but this is a great article he’s written about using StartSSL certificates to secure your website. The great thing about [StartSSL](https://www.startssl.com/) is that their certificates are free. The previous SSL certificates I’ve used have cost money, albeit only in the region of about £10/year; free is still free. As far as I can tell, StartSSL will let you have as many free (domain/email validated) certificates as you want. I have two registered so far. I can highly recommend StartSSL to anyone thinking of getting an SSL certificate.


One Second on the Internet

Permalink - Posted on 2013-08-12 12:10

It just baffles my mind how sites like Facebook and Google deal with that much data. It’s truly astonishing.


Oh Noes, iPad Sales Struggling!

Permalink - Posted on 2013-07-24 08:24

Not really, Apple still sold 14.6 million iPads! That’s a lot of iPads. Apple also sold a record Q2 high of 31.2 million iPhones. There is still the issue of Samsung’s supposed [dominance](http://bgr.com/2013/07/23/samsung-most-profitable-consumer-electronics-company-apple/) of the market, but they sell a much wider variety of products than Apple; of course they’ll ultimately sell more devices.


IndieWeb and Short URLs

Permalink - Posted on 2013-07-11 17:55

Here I shall use the terms URI and URL interchangeably to mean the same thing; I appreciate there are subtle differences.

The IndieWeb is a fantastic idea. The web itself is inherently open. No one owns it, no one directly controls it. However, if you aren’t careful what services you use on the web then it can effectively end up that way. We all use the web in a primarily social way these days, social networks if you will. The big three players on the social web are Facebook, Twitter, and Google with their Google+ service. They want you to spend as much of your time as possible on their services in order to maximise their advertising revenues. This doesn’t play nicely with the inherently open and interoperable foundations of the web, foundations without which these big players wouldn’t exist. And thus the IndieWeb is born: a desire to own your social identity and share however little or much of that social presence with these big players as you want. Which I think is absolutely right. Just have a look at what people are [saying](https://medium.com/writers-on-writing/336300490cbb) [about](https://medium.com/indieweb-thoughts/9d0e36524dbf) [Medium](https://medium.com/surveillance-state/19a5db211e47).

But what of URL shortening services? Some people seem to think that having your very own short URL helps in this cause. Which I suppose it does, but only to a small degree. Why do we even need to shorten web addresses? The only situation where I think it would be necessary is posting/syndicating to Twitter. Any other service has ample character space to post the full URL, or is sensible and uses annotations. On Twitter, however, any link, regardless of how short it is, gets wrapped up in their t.co service. I therefore don’t currently see any compelling reason to run your own URL shortening service other than simply because you can.

*[URL]: Uniform Resource Locator
*[URI]: Uniform Resource Identifier


The “Failed” State

Permalink - Posted on 2013-07-03 20:59

A thorough examination of yet another way that the U.S. attempts to justify its foreign policy, namely by claiming a sovereign state is *failing* and needs saving.

> Luckily, we can pinpoint exactly where it all began – right down to the words on the page. The failed state was invented in late 1992 by Gerald Helman and Steven Ratner, two US state department employees, in an article in – you guessed it – Foreign Policy, suggestively entitled Saving failed states. With the end of the cold war, they argued, “a disturbing new phenomenon is emerging: the failed nation state, utterly incapable of sustaining itself as a member of the international community”. And with that, the beast was born.


Hoefler & Frere-Jones release their webfont service

Permalink - Posted on 2013-07-02 13:11

This looks like an awesome service. The only issue I have is the cost: it’s $99+ for the subscription and then you have to license the fonts you want to use. The service does, however, make others look like amateurs.


James Gandolfini has died

Permalink - Posted on 2013-06-20 15:41

Well, if ever I have an excuse to watch [The Sopranos](https://en.wikipedia.org/wiki/The_Sopranos)...


The Science of Why We Don’t Believe Science

Permalink - Posted on 2013-06-20 13:58

A great article that I think hints at a lot of things about being human: that we have these feelings that sometimes conflict with a cold scientific view of the world. My favourite quote:

> Head-on attempts to persuade can sometimes trigger a backfire effect, where people not only fail to change their minds when confronted with the facts—they may hold their wrong views more tenaciously than ever.


Introducing Dumbquotes

Permalink - Posted on 2013-06-19 11:24

This is slightly re-inventing the wheel, but I have released a new package called [Dumbquotes](https://github.com/jonnybarnes/dumbquotes). The idea is to replace simple typographic characters with their more correct forms, such as replacing a ' with ‘ or ’. This also gave me the chance to try and write a package, dealing with making sure it’s `psr-0` compliant and has associated unit tests to run with `phpunit`. The package will deal with apostrophes, quotes, dashes, and ellipses.

There are certain issues. Ultimately this is designed to deal with plain text such as a markdown document. It does **not** work with HTML. Trying to parse HTML with regex will bring the [return of Cthulhu](http://stackoverflow.com/a/1732454/12854). However, once you deal with HTML directly, things get a little complicated. Consider the following sentence that could appear in some HTML: `<p>Mary said "How <em>did</em> she do that?"</p>`. We want to turn this into `<p>Mary said “How <em>did</em> she do that?”</p>`. This is complicated by the fact we can’t just search for a string of text containing two double quotes like so: `/"(.*?)"/`. The sentence doesn’t actually appear in the HTML DOM; we actually have three blocks of text: `Mary said "How `, `did`, and ` she do that?"`. To concatenate those into a single string, and then put the tags back in the right place, seems a very difficult task. So I have decided to write the dumbquotes parser to be applied before the markdown transform is applied.

*[HTML]: HyperText Markup Language
*[DOM]: Document Object Model


Google buys Waze

Permalink - Posted on 2013-06-11 15:54

I’ve been using Waze for a while now and it’s generally a very good app. Being bought out by Google makes me uneasy given the track record Google has in sun-setting products.