
CJ-Jackson Blog


My thoughts on gRPC

Permalink - Posted on 2020-05-29 18:36, modified on 2020-05-31 19:52

Recently I’ve been trying out gRPC. On first impression, it was really fun to work with. Once I got the hang of writing a .proto file and using the code generator protoc, all I had to do was implement the interface (the one with the Server suffix), which I find very easy to do with the IDE of my choice (GoLand): just press Alt+Enter and pick “implement interface”, and it generates the boilerplate for you so you don’t have to write it yourself. Once I got past that and had the microservices all up and running, all I had to do was set up the client and execute the method. I didn’t even have to think about serialization, as protoc takes care of that for you, which is very nice; all I had to do was write the .proto file.

I do have an example of gRPC at https://github.com/CJ-Jackson/customer-waiter-chef/tree/master/grpc ; I just wanted to get a feel for writing microservices.

At work, I have worked with a large JSON REST API, where, as silly as it sounds, I had to write an XML file to aid deserialization of the JSON. I am not going to lie: it’s just plain stupid and was not fun at all, and a real pain to debug as well. Here is an example.

<?xml version="1.0" encoding="UTF-8" ?>
<class name="Data" exclusion-policy="ALL">
    <property name="leg" serialized-name="leg" type="integer" expose="true" />
    <property name="references" serialized-name="references" expose="true" />
</class>

And here is the Go counterpart to the above.

type Data struct {
	Leg        int64    `json:"leg"`
	References []string `json:"references"`
}

And the same for .proto

message Data {
    int64 leg = 1;
    repeated string references = 2;
}


As you can see, it’s not as painful as writing it in XML. But with gRPC, I don’t even have to think about serialization, as I mentioned earlier. You might want to check out quicktype.io; it converts JSON into gorgeous, type-safe code in many languages, so you don’t have to do things the hard way. (They don’t support PHP at the time of this writing.)

One could argue I should have used PHP’s json_encode and json_decode; no, they’re terrible with large data structures, but that will be another story to tell.

I can see myself using gRPC for communication between the back-office app server and individual CLI applications used with CRON; I believe that will be a better approach than Symfony’s one-size-fits-all bin/console.

Panoramic View of Elizabeth Castle

Permalink - Posted on 2020-05-10 19:54, modified at 19:55

Elizabeth Castle, St Helier

Revamp Infrastructure

Permalink - Posted on 2020-04-13 16:11

I decided to revamp my existing infrastructure into something I can easily manage and set up backups and checkpoints for. I was running the now-discontinued Antergos, which is based on Arch Linux. Arch is a brilliant operating system, but it’s not very suitable for the enterprise, because enterprises often prefer mature applications; with Arch Linux you are always given the latest version, which is too new for the enterprise, especially for databases. So I thought it would be better to have a new infrastructure setup.

I decided to replace Antergos with Microsoft Hyper-V Server 2019, which is a cut-down version of Windows Server with just Hyper-V and nothing else. Once I got that up and running, managing the server was a walk in the park.

Hyper-V Manager

As you can see in the screenshot, I have set up four different virtual machines. ArchApp obviously runs on Arch Linux and serves as the user-facing production application server; that is where the code base for this website is hosted. ArchPortal is the backdoor for the administrator (that’s me) to get in behind the firewall from outside the premises, so the admin can manage the other servers on the network. The last two, UbuntuDev and UbuntuProd, are the data servers, one for development and the other for production. They both run Ubuntu 18.04 LTS (long-term support) and have Mongo, Postgres and Redis installed, all locked down to a specific version; I’ll only upgrade them when I’m ready to do so.

I did try to use Docker & Podman, but they kept breaking my development server when I tried to run a backup. They did run well on my production server, but I decided not to use them anymore, as I find them very difficult to monitor, and they probably won’t actually get used by the enterprise, especially Docker. The enterprise just prefers something that is very easy to monitor and does not break down too easily.

I was able to run a backup on both UbuntuProd and UbuntuDev without any issue. As Mongo and Postgres are running directly on the virtual machine, running the backup was easy; all I had to do was create two scripts, one on the server and one on the client.

Server-Side Script

cd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
mongodump # writes the Mongo dump to ./dump by default
sudo su - postgres -c "pg_dumpall" > postgres.sql
tar -zcvf ../backup-$(date '+%s').tar.gz ./
rm -rf dump
rm postgres.sql

Client-Side Script

cd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
ssh serveradmin@ubuntuprod.lan "~/backup/script"
scp "serveradmin@ubuntuprod.lan:~/backup-*.tar.gz" .
ssh serveradmin@ubuntuprod.lan "rm -rf ~/backup-*.tar.gz"

It just backs up everything in Postgres and Mongo, including the files stored in GridFS, just like that. 🙂

Experiment with Microservice and Message Bus (NSQ)

Permalink - Posted on 2019-08-28 17:08, modified on 2020-04-06 18:33

Because I didn’t have much practical experience with microservices and I never had the opportunity to do it at the company I currently work for, I thought I’d do a little experiment that involves the use of microservices and a message bus.

Link to source code

The waiter and the chef are microservices; the customer is just the CLI that talks to the waiter to make an order, and then the waiter sends the order to the chef using the message bus (NSQ). It’s pretty simple and basic really; all it does is send “Pepperoni Pizza” over the bus. But I only needed to prove to myself that I can build microservices, and I believe I have succeeded in that, YAY. 😄

What I like about NSQ, compared to raw TCP/IP, is that I don’t have to manually set up a listener and manage the buffer. I could do it, but it’s a little bit tedious; instead I can easily set up a Publisher, the one that sends the information, and a Consumer, the one that receives the information.
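The shape of it can be sketched with a plain Go channel as an in-memory stand-in for the bus (a toy model for illustration, not the go-nsq API):

```go
package main

import "fmt"

func main() {
	// A buffered channel stands in for an NSQ topic.
	orders := make(chan string, 1)

	// Publisher (the waiter): sends the order onto the bus.
	orders <- "Pepperoni Pizza"

	// Consumer (the chef): receives the order from the bus.
	fmt.Println("chef received:", <-orders)
}
```

With real NSQ, the publisher and consumer live in separate processes and nsqd sits in the middle, but the division of labour is the same.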

Just a random update

Permalink - Posted on 2019-08-10 16:29, modified at 16:33

It’s been a little while since I made my last blog post; well, I have been a little busy lately with work, working out at the gym, learning to play the guitar, learning a bit of Japanese (trying to master Hiragana ひらがな and Katakana カタカナ is a little bit tricky, but hopefully I’ll get there), and playing a couple of video games, mainly Crash Team Racing Nitro-Fueled and Super Mario Maker 2.

I also took the time to learn Rust, and I enjoyed it; it’s just that the IDE I’m working with struggles with external libraries, so I’m going to stay off Rust until that is fixed. It’s just difficult for me to stay productive without auto-complete; I can’t keep flipping back and forth to the documentation, it will burn me out, and that’s no good because I need to work fast. 😄

I’ll take time to learn other programming languages; I just don’t want to be that person who uses JavaScript for absolutely everything, it’s just unrealistic. If I were developing a video game, I would use C++; it’s a big language with no garbage collection, and I will learn it when I get the time and overcome the fear, but hopefully it will be fun. I will also take the time to learn Ruby.

Deploying Docker Git Containers Remotely

Permalink - Posted on 2019-04-14 18:14, modified on 2019-07-22 12:58

If I were to build a Docker image from a git repository that requires no credentials, I could use the following command.

$ docker build -t example/image https://github.com/docker/rootfs.git

But what if it does require credentials and I want to use SSH public key authentication? The thing is, the Docker daemon might not have access to the private key used to log in over SSH. But there is a solution: one could use the git command to create the archive (or tarball) and then pass it to docker, for example.

$ git archive --format tar.gz --output /tmp/example.tar.gz --remote git@github.com:docker/rootfs.git master

$ docker build -t example/image - < /tmp/example.tar.gz

If I wanted to deploy to a remote server, I could make use of scp (to copy) and ssh (to build), for example.

$ git archive --format tar.gz --output /tmp/example.tar.gz --remote git@github.com:docker/rootfs.git master

$ scp /tmp/example.tar.gz user@remoteserver.lan:/tmp/example.tar.gz

$ ssh user@remoteserver.lan "docker build -t example/image - < /tmp/example.tar.gz"

Update: This command will not work with podman; you have to extract the tarball first.

I don’t need to set up port forwarding or do anything messy, like running ssh in the background, which I would have to close when I’m done with it; I’d just rather not do it that way. Using port forwarding to upload a tarball, honestly, I find that clumsy. I prefer clean, simple and elegant solutions to a complex problem.

Dealing with Dynamic IP Addresses

Permalink - Posted on 2019-03-24 19:01, modified at 19:04

For dealing with dynamic IP addresses, the most elegant solution I could find is to place the IP address of the network into a simple file using a shell script, and sync the folder across the different machines that you trust using file sync software like Syncthing or Resilio Sync. Here is an example of a script:

cd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
IP=$(dig -4 +short myip.opendns.com @resolver1.opendns.com)
if [ "$IP" != "$(cat IpAddress/Server)" ]; then
        echo "$IP" > IpAddress/Server
        # Tips: you could do some fancy curl stuff here, e.g. cloudflare API ;)
fi

And set up a cron job to run the script about every 15 minutes.

Yes, one could say you can use DynDNS, but there are a few disadvantages to that approach: for example, you can’t control where you’re sourcing the IP address from, you end up distributing the IP address globally, which may not be desirable, and you’re handing over control to a third party.

How would I use the file with SSH? That’s easy; I’ll show you an example.

$ ssh -o 'HostKeyAlias myhost' username@$(cat ~/IpAddress/Server)

It’s really that simple; just make sure you add the alias to ~/.ssh/known_hosts and you’re done. 🙂

On Swapping Mongo Driver

Permalink - Posted on 2019-03-10 17:13, modified on 2019-03-24 19:04

I recently swapped the MongoDB driver from a third-party one to the official driver. The process mostly went smoothly, because I was well disciplined in writing high-quality code and sticking to good practice; otherwise it would have taken me a lot longer to complete.

I did have a few issues along the way.

Refactoring Gridfs Collections

The third-party MongoDB driver somewhat conforms to the GridFS specification: it allows you to specify the collections for files and chunks, which is nice, but the official driver does not allow you to specify the collections. Instead, you have to specify the bucket name; the default value is ‘fs’ and I left it like that.

The naming convention for the collections is ‘bucketName.files’ and ‘bucketName.chunks’. As I mentioned earlier, I left it at the default, so I had to rename the collections to ‘fs.files’ and ‘fs.chunks’. When I did that live, I had to deploy immediately so there was little downtime.

I also had a few data type mismatches, so I had to write a script and execute it manually in Robo 3T, replacing NumberInt (int32) with NumberLong (int64), and that fixed the mismatch.

db.getCollection('fs.files').find({}).forEach(function(x) {
        x.chunkSize = new NumberLong(x.chunkSize);
        x.length = new NumberLong(x.length);
        db.getCollection('fs.files').save(x);
});

The third party allowed access to the metadata, but the official driver did not, so I had to create a clone of ‘fs.files’ called ‘filesMeta’ so I can still access and update the metadata; I also kept the IDs 1:1 with each other.

Data type mismatch with the key of the map

The third party allows you to use any data type as the key; the official driver only allows strings. So I had to change the data type of the key from int to string, and problem solved. It’s not too bad after all: I had already attached a method to the map, so all I had to do was convert the int to a string in that method. I’m using Golang, which is strongly typed, so I had to do an explicit conversion, but I don’t mind; I like clarity, and clarity is always good.

// Before
type PageCollection struct {
	Ref        string       `bson:"Ref"`
	Collection map[int]Page `bson:"Collection"`
}

func (p PageCollection) GetPage(pageNumber int) Page {
	// A missing key yields the zero value of Page.
	return p.Collection[pageNumber]
}

// After
type PageCollection struct {
	Ref        string          `bson:"Ref"`
	Collection map[string]Page `bson:"Collection"`
}

func (p PageCollection) GetPage(pageNumber int) Page {
	// The key is now a string, so convert the int first.
	return p.Collection[fmt.Sprint(pageNumber)]
}
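
The conversion can be exercised in a self-contained sketch (Page here is a minimal stand-in type, not the real one):

```go
package main

import "fmt"

// Page is a minimal stand-in for the real type.
type Page struct{ Title string }

type PageCollection struct {
	Ref        string
	Collection map[string]Page
}

// GetPage converts the int to a string, matching the stored keys.
func (p PageCollection) GetPage(pageNumber int) Page {
	return p.Collection[fmt.Sprint(pageNumber)]
}

func main() {
	pc := PageCollection{Collection: map[string]Page{"1": {Title: "Home"}}}
	fmt.Println(pc.GetPage(1).Title) // Home
}
```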

The most complained-about language is JavaScript, which is loosely typed, the opposite of strongly typed, and trust me, a data type mismatch takes longer to figure out in JS than it does with Go!

Jersey Waves

Permalink - Posted on 2019-03-09 22:22, modified on 2019-06-02 15:05

I could stand there and watch the waves all day.

My own take on GVM

Permalink - Posted on 2019-03-01 14:21, modified on 2019-03-09 11:48

This is the script I use to manage Go SDKs; it’s built on top of the official Google golang.org/dl way. I run it inside Windows Subsystem for Linux (WSL), and it manages both the WSL and Windows installs in one call from inside WSL.


go get golang.org/dl/$1
GOOS=windows go build -o /d/go/bin/$1.exe golang.org/dl/$1
$1 download
$1 version
$1.exe download
$1.exe version

Usage example: ./gvm go1.9

I have also written an uninstall counterpart.


cd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
rm -rf sdk/$1
rm go/bin/$1
rm -rf /c/Users/ChristopherJohn/sdk/$1
rm /d/go/bin/$1.exe

Usage example: ./ugvm go1.9

Note: you may need to adjust the script to get it to work on your system. It can also work on pure Unix-style systems; just remove anything Windows-related (the .exe lines) from the script.

I could have used Moovweb’s GVM, but the complexity of that system scares me; honestly, I value simple yet brilliant things much more. 🙂