
CJ-Jackson Blog



Experiment with Microservice and Message Bus (NSQ)

Permalink - Posted on 2019-08-28 17:08

Because I didn’t have much practical experience with microservices, and I never had the opportunity to use them at the company I currently work for, I thought I’d do a little experiment involving microservices and a message bus.

Link to source code

The waiter and the chef are microservices; the customer is just a CLI that talks to the waiter to place an order, and the waiter then sends the order to the chef over the message bus (NSQ). It’s pretty simple and basic really: all it does is send “Pepperoni Pizza” over the bus. But I only needed to prove to myself that I can build microservices, and I believe I have succeeded in that, YAY. 😄

What I like about NSQ, compared to raw TCP/IP, is that I don’t have to manually set up a listener and manage the buffer. I could do that, but it’s a little tedious; instead I can easily set up a Publisher, the one that sends the information, and a Consumer, the one that receives it.


Just a random update

Permalink - Posted on 2019-08-10 16:29, modified at 16:33

It’s been a little while since I made my last blog post, well I have been a little busy lately with work, working out at the gym, learning to play the guitar, learning a bit of Japanese (trying to master Hiragana ひらがな and Katakana カタカナ is a little bit tricky, hopefully, I get there), playing a couple of video games, mainly Crash Team Racing Nitro-Fueled and Super Mario Maker 2.

I also took the time to learn Rust, and I enjoyed it; it’s just that the IDE I’m working with struggles with external libraries, so I’m going to stay off Rust until that is fixed. It’s difficult for me to stay productive without auto-complete, I just can’t keep looking back and forth at the documentation, it will burn me out and that’s no good, and I need to work fast. 😄

I’ll take time to learn other programming languages, I just don’t want to be that person who uses JavaScript for absolutely everything; it’s just unrealistic. If I were developing a video game, I would use C++: it’s a big language with no garbage collection, so I will learn it when I get the time and overcome the fear, and hopefully it will be fun. I will also take the time to learn Ruby.


Deploying Docker Git Containers Remotely

Permalink - Posted on 2019-04-14 18:14, modified on 2019-07-22 12:58

If I were to build a Docker image from a git repository and it requires no credentials, I could use the following command:

$ docker build -t example/image https://github.com/docker/rootfs.git

But what if it does require credentials and I want to use SSH public key authentication? The thing is, the Docker daemon might not have access to the private key used to log in over SSH. But there is a solution: one could use the git command to create the archive (a tarball) and then pass it to docker, for example:

$ git archive --format tar.gz --output /tmp/example.tar.gz --remote git@github.com:docker/rootfs.git master

$ docker build -t example/image - < /tmp/example.tar.gz

If I wanted to deploy to a remote server, I could make use of scp (to copy) and ssh (to build), for example:

$ git archive --format tar.gz --output /tmp/example.tar.gz --remote git@github.com:docker/rootfs.git master

$ scp /tmp/example.tar.gz user@remoteserver.lan:/tmp/example.tar.gz

$ ssh user@remoteserver.lan "docker build -t example/image - < /tmp/example.tar.gz"

Update: This command will not work with podman; you have to extract the tarball first.

I don’t need to do port forwarding or anything messy, like running ssh in the background, which I’d have to close when I’m done with it; I’d just rather not do it that way. Using port forwarding to upload a tarball, honestly, I find clumsy. I prefer clean, simple and elegant solutions to a complex problem.


Dealing with Dynamic IP Addresses

Permalink - Posted on 2019-03-24 19:01, modified at 19:04

For dealing with dynamic IP addresses, the most elegant solution I could find is to write the network’s public IP address into a simple file using a shell script, and sync the folder across the different machines you trust using file sync software like Syncthing or Resilio Sync. Here is an example of such a script:

#!/bin/bash
cd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
IP=$(dig -4 +short myip.opendns.com @resolver1.opendns.com)
if [ "$IP" != "$(cat IpAddress/Server)" ]; then
        echo "$IP" > IpAddress/Server
        # Tips: you could do some fancy curl stuff here, e.g. cloudflare API ;)
fi

And set up a cron job to run the script about every 15 minutes.
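The crontab entry might look like this (the path ~/IpAddress/update.sh is just an assumed location for the script above):

```
*/15 * * * * ~/IpAddress/update.sh
```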

Yes, one could say you can use DynDNS, but there are a few disadvantages to that approach: you can’t control where you’re sourcing the IP address from, you end up distributing the IP address globally, which may not be desirable, and you’re handing over control to a third party.

How would I use the file with SSH? That’s easy, I’ll show you an example.

$ ssh -o 'HostKeyAlias myhost' username@$(cat ~/IpAddress/Server)

It’s really that simple; just make sure you add the alias to ~/.ssh/known_hosts and you’re done 🙂


On Swapping MongoDB Drivers

Permalink - Posted on 2019-03-10 17:13, modified on 2019-03-24 19:04

I recently swapped the MongoDB driver from a third-party one to the official driver. The process mostly went smoothly because I was well disciplined about writing high quality code and sticking to good practice; otherwise it would have taken me a lot longer to complete.

I did have a few issues along the way.

Refactoring GridFS Collections

The third-party MongoDB driver somewhat conforms to the GridFS specification: it allows you to specify the collections for files and chunks, which is nice. The official driver does not let you specify the collections; instead you specify the bucket name, whose default value is ‘fs’, and I left it like that.

The naming convention for the collections is ‘bucketName.files’ and ‘bucketName.chunks’. As I mentioned earlier, I left the bucket name at the default, so I had to rename the collections to ‘fs.files’ and ‘fs.chunks’. When I did that live I had to deploy immediately, so there was little downtime.

I also had a few data type mismatches, so I had to write a script and execute it manually in Robo 3T, replacing NumberInt (int32) with NumberLong (int64), and that fixed the mismatch.

db.getCollection('fs.files').find({}).forEach(function(x) {
        x.chunkSize = new NumberLong(x.chunkSize);
        x.length = new NumberLong(x.length);
        db.getCollection('fs.files').save(x);
});

The third-party driver allowed access to the metadata, but the official one did not, so I had to create a clone of ‘fs.files’ called ‘filesMeta’ so I could still access and update the metadata; I also kept the IDs 1:1 with each other.

Data type mismatch with the key of the map

The third party allows you to use any data type as the map key; the official driver only allows strings, so I had to change the key’s data type from int to string and the problem was solved. It’s not too bad: I had attached a method to the map type, so all I had to do was convert int to string in that method. I’m using Go, which is strongly typed, so I had to do the conversion explicitly, but I don’t mind; I like clarity, and clarity is always good.

// Before
type PageCollection struct {
	Ref        string       `bson:"Ref"`
	Collection map[int]Page `bson:"Collection"`
}

func (p PageCollection) GetPage(pageNumber int) Page {
	page, found := p.Collection[pageNumber]
	checkIfFound(found)

	return page
}

// After
type PageCollection struct {
	Ref        string          `bson:"Ref"`
	Collection map[string]Page `bson:"Collection"`
}

func (p PageCollection) GetPage(pageNumber int) Page {
	page, found := p.Collection[fmt.Sprint(pageNumber)]
	checkIfFound(found)

	return page
}

The most complained-about language is JavaScript, which is loosely typed, the opposite of strongly typed, and trust me, data type mismatches take longer to figure out in JS than they do in Go!


Jersey Waves

Permalink - Posted on 2019-03-09 22:22, modified on 2019-06-02 15:05

I could stand there and watch the waves all day.


My own take on GVM

Permalink - Posted on 2019-03-01 14:21, modified on 2019-03-09 11:48

This is the script I use to manage Go SDKs; it’s built on top of the official golang.org/dl way of doing it. I run it inside Windows Subsystem for Linux (WSL), and it manages both WSL and Windows itself in one call inside WSL.

./gvm

#!/bin/bash
go get golang.org/dl/$1
GOOS=windows go build -o /d/go/bin/$1.exe golang.org/dl/$1
$1 download
$1 version
$1.exe download
$1.exe version

Usage example ./gvm go1.9

I also wrote an uninstall counterpart.

./ugvm

#!/bin/bash
cd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
rm -rf sdk/$1
rm go/bin/$1
rm -rf /c/Users/ChristopherJohn/sdk/$1
rm /d/go/bin/$1.exe

Usage example ./ugvm go1.9

Note: you may need to adjust the script to get it to work on your system. It can also work on pure Unix-style systems; just remove anything Windows related (the .exe lines) from the script.

I could have used Moovweb’s GVM, but the complexity of the system scares me; honestly, I value simple yet brilliant things much more. 🙂


How do I route URLs?

Permalink - Posted on 2019-02-17 14:14, modified at 14:29

That’s very easy: all I use are constant variables in their own package, and I just reference the constant inside the router, or with Sprintf if I’m doing reverse URLs. Here is an example of the constant variables I use for URL routing.

package frontUrl

const (
	BlogIndex      = "/"
	BlogAjaxIndex  = "/blogajax"
	BlogIndexPage  = "/p/:page"
	BlogIndexPageF = "/p/%d"
	BlogEntry      = "/blogentry/:idSlug"
	BlogEntryF     = "/blogentry/%d-%s"
	BlogFeed       = "/feed.json"

	FontsFiles       = "/fonts/*filepath"
	JavascriptsFiles = "/javascripts/*filepath"
	StylesheetFiles  = "/stylesheets/*filepath"
	ImagesFiles      = "/images/*filepath"
	FaviconFiles     = "/favicon/*filepath"
	DynImg           = "/dyn-img/:name"
	DynImgF          = "/dyn-img/%s"
	Style            = "/generated.css"
)

An example of using a constant variable inside the router.

func (b controllerBootKit) entry() {
	b.router.GET(frontUrl.BlogEntry, func(writer http.ResponseWriter, request *http.Request, params httprouter.Params) {
		context := ctx.GetContext(request)
		b.controller.BlogEntry(context, b.idSlugValidator.GetIdSlugData(params))
	})
}

Yes, I get full control of what goes on between the router and the controller. It’s wonderful, and it’s also the best place to apply user restrictions (permissions, CSRF, you name it); many frameworks try to hide this control from you, which feels wrong to me.

How did I use it inside the template?

With templates you can’t reference the constant variables directly, but you can use functions to pass in a map; here is an example.

func buildExampleTemplate() *template.Template {
	m := template.FuncMap{}

	urlMap := map[string]string{
		"indexPage": BlogIndexPage,
	}
	m["url"] = func() map[string]string { return urlMap }

	return template.Must(template.New("Test").Funcs(m).Parse(`{{ $url := url }}{{ printf $url.indexPage 1 }}`))
}

You could use global maps, but I wouldn’t recommend it, as it can cause side effects; it’s better for each template to have its own URL map.

The constant variables can also be used for HTML templates or SQL statements, but I would use a tool for that, embedder for example.


Why do I conform to good practice while programming?

Permalink - Posted on 2019-01-01 14:17

The reason I prefer to conform to good practice is that I want to be able to understand source code I wrote six months ago. If I didn’t conform to good practice, the code I write would be difficult to understand in six months’ time, not only for me but for other programmers as well; that’s the main point of conforming to good practice.

Happy New Year to Everyone 🥳


Merry Christmas and a Happy New Year

Permalink - Posted on 2018-12-25 09:51, modified at 09:50