knowledge 📚


Welcome to knowledge, a book by Joel Jucá. If you're reading the online version, use the menu on the left side to browse the content.


This book contains things I learned during my life, either while studying or living. Most of its contents are not properly organized. The reading experience is not linear, so feel free to jump directly to the sections you're most interested in.


I always wanted to write a book with things I've been learning during my life, but it just never happened, simply because I rarely have the time and the willingness to do it at the same time. 😂 But I still want to share. So, I'll just be writing down things, thoughts, techniques, etc., without too much effort on organization.

An unstructured version that gets published is better than a well-formatted one that never goes public.



technology 💻

Technology is revolutionizing human life. Today, almost everything we do is related to, or somehow uses, technology. We communicate, work, chill, cook, date, fall in love, cheat, overcome, explore, learn, evolve, etc., all with technology. Since this thing has grown such deep roots in our species, empowering us to do things we never imagined before, it would be wise to think about it in order to, at least, try to understand where we currently are and where we're heading, now as the coolest kids who ever lived on Earth.


asdf might fail to install some project versions with the following error:

Authenticity of checksum file can not be assured! Please be sure to check the README of asdf-nodejs in case you did not yet bootstrap trust. If you already did that then that is the point to become SUSPICIOUS! There must be a reason why this is failing. If you are installing an older NodeJS version you might need to import OpenPGP keys of previous release managers. Exiting.

It fails to verify the authenticity of the packages' signatures. A simple fix is to import some keys by executing a command provided by the plugin itself:
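In the asdf-nodejs case, the plugin ships a script for exactly this - a sketch assuming asdf's default data directory under ~/.asdf (adjust the path if yours differs):

```shell
# Import the OpenPGP keys of the Node.js release team
# (path assumes asdf's default install location)
bash -c '${ASDF_DATA_DIR:=$HOME/.asdf}/plugins/nodejs/bin/import-release-team-keyring'
```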


"Recent versions not showing up. WTF?"

asdf works alongside plugins, which declare installers and manage them separately. So, in order to see the latest versions of the respective binaries (node, ruby, etc.), you must update their respective plugins.

I ran into this problem today: I was trying to install Ruby 3.0.0, but it wasn't showing up as an option when I ran asdf list all ruby. The solution? Update the respective plugin:

asdf plugin update ruby

You can also update them all, for the greater good:

asdf plugin update --all

ci/cd ⚙️

Tips & tricks out of wisdom acquired from working on CI/CD.

gitlab 🦊

Use file hashes as cache keys

After spending a huge amount of time on it, I figured out it's actually built into GitLab CI itself. In order to use file hashes, add them to cache:key:files:

# .gitlab-ci.yml

# Eg.: a global cache setup
cache:
  key:
    files:
      - yarn.lock

Proper documentation available at

๐ŸŽ macos

Apple and/or macOS related wisdom.

Rebuilding Spotlight's cache

Whenever you find yourself struggling with Spotlight results skewed by past typos, with undesired first results showing up for what you type, you can just reset the Spotlight cache.

To do it, go to System Preferences > Spotlight > Privacy, then add your disks to the search prevention list. This forces Spotlight to clear its cache of indexed items (docs, apps, etc.). :-) After doing so, just remove the disks from the same list so Spotlight's cache index can be rebuilt.
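If you prefer the terminal, the same rebuild can apparently be triggered with mdutil (here targeting the root volume; point it at another volume if that's what you want reindexed):

```shell
# Erase the Spotlight index of the root volume; it gets rebuilt automatically
sudo mdutil -E /
```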

elixir 💜

erlang ⓔ

My adventures on this amazing distributed computing platform.

Erlang on macOS through asdf

Installing Erlang through asdf is probably the easiest way to get it up and running on macOS. You'll need OpenSSL to run the installation, and the easiest way to get it is through Homebrew:

brew install openssl

Homebrew's OpenSSL formula is keg-only, which means that even after installation it won't be globally available in your system. There's a useful note about it in asdf-erlang's README, but I ended up with a command that uses brew to get the path of the OpenSSL installation:

# first, export this Kerl-specific variable with the following content
export KERL_CONFIGURE_OPTIONS="--without-javac --with-ssl=$(brew --prefix openssl@1.1)"

# then, install the Erlang version you want (e.g.: 22.1.5)
asdf install erlang 22.1.5

These options will disable Java-related features and point to the correct OpenSSL paths.

You'll find more information about setting up Erlang on your machine in the Setup chapter of the Adopting Erlang book (a must-read if you're adopting Erlang just now).

ffmpeg 📼

FFmpeg is a cross-platform tool to record, convert and stream audio and video. It's the de facto standard for everything related to video processing - and I use it to perform basic video editing from the command line (yep, it's kind of using a bazooka to kill a single cockroach - but it's practical and easy to use).


To convert video formats, run:

ffmpeg -i movie.mp4 movie.avi

To cut a movie from the moment 1min 23s up to the moment 23min 45s, run:

ffmpeg -ss 01:23 -to 23:45 -i ./movie.mp4 -c copy out.mp4


The ffmpeg command, in its most basic form, accepts an input file (with the parameter -i) and an output file (given as the last argument):

ffmpeg -i movie.mp4 movie.avi

That's all it needs to convert an MP4 movie to the AVI format. See the file extensions? ffmpeg understands that it's a conversion operation by relying on the file extensions you provide. Pretty clever, huh?!

Well, the thing I do the most is cutting a video from moment X to moment Y. I do it mostly to edit the recordings of the meetups I ran at Elug CE. With ffmpeg you can do it using the parameter -ss <position>. It must be used before the parameter -i, so it affects the decoding of the input file. So, when using ffmpeg to cut videos, the order of these parameters does affect how ffmpeg works.

The value <position> must be a time duration specification (detailed below).

Here's an example of how to cut from the moment 1min 23s up to the moment 23min 45s:

ffmpeg -ss 01:23 -to 23:45 -i ./movie.mp4 -c copy out.mp4

Time duration specification

There are two syntaxes for expressing time durations:

[-][HH:]MM:SS[.m...]

HH expresses the number of hours, MM the number of minutes for a maximum of 2 digits, and SS the number of seconds for a maximum of 2 digits. The m at the end expresses decimal value for SS. So, if you get two 2-digit numbers separated by a colon, it's read as MM:SS.

[-]S+[.m...][s|ms|us]

S expresses the number of seconds, with the optional decimal part m. The optional unit suffix switches from seconds to milliseconds or microseconds.

In both expressions, the optional - indicates negative duration.


  • 55: 55 seconds
  • 0.2: 0.2 seconds
  • 200ms: 200 milliseconds (or 0.2s)
  • 200000us: 200000 microseconds, (or 0.2s)
  • 12:03:45: 12 hours, 03 minutes and 45 seconds
  • 23.189: 23.189 seconds

So, for a quick reference:

  • 12:34: 12 minutes and 34 seconds
  • 01:23.567: 1 minute, 23 seconds, and 567 milliseconds
  • 12:34:56: 12 hours, 34 minutes, and 56 seconds
  • 12:34:56.789: 12 hours, 34 minutes, 56 seconds, and 789 milliseconds

-ss position (input/output) When used as an input option (before "-i"), seeks in this input file to position. Note that in most formats it is not possible to seek exactly, so ffmpeg will seek to the closest seek point before position. When transcoding and -accurate_seek is enabled (the default), this extra segment between the seek point and position will be decoded and discarded. When doing stream copy or when -noaccurate_seek is used, it will be preserved.

When used as an output option (before an output url), decodes but discards input until the timestamps reach position.

position must be a time duration specification, see the Time duration section in the ffmpeg-utils(1) manual.

ffmpeg -ss 11:58 -t 12:30 -i ./elug-ce-meetup-5-eceba7abc25e241c41f43a898ea32dd49a9bf71b\ on\ 2020-11-04\ 23-20.mp4 -c copy out.mp4


freebsd 😈

I'm giving FreeBSD a real try during this pandemic thing. I'm excited by the fact FreeBSD is extremely secure and stable, thus being a great option for servers. Also, there's this jails thing - something like containers - that seems to be even more powerful than LXC/Docker, and I would like to explore its potential as a disposable desktop environment.

First experience: FreeBSD on GCP

I need some practical experience to learn things, so I launched a FreeBSD node (a shiny f1-micro one) on Google Cloud. Since it has an Always-Free tier and I currently don't have things running on GCP, it will cost me nothing (yay!). I'll use it both to learn FreeBSD itself and to deploy some service to play with. I've been looking for a chance to deploy a Mastodon instance somewhere, just to play with - and this seemed like a great opportunity!

Creating the machine required me to find a FreeBSD option on GCP's Marketplace. Luckily, it's kind of an official one, made by the FreeBSD Release Engineering team. I'm glad there's an official/supported option available - I wouldn't consider building an image myself.

The next step was to SSH into it. GCP provides a way to launch a browser window with a shell in it. I used it to send my public key to ~/.ssh/authorized_keys, but even then I couldn't SSH into it. No luck.

The solution ended up being the Google Cloud SDK's command-line tool gcloud (available in Homebrew as a cask), which you can install with the following command (macOS-specific):

brew cask install google-cloud-sdk

Then, you would run the following command to connect to the GCP machine:

gcloud compute ssh "<your-username>@<your-machine-name>" --project="<your-project>"

Welcome to FreeBSD!

Release Notes, Errata:
Security Advisories:
FreeBSD Handbook:
Questions List:
FreeBSD Forums:

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
ZFS can display I/O statistics for a given pool using the iostat subcommand.
By default, it will display one line of current activity.  To display stats
every 5 seconds run the following command (cancel with CTRL+C):

zpool iostat 5

To view individual disk activities, specify the -v parameter:

zpool iostat -v

Of course, both can be combined. For more options, see zpool(8).
                -- Benedict Reuschling <>

It seems that after this "Edit /etc/motd (...)" line the content is dynamic, and shows interesting tips about FreeBSD every time you log in. Cool.

Package management with pkg

The package management solution of FreeBSD is Ports - or FreeBSD Ports, to be exact. The name seems weird - but yeah, if it works fine I won't care about a weirdo name. Its main command-line interface is the command pkg, which installs pre-built binary packages and seems quite similar to Debian's APT (apt):

# Search for packages with `pkg search`
pkg search htop

# Install a package with `pkg install`
pkg install htop

# See a list of commands with `pkg help`
pkg help

I thought it would be harder to learn Ports, but since it looks so similar to Debian's, I felt at home.

Service management

FreeBSD uses rc to manage services. Again, it seems to be similar to Debian's, so nothing much new here. To manage a service you use the service command, followed by the service name (eg: sshd), and finally a subcommand (eg: start, stop, restart, etc.):

# Start sshd
service sshd start

# Stop sshd
service sshd stop

# Restart sshd
service sshd restart

It would make more sense if you had something like service <subcommand> <service-name> - but again, if it works I can live with it.

The FreeBSD Handbook

I'm learning all this stuff by reading the FreeBSD Handbook, the official project documentation. In fact, I'm learning a ton of interesting stuff about UNIX systems - like permission tricks such as setuid, setgid, and sticky bits.

I highly recommend reading the FreeBSD Handbook if you're interested in learning more about UNIX and/or BSD! The FreeBSD documentation is really awesome and worth reading.

lfe 👽

LFE (Lisp Flavoured Erlang; website, Wikipedia) is a Lisp dialect built on Erlang/BEAM.

oubiwann on webdev

I stumbled upon a discussion in the #web channel of LFE's Slack, where @oubiwann (Duncan McGreggor; GitHub, Twitter) was sharing lots of interesting advice on architectural matters for web software.

I just copy-pasted everything down below! Too much gold to lose it to Slack. 😅

oubiwann Apr 10th at 11:15 AM
I've done TONS of web development in LFE, both web front ends and REST APIs

oubiwann  2 days ago
I've done a tiny bit of web dev in Common Lisp and various Scheme dialects

oubiwann  2 days ago
but I've done a MASSIVE amount in Clojure (100s of 1000s of lines of code)

oubiwann  2 days ago
while the ecosystem for LFE is not anywhere as mature as what Clojure enjoys (luxuriates in, actually) I much prefer web dev in LFE

oubiwann  2 days ago
the language itself is so perfectly suited to it, supplemental libraries are not actually needed as much

oubiwann  2 days ago
side note: the best Clojure libraries I've used for web dev are not actually compojure and ring (though I did use the first initially, and all the ring libs for much of my time in the Clojure ecosystem), but rather what I used towards the end of my time at NASA:

oubiwann  2 days ago
any future LFE work I do around HTTP / REST / routes macros will be 100% influenced by that library

oubiwann  11:16 AM
not much recently, though
regardless, LFE is truly fabulous for this type of thing (thanks to its BEAM inheritance and beautiful syntax(lessness))
what I recommend to people who want to develop services for eventual deployment is start with an OTP release right away -- don't wait!
I avoided that for years
and shouldn't have
pretty much everything I build now starts with rebar3 lfe new-release myapp

oubiwann Apr 10th at 11:20 AM
also: separate these things cleanly, in different modules:
1. routing
2. request transformations
3. business logic
4. database access (even if you start with something in-memory)
5. generation of responses

oubiwann  2 days ago
this has also been critical in my many management consulting efforts: walking into teams that were on the verge of technical and/or morale collapse, I always saw that these principles had not been followed

oubiwann  2 days ago
when mgmt or product introduced radical new changes, the whole codebase had to be touched, instead of just changing the bits of code that implemented the product-specific bits

oubiwann  2 days ago
morale always radically changed a) after I worked with them to refactor to proper separation of concerns, and b) upon receiving the next request from mgmt to make radical changes ....

oubiwann  2 days ago
they never can believe their eyes when these refactors help them make changes in minutes vs their previous months

oubiwann  2 days ago
back to the bullet points:
1. this is very simple, format and approach depends almost entirely upon the selected framework (if you have to roll your own, of course this will be much more involved!)
2. this should contain no HTTP response-related code (or any of the other things mentioned in the list); this is all about transforming all required data sent in the request (params, query strings, POSTed body, URL paths, etc.) into an application representation (e.g., one or more Erlang maps or LFE records) -- DON'T use the framework's (or the underlying HTTP lib's) data structures for this! Your app needs to be losely coupled to those (you need to be able to easily replace them with another, without having your application suffer from vendor lock-in!)
3. this is where all the crazy shit lives; don't let the insanity here impact any other part of your app -- no leaky abstractions!
4. I've found the best way to handle this one is do define an app-specific internal API for data access, and hide the underlying implementation details behind that (VERY useful when switching out backends / DBs)
5. this last one is often skipped -- don't fall into that trap! It's usually a small amount of code, but you want your responses, errors presented to consumers, etc., to be easy to test, maintain, and to get new features (all of which should happen without touching any other part of your project)

oubiwann  2 days ago
think carefully about error propogation (edited)

oubiwann  2 days ago
I'd seriously recommend creating a single data structure representing both "results" and "errors" and making sure this is passed and transformed appropriately from the first bullet to the last (edited)

oubiwann  2 days ago
this would, of course, be affected by how you handle errors inside the app and how much you present to the final consumer of the app/API

oubiwann  2 days ago
you want to protect yourself from revealing too much of your app's internal workings to a public consumer or to bad actors that might take advantage of these errors to exploit potential vulnerabilities hinted at by the errors (edited)

oubiwann  11:21 AM
intermixing one or more of those has been the cause of much difficult-to-read code and origin of too many bugs
(not to mention slow-to-ship releases) (edited)
the tendency to say "oh, this is just a simple thing; I'll keep things separate in this module" is strong ... and by the time the code has evolved to something not-so-simple, it's too late (and many hard-to-see issues have already snuck into the code) (edited)

oubiwann  11:37 AM
incidentally, coding web projects in such a way also leads to much improved project delivery times -- I credit the accompanying clarity of thought (and thus code) which lends itself well to quickly iterating on logical, functional portions of a project

oubiwann  11:51 AM
two very important (and quick) reads when thinking about shipping software using the BEAM:
don't let the names fool you! These are two of the best bits of principal engineer-level pieces of writing for software development and deployment; I'm constantly recommending them to non-Erlangers for reading

linux ๐Ÿง

Creating boot USB sticks/microSD cards on macOS

A real quick-and-dirty way.

#1: Get an operating system image

You gotta get a *.img file somehow. Most distributions ship them as compressed archives (tar, gzip, xz, etc.).

If all you've got is a *.iso file, convert it to *.img using hdiutil:

hdiutil convert -format UDRW -o /output.img /path/to/your/file.iso

#2: Prepare your removable

Format your removable as FAT32.

#3: Now, find the physical address of your removable

Plug your removable into your macOS system and run:

diskutil list

You'll see something like this:

/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *100.0 GB   disk0
   1:                        EFI EFI                      79.0 MB   disk0s1
   2:          Apple_CoreStorage HD                       99.0 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3

/dev/disk1 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS HD                      +98.0 GB   disk1
                                 Logical Volume on disk0s2

/dev/disk2 (external, physical):
  #:                       TYPE NAME                    SIZE       IDENTIFIER
  0:     FDisk_partition_scheme                        *2.0 GB     disk2
  1:                 DOS_FAT_32 SD                      2.0 GB     disk2s1

Now figure out which one is the physical address of your removable disk.

TL;DR: diskutil list lists all your disks, both physical and virtual. You'll have to figure out which physical disk is your removable, but you can generally tell by the storage capacity (in the example, it's a microSD card with 2GB of storage).

#4: Copy the raw data to your disk

After identifying the physical address of your removable, run:

diskutil unmountDisk <your removable disk address>

According to the example above, it would be:

diskutil unmountDisk /dev/disk2

Then use dd to copy the contents of your *.img file into your disk:

sudo dd if=path/to/your/file.img of=<your-removable-disk> bs=1m

Pro tip: Use GNU dd to get progress reporting

The dd binary that ships with macOS does not report progress during its operations, but the GNU version of dd does. On macOS, you can install it through the Homebrew package coreutils (brew install coreutils). GNU dd will then be available in your $PATH as gdd (the coreutils package prefixes its binaries with a g so they don't clash with the system's own BSD tools).

With GNU dd in place, you can run it just like you would run dd, with an additional option, status=progress:

gdd if=./your-image.img of=/dev/disk2 status=progress bs=8388608

Note: the parameter bs defaults to 512 (bytes), which is very small, so you should inform your own: 1024 to read/write 1KB at a time, 2048 for 2KB, etc. GNU dd expects uppercase magnitude suffixes (e.g.: bs=8M for 8 MiB) - the lowercase bs=8m that BSD dd accepts won't work. I've been using 8388608 (8MB) and it works well - though writing to a microSD imposes its own writing speeds.
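For reference, that 8388608 figure is nothing magical - it's just 8 MiB written out in bytes:

```shell
# 8 MiB expressed in bytes: 8 * 1024 * 1024
echo $((8 * 1024 * 1024))
```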

Additional resources:


javascript 💛

My [former] preferred language, with which I have a love-hate relationship.

jest 🃏

My new preferred testing framework (2020 edition).

Mock clearing/resetting/restoring WTF

There are three options to undo things in Jest mocks:

  • m.mockClear(): removes all mock data (eg: m.mock.calls and m.mock.instances)
  • m.mockReset(): clears the mock (m.mockClear()) and removes mocked return values and implementations
  • m.mockRestore(): resets the mock (m.mockReset()) and restores the original (non-mocked) implementation

PS: the m variable in the examples above is a mock function:

const m = jest.fn(() => true);

mocha ☕️

Not the fastest unit testing framework but ¯\_(ツ)_/¯.

nyan reporter

Add --reporter=nyan to your test script in package.json and your tests will be the most nyaned awesome omg-thats-really-cool ones in the world:

$ mocha --reporter=nyan
 15  -_-_-_-_-_-_-_-__,------,
 0   -_-_-_-_-_-_-_-__|  /\_/\
 0   -_-_-_-_-_-_-_-_~|_( ^ .^)
     -_-_-_-_-_-_-_-_ ""  ""

  15 passing (77ms)

react ⚛︎

The best overcomplicated JavaScript framework out there.


React Hooks are this new way to handle state and side effects (or just effects) in React components. Class-based components are now dead and every React developer now thinks it hurts. God kills a kitten every time you start a new React component with export class...

So, Hooks. They're basically a weirdo replacement for the lifecycle methods componentDidMount, componentDidUpdate, and componentWillUnmount:

import { useState } from "react";

const TogglePage = () => {
  const [isOn, setState] = useState(false);

  return (
    <>
      <button onClick={() => setState(!isOn)}>Toggle</button>
      Current state: {isOn ? "On" : "Off"}
    </>
  );
};

In the example above, we use useState() to create a local state container with an initial value of false (Hooks don't dictate the type of data that goes in it - it's up to you). It returns a pair: the first item is the state itself, and the second is a function that updates the state container's internal state.

The cool part is: whenever you update the state container, your functional component is re-rendered.


Some notes on the useEffect() hook:

  • Effects are executed on every fucking render - both on mount and updates, so it's pretty easy to create memory leaks with it. Be careful, motherfucker!
  • You can also return a function from it, which is then called when the component unmounts. These are called clean-up functions
  • You can pass an array as the second argument of useEffect() to control when it should run: whenever the hook is about to execute, it will compare the array with its last version. If it differs, your hook is executed again
    • You can have multiple items in this array - and whenever any of these items change, the hook (and the clean-up function) will execute again
    • If you pass an empty array the hook will be executed once, during mount - and if there's a clean-up function, it'll be executed once, during unmount (two empty arrays will never differ)

ruby 💎

Ruby is awesome!

Command-line Twitter with t

Ruby has a powerful library called t ( Unfortunately, it seems abandoned by its author, and currently presents a dependency issue, requiring you to install version 6.1.0 of the twitter library instead of 6.2.0:

gem install t
gem install twitter -v 6.1.0
gem uninstall twitter -v 6.2.0


All things command-line wizardry.

Infinite loop

You'll often need to run a command over and over again for some reason (mine right now: I have to check if a file exists). There are obviously better options - but doing things through command-line ninjutsu is always more fun. So, a one-line infinite loop in Shell (both sh and Bash):

while true; do echo "Hello!" && sleep 1; done

This will run echo "Hello!" forever, "sleeping" for one second between each execution.
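A variation for my actual use case - waiting for a file to show up - wrapped in a hypothetical wait_for_file helper (the function name and the echoed message are my own invention):

```shell
# Poll every second until the given path exists, then report it
wait_for_file() {
  while [ ! -e "$1" ]; do sleep 1; done
  echo "found: $1"
}

wait_for_file /etc/hosts   # returns immediately, since /etc/hosts exists
```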

vs code 📝

Error "Cannot find runtime 'node' on PATH. Is 'node' installed?"

I had this error when trying to debug a Node.js project using VS Code's built-in debugger. It happens because VS Code is unable to find node in your $PATH. No news here - but I have both my Bash and Zsh configured to add /usr/local/bin to my $PATH, and no luck so far.

I never managed to fix this bug - but I found a workaround so good that a proper fix became unnecessary: running VS Code as a subprocess of your current shell.

In practice, it's quite simple actually. Just run VS Code from your command line instead of opening it directly. Go to your project directory and run code from it. Ex.:

cd path/your/project
code .

It opens VS Code with your project directory already loaded - but the important thing is that, since you ran it from your current shell session, it will inherit your current $PATH, and node (or in my case, nodemon) will be there. 🙂

Extension management from the command line

It's possible to manage VS Code extensions through the command line, using the following commands:

code --extensions-dir <dir>
    Set the root path for extensions.
code --list-extensions
    List the installed extensions.
code --show-versions
    Show versions of installed extensions, when using --list-extensions.
code --install-extension (<extension-id> | <extension-vsix-path>)
    Installs an extension.
code --uninstall-extension (<extension-id> | <extension-vsix-path>)
    Uninstalls an extension.
code --enable-proposed-api (<extension-id>)
    Enables proposed API features for extensions. Can receive one or more extension IDs to enable individually.


🎼 music

I'm back on learning music, so I'll document here the stuff I've been learning.

Music Theory 101

It's important to know what the basic building blocks of music are, right?

Music Notes

The basic building block of music is a note. Notes are represented by the English characters A-G (A, B, C, D, E, F, and G). We know them by the solfège do–re–mi–fa–sol–la–si, which translates to the characters C, D, E, F, G, A, and B (it starts on C because it's the equivalent of do, with D being the equivalent of re, and so on).

Notes can be modified by accidentals, creating intermediary notes between them. The sharp sign (#) raises a note by a semitone (or half step), while a flat (or bemolle, ♭) lowers a note by a semitone. So, C# would be a note that exists between C and D, while E♭ would be a note that exists between D and E. It's also possible to have a semitone represented by two possible accidentals - for instance, C# could also be represented as D♭. Since the sharp sign raises one semitone and the flat lowers one semitone, the two representations are equivalent.

Note: there are some note pairs that do not have intermediary semitones: B and C, and E and F. There are no semitones between them, so the notes B#, C♭, E# and F♭ do not exist.

The full list of notes and sharp semitones would be:

C C# D D# E F F# G G# A A# B

The equivalent full list of notes and flat semitones would be:

C D♭ D E♭ E F G♭ G A♭ A B♭ B

Scales and Tetrachords

In music, a scale is an ordered set of musical notes. One of the most common scales is probably C major: C, D, E, F, G, A and B. It has no sharps or flats, and can easily be played on a piano by just hitting the white keys, starting from C.

A tetrachord is a set of four notes separated by three intervals. Tetrachords are the basic building blocks of a major scale, and their notes are systematically spaced by the following rule: whole step, whole step, half step, with a whole step separating the first tetrachord from the second. This way you can derive any major scale by just starting from a given note and following the tetrachord spacing order.

Using the tetrachord schema, I could describe the logic behind the C major with:

C major (C D E F G A B): the first tetrachord goes one whole step (full note) from C to D, one whole step from D to E, and a half step (semitone only) from E to F (remember, there are no semitones between E and F). Then one whole step to start the second tetrachord at note G. Then, one whole step from G to A, one whole step from A to B, and one half step from B to C (again, there are no semitones between B and C).

Below are some other major scales I'm writing through the tetrachord system (I'm checking them on Wikipedia right after writing, and they are correct - yeah!):

  • D major: D E F# G A B C# D
  • E major: E F# G# A B C# D# E
  • F major: F G A A# C D E F
  • G major: G A B C D E F# G
  • A major: A B C# D E F# G# A
  • B major: B C# D# E F# G# A# B
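Since the tetrachord rule is completely mechanical, it can be automated. Here's a little Bash sketch of mine (using sharp names only, like the lists above) that walks the whole-whole-half pattern over the chromatic scale:

```shell
# Derive a major scale from the tetrachord step pattern:
# W W H (first tetrachord), W (bridge), then W W H (second tetrachord).
notes=(C C# D D# E F F# G G# A A# B)

major_scale() {
  local root=$1 idx i s
  # find the chromatic index of the root note
  for i in "${!notes[@]}"; do
    [[ ${notes[$i]} == "$root" ]] && idx=$i
  done
  local steps=(2 2 1 2 2 2 1)  # whole step = 2 semitones, half step = 1
  local out=("$root")
  for s in "${steps[@]}"; do
    idx=$(( (idx + s) % 12 ))
    out+=("${notes[$idx]}")
  done
  echo "${out[*]}"
}

major_scale C   # C D E F G A B C
major_scale G   # G A B C D E F# G
```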

project management ✅

I've tried many different setups to manage projects: stickies, kanban, many different apps and services, and I failed horribly at getting things done. Now I have a quite simple setup:

  • I use Wunderlist as my main to-do app, with lists for personal and professional things;
  • I generally add all projects and tasks I must/should do to my Wunderlist as lists, so it is common to find lists with dozens of items;
  • Wunderlist has an interesting feature called smart lists that groups items from multiple lists based on their due dates. I use them to see tasks for today and for the current week;
  • My daily focus is always to get done all the shit scheduled for today. Everything that can't be accomplished today gets postponed, either to tomorrow or later, so I can focus on what I can do today;
  • Since Wunderlist helps me focus on my daily goals, I have lists for seasonal projects, like Dsafio, Exercism and HoraExtraJP, alongside all their tasks (e.g.: issues to solve, pull requests to review, or events to organize);
  • Integration with my calendar helps me visualize my daily schedule together with my commitments (planned working hours, meetings, as well as personal and professional appointments).