type StringOrNumber = string | number
Here the StringOrNumber type can be either string or number, as expected. The member types do not have to be primitive types; they can be custom types as well.
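For example, a union of two custom types (the Cat and Dog types here are made up purely for illustration) could look like this:

// hypothetical custom types, just to illustrate a union of non-primitive members
type Cat = { name: string; meow: () => void }
type Dog = { name: string; bark: () => void }

type Pet = Cat | Dog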
When dealing with union types, before we can apply operations or functions that are specific to one member type, we need to figure out which member type we actually have, and that's called type narrowing.
There are many ways to narrow the type, but in this post I specifically want to cover the syntax for creating a custom function to do type narrowing. I found myself referring back to the “Using type predicates” section on the Narrowing page in the Typescript documentation, so I figured it might be a good idea to make it slightly easier to find and potentially easier to remember by writing a post about it.
Let's say we want to create a type narrowing function for the StringOrNumber type above. It would look like this:
function isString(input: StringOrNumber): input is string {
return typeof input === 'string'
}
Here, input is string is called a type predicate and it is the return type of the isString function. In the function body, it needs to return a boolean value: true means input is the expected type and false means it is not. The function's logic can be as complex as it needs to be. As long as it returns a boolean value in the end, it'll work fine as a type narrowing function, or in other words a type guard.
In this example, returning true means input is a string; otherwise it's a number, considering it can only be either a string or a number.
We can use it like the following:
// assuming we have a function that returns a string or a number
let a: StringOrNumber = getStringOrNumber()
if (isString(a)) {
a.length
} else {
a + 3
}
Of course, this is just a contrived example to demonstrate the correct syntax. In a real world application, we would most likely just write typeof input === 'string' inline instead of creating a function for it.
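That said, a named type guard does pay off when we want to pass it around as a value, for example to Array.prototype.filter, which uses the type predicate to narrow the element type of the resulting array. A small sketch reusing the isString function from above:

const mixed: StringOrNumber[] = ['a', 1, 'b', 2]

// Typescript infers `strings` as string[] because isString returns a type predicate
const strings = mixed.filter(isString)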
One more thing worth noting is that when we create a custom type guard, Typescript would treat us as adults and completely trust that it will do the right thing. For example, we can define a nonsensical type guard like this:
function isString(input: StringOrNumber): input is string {
return typeof input === 'number'
}
And Typescript would happily accept it and give no compiler error or warning, but since it's narrowing the type incorrectly, some runtime errors should be expected after using this type guard.
In this post, I briefly talked about what type narrowing is and then looked into how to define our own type guard. As usual, the hope is to help my future self and potentially be of service to other people who are starting their Typescript journey as well.
When trying to understand macros, I use a slightly different mental model: a macro is like a template with a name that can optionally have parameters; when the name is used, it's substituted by the template with “values” of the arguments injected.
Sorry if that didn't make it better. I'll try to explain what I mean by that. But let's first review how C macros work.
Before actually compiling a C program, the C compiler will use the C preprocessor to transform the program, which is also referred to as preprocessing. One of the things that happens in this preprocessing step is macro expansion.
In GNU's online documentation for the C Preprocessor, a macro is defined as follows:
A macro is a fragment of code which has been given a name. Whenever the name is used, it is replaced by the contents of the macro.
You can define a macro like this:
#define DOUBLE(x) (2 * x)
NOTE: by convention macro names in C use uppercase.
And then use it as below:
DOUBLE(5)
During preprocessing, the preprocessor will replace it with (2 * 5). The compiler would just see (2 * 5) as if you had written it instead of DOUBLE(5) in the first place.
So a macro in C allows us to define a fragment of code that can have parameters, and when it’s used the macro name would be replaced by the code fragment we defined with arguments interpolated.
It's worth noting that since this happens before compilation, the program is still just a piece of text, so both argument interpolation and macro expansion are just literal text substitution.
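Because the substitution is purely textual, the classic C macro pitfalls follow directly from this. For instance, with the DOUBLE macro above:

DOUBLE(1 + 2)   /* expands to (2 * 1 + 2), which evaluates to 4, not 6 */

This is why macro bodies conventionally parenthesize every parameter, e.g. #define DOUBLE(x) (2 * (x)).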
How does this relate to Elixir's macros? Well, macro expansion in Elixir is definitely not text substitution, but it's still substitution; it just happens at a higher level.
Most programming languages have an Abstract Syntax Tree (AST), which is a tree structure the compiler builds from the source code before turning it into either machine code or byte code.
In most languages, the AST is not exposed to us developers and we can get our code working without worrying about the AST or even knowing about it.
In the case of Elixir, the compiler gives us access to the AST. This comes with great power and allows us to do many things that aren't possible in other languages, creating macros among them.
You can get the AST for a piece of code by using quote, for example:
iex(1)> quote do
...(1)> 1 + 2
...(1)> end
{:+, [context: Elixir, import: Kernel], [1, 2]}
This is probably the simplest form, but in general Elixir's AST is represented as a three-element tuple. When the expression is more complex, its corresponding AST is usually deeply nested.
In Elixir, the AST is also known as quoted expressions.
For more details on working with quoted expressions, please refer to the official Quote and unquote guide.
One thing to remember about macros in Elixir is that they receive AST as arguments and return AST.
You can define a macro with defmacro:
defmodule MyIf do
  defmacro if(condition, do: action) do
    quote do
      case unquote(condition) do
        x when x in [false, nil] -> nil
        _ -> unquote(action)
      end
    end
  end
end
Then you can use it by:
require MyIf
MyIf.if true, do: IO.puts("Hello world!")
# or
MyIf.if false, do: IO.puts("Will not print anything")
To understand how macros work in general, I often use the tree metaphor. The entire program is just a big tree containing data and expressions as nodes, some of which are macro usages. This is literally correct for Elixir, because the compiler does convert the program into a big abstract syntax tree in the form of deeply nested three-element tuples.
Defining a macro is like creating a template in AST form, or in other words a sub-tree. If there are arguments, they can be injected into the sub-tree with unquote.
During macro expansion, the Elixir compiler will replace each macro usage node with the AST sub-tree returned by its macro definition with the arguments injected. Because macros can be used inside another macro, macro expansion will happen repeatedly until there are no more macros.
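If you want to see this substitution in action, one way (a rough sketch, assuming the MyIf module above has been compiled) is to expand the macro usage manually and print the resulting AST back as code:

iex> require MyIf
iex> quoted = quote(do: MyIf.if(true, do: :ok))
iex> quoted |> Macro.expand_once(__ENV__) |> Macro.to_string() |> IO.puts()
case true do
  x when x in [false, nil] -> nil
  _ -> :ok
end

The exact output may differ slightly between Elixir versions, but the shape should match the quote block in the macro definition, with the unquoted arguments filled in.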
If we compare Elixir macros with C macros, we can see that they both offer a way to substitute an expression with a template. The difference is that in C we define the template as text and replaced as text, whereas in Elixir, the template is defined as a piece of AST and the substitution happens at the AST level as well.
To figure out what a particular macro does, I usually think of it as copying what the macro definition has and pasting it where the macro is being used. For unquoted arguments, mentally replace them with the corresponding values passed in.
When the macro is more complex, this might not work, but it should still set one on the right path in understanding the macro.
Macros are hard and they can be daunting even for experienced developers.
In this short post, I tried to offer a different perspective in understanding macros in Elixir.
It would be fantastic if this makes it slightly easier for someone to learn about macros.
If you'd like to learn more about Elixir macros, check out Saša Jurić's great blog post series Understanding Elixir Macros and Chris McCord's book Metaprogramming Elixir.
The Pi-hole® is a DNS sinkhole that protects your devices from unwanted content, without installing any client-side software.
Pi-hole is a great piece of software if you are interested in a bit more privacy and saving some bandwidth at the same time.
As its name suggests, it can obviously be installed on a Raspberry Pi, but apart from that it actually runs on other Linux hardware as well. As long as your device can run one of the Officially supported Operating Systems, you should be able to run Pi-hole on it.
While supporting other hardware definitely has its use cases, I consider Raspberry Pi to be the best choice for running Pi-hole in the home network, because of its low cost and very low power consumption.
I happen to have a Raspberry Pi Zero that’s been collecting dust since I bought it a few years ago, which is perfect for running Pi-hole.
Obviously we need to install an operating system on Raspberry Pi before we could do anything with it.
The recommended way of installing an operating system for Raspberry Pi is to use the Raspberry Pi Imager. You'll need a computer with a micro SD card reader to install an OS image on your card.
I’m running it on macOS, but it supports Linux and Windows as well. Just make sure to download the correct version for your OS.
Click CHOOSE OS to choose from a list of options. I'd recommend just going with Raspberry Pi OS (32-bit), which is the first option, unless you have a specific purpose for the Pi and you know what you are doing.
Then click CHOOSE STORAGE to select the mounted micro SD card. Remember that the imager will erase the card first, so you'll lose everything on it. Make sure you've got a backup of anything you still need on the card.
Lastly, click WRITE to install the selected OS on your card.
For most of the OS options, the imager will download the OS image while writing it to the micro SD card. If you have a slow Internet connection, you can also download the OS image separately from here, then choose Use custom for the OS option and select the downloaded .img file.
There is also an “Advanced” menu in the imager, which you can open with Ctrl-Shift-X. This menu allows you to perform tasks like enabling SSH and setting the admin password. As described in the next section, we can do those with terminal commands too.
Since I find connecting a monitor, a keyboard and a mouse to the Raspberry Pi quite cumbersome, I'd like to set up a headless Raspberry Pi.
In order to achieve that, there are a couple more things to do before booting the Raspberry Pi: enable SSH access and allow auto WiFi connection.
On macOS, the SD card with the Raspberry Pi OS image is usually mounted at /Volumes/boot.
We can enable SSH by creating an empty file named ssh in the root directory of the card:
cd /Volumes/boot
touch ssh
I’d also like the Pi to be able to connect to WiFi when it boots up.
In order to do that, create a text file named wpa_supplicant.conf with the following content:
country=<your-country-code>
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
network={
ssid="<your-wifi-ssid>"
psk="<your-wifi-psk"
key_mgmt=WPA-PSK
}
Obviously, fill in the ISO 3166 alpha-2 country code of your country, your WiFi SSID and PSK.
Then copy the file to the root directory of the SD card:
cp wpa_supplicant.conf /Volumes/boot
Note: if you are using an older Raspberry Pi (like my Pi Zero), it might not support 5GHz networks.
After the two steps above, we can put the SD card in the Raspberry Pi and boot it up.
Once it’s up and running, we should be able to connect to it via SSH:
ssh pi@<ip_for_raspberry_pi>
The default password is raspberry.
Make sure to update the admin password with passwd after logging in.
When you run commands on the Raspberry Pi, you may see warnings like this:
-bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
So you might want to fix the locale.
If you are comfortable with vi or nano, just edit /etc/locale.gen and uncomment the line starting with en_US.UTF-8.
Otherwise you can run the following command to do the same:
perl -pi -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/g' /etc/locale.gen
Then run the following:
locale-gen en_US.UTF-8
update-locale en_US.UTF-8
After that, the annoying warnings should be gone.
Check out Jared Wolff’s post for more details on locale.
Now the Raspberry Pi is in a reasonable state, we are ready to install Pi-hole.
The most convenient way to is to use the following command:
curl -sSL https://install.pi-hole.net | bash
The installation process is relatively straightforward. Just follow the prompts to get it done.
If you’d like to know what the installation script actually does, check out the source code basic-install.sh.
After Pi-hole is successfully installed, we still need to configure the router to use Pi-hole as the DNS server, which makes sure that all devices on the network will be protected automatically by Pi-hole.
If that’s not supported by your router or you only want certain devices to use Pi-hole, you can configure the DNS server on each device.
You might want to check out this comprehensive guide on the different options to configure Pi-hole as your DNS server.
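Once the DNS settings are in place, a quick sanity check is to point dig (or nslookup) at the Pi directly from another machine on the network. The commands below are just a sketch; substitute your Pi's actual IP address:

# a normal domain should resolve as usual
dig example.com @<ip_for_raspberry_pi>

# a domain on the blocklist should typically come back as 0.0.0.0 with Pi-hole's default blocking mode
dig doubleclick.net @<ip_for_raspberry_pi>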
Pi-hole comes with a default blocklist:
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
which is well maintained and provides good protection without breaking normal functionality in most cases. For many this might be enough, but if you have other requirements, such as blocking adult content or targeted ads, you can add custom blocklists for improved blocking capabilities.
Check out The Best PiHole Blocklists (2021) for some blocklist options and more importantly, what to think about when choosing a blocklist.
You can add/remove blocklists under Group Management -> Adlists in the Pi-hole admin interface. After making changes there, you'll need to either run pihole -g in a terminal or go to Tools -> Update Gravity and click the Update button.
With the default blocklist from Pi-hole, the only thing that no longer works for me is Youtube history on iOS devices.
Adding a domain to the whitelist would solve that:
pihole -w s.youtube.com
Or if you prefer the admin interface, you can add the domain under Whitelist.
If you find other sites or services not working properly after introducing Pi-hole, you might want to check out the Commonly Whitelisted Domains and potentially whitelist relevant domains.
Last but not least, keeping things up-to-date on the Raspberry Pi would be a good idea.
Since Raspberry Pi OS is based on Debian Linux, we can use apt for that:
sudo apt update
sudo apt full-upgrade
Also we can remove packages that are no longer required with:
sudo apt autoremove
In this post, I walked through how to install Raspberry Pi OS, set up a headless Raspberry Pi, install Pi-hole, and manage blocklists.
As with many of my other posts, the main purpose is to serve as a reference for my future self. But if someone else finds it helpful too, I'd be very glad :slightly_smiling_face:
True, it's just the first release candidate, but we are still one step closer to the formal release.
Update (28 September 2021): Phoenix 1.6.0 was released three days ago on 25 September, so now we do have our formal release.
Last week, I managed to upgrade my little side project Rubik’s Cube Algorithms Trainer to Phoenix 1.6 and I’ll share my journey in this post. For the most part, I was following the Phoenix 1.5.x to 1.6 upgrade instructions by Chris McCord.
If you prefer watching a video to reading, I also did a talk about it at the Elixir Sydney September 2021 meetup and the recording is on Youtube.
First we need to update the dependencies in the mix.exs file:
def deps do
  [
    {:phoenix, "~> 1.6.0"},
    {:phoenix_html, "~> 3.0"},
    {:phoenix_live_view, "~> 0.16.0"},
    {:phoenix_live_dashboard, "~> 0.5"},
    {:telemetry_metrics, "~> 0.6"},
    {:telemetry_poller, "~> 0.5"},
    ...
  ]
end
Then run mix deps.get to install the new dependencies.
Two things to note here:
- For phoenix, the override: true option is important, because Phoenix 1.6 is still in RC
- To use HEEx templates, add phoenix_live_view as a dependency even if you don't actually use live view

Next step is to use esbuild for Javascript and CSS bundling. This is an optional step in upgrading to Phoenix 1.6, but it is what I'm all excited about, so it's not optional for me :smirk:
Before jumping into replacing webpack with esbuild, it’s worth having a quick review of the existing Phoenix asset pipeline:
- static assets are served from the priv/static directory
- webpack bundles Javascript from assets/js to priv/static/js
- webpack bundles CSS from assets/css to priv/static/css
- webpack copies other static files from assets/static to priv/static

Now with the new asset pipeline based on esbuild, all static assets are still served from the priv/static directory, so the first item stays the same.
With webpack gone, the other three obviously will change. For JS and CSS, esbuild will handle them; but we do need to deal with images and other assets separately.
Alright, let’s dive into how to make those changes.
First we need to remove webpack config and related node files:
$ cd assets
$ rm webpack.config.js package.json package-lock.json .babelrc
$ rm -rf node_modules
If you use yarn, remove yarn.lock instead of package-lock.json.
Then add esbuild as a dependency:
def deps do
  [
    ...
    {:esbuild, "~> 0.2", runtime: Mix.env() == :dev},
  ]
end
Next add the configuration for esbuild in config/config.exs:
# config/config.exs
config :esbuild,
  version: "0.12.18",
  default: [
    args: ~w(js/app.js --bundle --target=es2016 --outdir=../priv/static/assets),
    cd: Path.expand("../assets", __DIR__),
    env: %{"NODE_PATH" => Path.expand("../deps", __DIR__)}
  ]
Note: here we are providing the relative path from config to assets with the :cd option, so that the esbuild command can be run in the assets directory. Given that, if you have an umbrella app, the path should be something like ../apps/your_web_app/assets.

In your config/dev.exs file, there should be a node watcher that uses webpack under the endpoint configuration. We want to replace that one with the esbuild watcher below:
# config/dev.exs
config :your_web_app, YourWebApp.Endpoint,
  ...,
  watchers: [
    esbuild: {Esbuild, :install_and_run, [:default, ~w(--sourcemap=inline --watch)]}
  ]
Note: this is for local development only.
Then we deal with images.
The following is not mentioned in the Phoenix 1.6 upgrade instructions, but I found José Valim’s recommendation in a reddit thread.
He recommends moving everything in assets/static to priv/static; no longer ignoring priv/static and committing it to version control; and ignoring priv/static/assets instead, since that's where esbuild puts the compiled Javascript and CSS files.

assets.deploy mix alias

Then we add a new mix alias for deployment:
defp aliases do
  [
    ...,
    "assets.deploy": ["esbuild default --minify", "phx.digest"]
  ]
end
As we can see, it has two parts:
- esbuild default --minify compiles and minifies Javascript and CSS into priv/static/assets
- the phx.digest task will add digests for all static assets in priv/static
This is for deployment, so we only need to run it on build servers. For example when deploying to Heroku or the like, make sure it is run.
If you did run it locally, it would generate a whole bunch of digested assets. Since we no longer ignore priv/static, they would show up when you run git status, for example, and could be annoying.
We can remove them with the following command:
mix phx.digest.clean --all
As mentioned in the last section, esbuild puts the compiled Javascript and CSS in the priv/static/assets directory, so we need to update the references to them in the layouts, usually app.html.eex or root.html.eex:
# update
Routes.static_path(@conn, "/js/app.js")
Routes.static_path(@conn, "/css/app.css")
# to
Routes.static_path(@conn, "/assets/app.js")
Routes.static_path(@conn, "/assets/app.css")
Plug.Static configuration

The last step for the new asset pipeline is to update the configuration for Plug.Static.
We need to add the new assets directory to the :only option and also remove js and css from there, since we no longer have them.
plug Plug.Static,
  at: "/",
  from: :my_app,
  gzip: false,
  only: ~w(assets fonts images favicon.ico robots.txt)
With the changes above, the new asset pipeline should be working, which means we are officially free of webpack and node in our Phoenix application :tada:
HEEx templates (optional)

With Phoenix 1.6, the leex templates are deprecated and there is a new HEEx engine, which is used in all the HTML files generated by phx.new, phx.gen.html, etc.
It is HTML-aware and enforces valid markup. It's also more strict about the Elixir expressions inside tags.
In order to use it in an existing Phoenix project, make sure phoenix_live_view is added as a dependency, because the HEEx engine is part of phoenix_live_view.
Then rename all the existing .html.eex and .html.leex templates to .html.heex.
When Elixir expressions appear in the body of HTML tags, HEEx templates use <%= ... %> for interpolation just like EEx templates.
So code in the following example stays the same:
<h2>Hello <%= @name %></h2>
<%= Enum.map(names, fn name -> %>
<li><%= name %></li>
<% end) %>
But when an Elixir expression is used inside a tag, as the attribute value for example:
<div id="<%= @id %>">
It needs to be updated to:
<div id={@id}>
Notice that the EEx interpolation is inside double quotes, while the curly braces are not. So the Elixir expression has to serve as the whole attribute value, not part of it.
With EEx, the Elixir expression can serve as part of the attribute value:
<a href="/prefix/<%= @item.text %>">
In situations like this, directly replacing it with a pair of curly braces won’t work.
# this doesn't work
<a href="/prefix/{@item.text}">
One way I could think of is to interpolate the original Elixir term in a string and then put that string in a pair of curly braces like the following:
<a href={"/prefix/#{@item.text}"}>
Now the resulting string is the Elixir expression and it serves as the whole attribute value, so this works.
After making those changes, the new HEEx templates should be working.
There are changes to the deployment process as well and I’ll cover that for Gigalixir, since that’s where my app is deployed to. But if you use Heroku, the changes should be fairly similar.
Since we no longer use webpack and node to build assets, phoenix_static_buildpack is no longer necessary. We can just get rid of it.
Also we need to make sure the assets.deploy task is run during deployment. We can use hook_post_compile in the Elixir buildpack for that.
Just add this line to your elixir_buildpack.config:
hook_post_compile="eval mix assets.deploy && rm -f _build/esbuild"
With those two changes, my app can be deployed to Gigalixir without any issue.
If you’d like to deploy a new Phoenix 1.6 app to Gigalixir, check out the full guide.
I’d like to quickly mention that Phoenix 1.6 also ships with two new generators:
- phx.gen.auth generates a complete authentication solution for your application
- phx.gen.notifier generates a notifier for sending emails

In my little app, I didn't need to use them. But if you do need an authentication solution or a notifier, check them out.
The release of Phoenix 1.6 is quite exciting, because by default webpack and node are replaced by esbuild. Now we can focus on developing the application in Elixir and Phoenix, without wasting time on breaking changes and nonsense security alerts in node dependencies.
In this post, I walked through how I upgraded my humble little Phoenix app to 1.6, with a focus on updating the asset pipeline to use esbuild and migrating to HEEx templates. Hope this helps someone who's trying to do the same.
Phoenix 1.6 does have other new features that are not covered in this post. For those, make sure to check out the release notes.
In this post, I'll document what I consider the best way to set up a brand new Mac for software development. The primary purpose is to serve as a reference for my future self, but if some readers find it useful, that would be awesome too.
These are just based on my personal experience, so there is no guarantee they'll work well for you too. If you find better ways to do certain steps, please let me know in the comments below or reach out to me directly.
I use Homebrew to install and manage most of the command line tools and GUI apps.
Install it with:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
This script is quite intelligent. It works on Intel or Apple Silicon based Macs and even Linux, and it installs Homebrew to different preferred prefixes based on the situation. For more information, please refer to the Homebrew Installation guide.
Manually installing all the packages needed on a new Mac is tedious, but luckily we don’t have to do that thanks to Homebrew Bundle.
One can create a Brewfile with brew bundle dump and then run brew bundle to install and upgrade all packages from the Brewfile. For more details, please refer to the brew bundle section of the brew man output or brew bundle --help.
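For reference, a Brewfile is just a plain-text list of formulae and casks. A minimal, made-up example might look like this:

# example Brewfile (entries are illustrative only)
brew "git"
brew "coreutils"
brew "fzf"
cask "iterm2"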
I've saved a Brewfile to my dotfiles repository on Github, so I can just download it with:
curl -fL -o Brewfile https://raw.githubusercontent.com/wiserfirst/dotfiles/master/Brewfile
And then run brew bundle to install the packages.
The next couple steps involve cloning from Github, so generating a new SSH key and adding it to my Github account is necessary.
ssh-keygen -t ed25519 -C "your_email@example.com"
Reference: Generating a new SSH key and adding it to the ssh-agent
First copy your SSH public key to clipboard with
pbcopy < ~/.ssh/id_ed25519.pub
Then login to your Github account, go to Settings -> SSH and GPG keys -> New SSH key. Give it a title and paste your key into the “Key” field.
Reference: Adding a new SSH key to your Github account
git clone git@github.com:wiserfirst/dotfiles.git
cd dotfiles
ruby ./install.rb
git clone git@github.com:wiserfirst/maximum-awesome.git
cd maximum-awesome
git checkout qing
rake
For installing Vim plugins separately, just run :PlugInstall in Vim.
git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.8.1
If you don’t already have this in your zshrc, the following is needed:
echo -e '\n. $HOME/.asdf/asdf.sh' >> ~/.zshrc
echo -e '\n. $HOME/.asdf/completions/asdf.bash' >> ~/.zshrc
Now I’d like to install Erlang and Elixir:
asdf plugin-add erlang
asdf plugin-add elixir
# actual Openssl version depends on what's in `brew list`
export KERL_CONFIGURE_OPTIONS="--without-javac --with-ssl=$(brew --prefix openssl@1.1)"
asdf install erlang 23.3.4
asdf install elixir 1.12.3
asdf global erlang 23.3.4
asdf global elixir 1.12.3
Obviously you could install whatever programming languages you need, be that Ruby, Node.js, Python or something else.
For more details on how to do that with asdf, check out my comprehensive guide: How to Use asdf Version Manager on macOS.
git clone --recursive https://github.com/sorin-ionescu/prezto.git "${ZDOTDIR:-$HOME}/.zprezto"
setopt EXTENDED_GLOB
for rcfile in "${ZDOTDIR:-$HOME}"/.zprezto/runcoms/^README.md(.N); do
ln -s "$rcfile" "${ZDOTDIR:-$HOME}/.${rcfile:t}"
done
sudo chsh -s /bin/zsh
git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install
If you are not ready to upgrade to the latest version of macOS, you can stop it from showing up in System Preferences -> Software Update with:
sudo /usr/sbin/softwareupdate --ignore "macOS [version name]"
Here version name could be Catalina, Big Sur or Monterey, depending on which you'd like to ignore.
When you are ready to install the new version, just restore it with:
sudo /usr/sbin/softwareupdate --reset-ignored
After following the steps in this post, there may be things you still need to install or tweak, but the new Mac should be fairly close to ready as the primary development machine.
Surely, these steps are going to evolve over time and I’ll try my best to keep them up-to-date. But again, I don’t do this very often, so they may get out of date.
Anyway, please feel free to take what you need and let me know what you think :slightly_smiling_face:
In this post I talk about how I built a function to get a random element from an array, and compare it with the _.sample function from the popular Javascript library lodash.
I’m building it in Typescript, since the project I am working on already uses Typescript.
The function takes an array and returns a random element from it. The type of array element doesn’t matter here. So the function signature should be something like this:
(arr: any[]) => any
For an array, the lowest valid index is 0 and the highest valid index is arr.length - 1. I could generate a random integer between 0 and arr.length - 1. Then if I use that number as the index into the array, I get back a random element. That should look like this:
Math.floor(Math.random() * arr.length)
So we have our function:
const getRandomElement = (arr: any[]) =>
arr[Math.floor(Math.random() * arr.length)]
It does the job and it's fairly straightforward. But there is actually a small issue, which I'll address later.
_.sample
After implementing it, I remembered that lodash, the popular Javascript library, has a _.sample function, and thought it would be interesting to see how that is implemented.
I’m going to look at the latest version of lodash, which is 4.17.21 as of this writing.
The _.sample function looks like this:
/**
* Gets a random element from `collection`.
*
* @static
* @memberOf _
* @since 2.0.0
* @category Collection
* @param {Array|Object} collection The collection to sample.
* @returns {*} Returns the random element.
* @example
*
* _.sample([1, 2, 3, 4]);
* // => 2
*/
function sample(collection) {
  var func = isArray(collection) ? arraySample : baseSample;
  return func(collection);
}
Apparently it handles not only arrays but also collections in general.
I won't worry about baseSample for collections that are not arrays in this article. But if you like, feel free to check out its source.
For arrays, it delegates to an arraySample function, which looks like this:
/**
* A specialized version of `_.sample` for arrays.
*
* @private
* @param {Array} array The array to sample.
* @returns {*} Returns the random element.
*/
function arraySample(array) {
  var length = array.length;
  return length ? array[baseRandom(0, length - 1)] : undefined;
}
So it checks the array length, and if the length is 0 it returns undefined. Otherwise it's very similar to my implementation: it calls baseRandom with 0 and length - 1 to (presumably) get a random integer, then uses it as the index into the array to get a random element.
In order for this to work, the baseRandom function must generate a random integer between its two arguments.
/**
* The base implementation of `_.random` without support for returning
* floating-point numbers.
*
* @private
* @param {number} lower The lower bound.
* @param {number} upper The upper bound.
* @returns {number} Returns the random number.
*/
function baseRandom(lower, upper) {
  return lower + nativeFloor(nativeRandom() * (upper - lower + 1));
}
And looks like it does!
nativeFloor and nativeRandom here are just aliases for the built-in methods Math.floor and Math.random.
I'm glad to see that the core logic of _.sample for arrays is very similar to the getRandomElement function I came up with. However, it does check the array length, which I forgot to do.
In my use case, getRandomElement is just used as a test helper and the same non-empty array constant is always passed in, so omitting the array length check isn't causing any issue for now.
But for completeness’ sake and also to allow it to be used in other scenarios in the future, I should add the array length check. So the function becomes:
const getRandomElement = (arr: any[]) =>
arr.length ? arr[Math.floor(Math.random() * arr.length)] : undefined
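As a possible further refinement (not needed for my test-helper use case, just a sketch), the any types could be swapped for a generic parameter so that callers get back a properly typed result:

const getRandomElement = <T>(arr: T[]): T | undefined =>
  arr.length ? arr[Math.floor(Math.random() * arr.length)] : undefined

const n = getRandomElement([1, 2, 3]) // inferred as number | undefined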
In this article, I walked through how to implement a function for getting a random element from an array, took a tour of the lodash source code around _.sample, and improved the initial implementation by adding an array length check to make it more complete and robust.
Open source libraries not only come in handy when we need to use them directly in our projects, but also serve as a great resource for learning.
I definitely should look more into source code of open source libraries and frameworks. Maybe you should too :smile:
Before we begin, let's talk about why we might need it in the first place.
Say you work as a developer for a company whose tech stack is Ruby on Rails on the backend and React on the frontend. There are quite a number of repositories for different services and, unsurprisingly, not all of them use the same versions of Ruby or Node.js.
To manage the different versions of Ruby, rbenv is a good tool and for Node.js, you have nvm. Then Python is introduced for some machine learning related tasks, so here comes pyenv.
Three tools to manage versions for three programming languages doesn’t sound too bad, but they all have slightly different command syntax for you to remember and use from time to time. The situation only gets worse with more languages introduced to the mix. For example, what if you want to build a side project with Elixir/Phoenix or learn some Rust.
One version manager for each programming language is still okay for three languages, but once the number reaches five or six, it becomes too much effort.
Luckily there is asdf, and you can replace rbenv, nvm, pyenv and more with just this one tool.
Thanks to its plugin system, asdf is extendable enough for you to install and manage versions of almost all programming languages that you might want to use. And with asdf you only need to learn one set of simple commands to do that.
Furthermore, if you’d like to manage something and there isn’t yet a plugin for it, it’s possible to create a plugin yourself.
With a relatively small core and the powerful plugin system, asdf offers nearly infinite possibilities.
First make sure that coreutils, curl and git are installed:
brew install coreutils curl git
Personally I prefer installing asdf with Git, because it gives complete control and avoids some pitfalls.
Cloning the latest tag is enough:
git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.8.1
v0.8.1 is the latest tag as of September 2021, but obviously that would change over time, so make sure to check its Github repository before you install.
Then for Zsh, add the following to the bottom of ~/.zshrc:
. $HOME/.asdf/asdf.sh
Open a new terminal tab and you should be ready to use asdf :tada:
The alternative is to install asdf with Homebrew:
brew install asdf
If you prefer this method, before continuing, do check out Common Homebrew issues to be aware of potential issues you might run into.
And add the following line to the bottom of your ~/.zshrc:
. $(brew --prefix asdf)/asdf.sh
If you use Bash or Fish shell, please refer to the Add to your Shell section in asdf documentation for instructions.
Before you could install Ruby, Node.js or anything else, you’ll need to add the appropriate plugins. Plugins are how asdf understands handling of different programming languages or, say, packages.
There is an asdf plugins repository and for all the plugins listed there, you can add with just the plugin name. For example, here is how to add the plugins for Ruby and Node.js:
asdf plugin add ruby
asdf plugin add nodejs
If the plugin you want is not part of this repository, you can still add it with its repository URL. For example:
asdf plugin-add elm https://github.com/vic/asdf-elm
You can list installed plugins with:
asdf plugin list
Or list all available plugins from the asdf plugin repository:
asdf plugin list-all
If you've looked through the asdf plugin repository, you may have noticed that there are plugins not only for programming languages, but also for many other cli tools like fzf, minikube, etc.
For the purpose of our discussion here, whether it’s a programming language or something else doesn’t really matter, because the commands for managing them are going to be the same. I’ll just refer to them as programming languages in this post, but please keep in mind that you could use asdf to manage other cli tools as well.
Suppose we want to install the latest stable release of Ruby 2 and the latest LTS release of Node.js, which are 2.7.2 and 14.16.1 respectively as of this writing. We can simply run the following:
asdf install ruby 2.7.2
asdf install nodejs 14.16.1
When you run into issues trying to install a particular language version, make sure to check out the Github repository for the plugin. It’s very likely that you’ll find instructions on how to solve those issues.
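If you are not sure which versions are available in the first place, asdf can list them for you. A few handy commands (shown here for Node.js):

# list all versions the plugin knows about
asdf list-all nodejs

# show the latest stable version
asdf latest nodejs

# list the versions you already have installed
asdf list nodejs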
After installing the first versions, you might also want to set them as global versions for Ruby and Node.js:
asdf global ruby 2.7.2
asdf global nodejs 14.16.1
With this, we've made Ruby 2.7.2 and Node.js 14.16.1 “globally” available for the current user.
In asdf terms, “global” means default everywhere. So unless it’s overridden with either a local or shell version, which are covered in the following sections, asdf will assume the global version is the one to use.
Suppose we have a legacy project that we need to maintain and it only runs on Node.js 10. What we can do with asdf is to install Node.js 10 and set a local version in the project directory:
asdf install nodejs 10.22.0
# run in the project directory
asdf local nodejs 10.22.0
With this local version set, when you are in the legacy project directory or its subdirectories, asdf will automatically switch to Node.js 10.22.0; when you are in any other directory, it'll fall back to the global Node.js version, unless of course another local Node.js version is set there.
I had a fairly interesting situation at work recently. On this project, the backend server and frontend client each lives in a subdirectory in the same repository and we are in the process of developing a new client app to replace the old one.
Normally I just run the server and new client, both of which run on Node.js 14. This time I needed to run the old client to confirm some behaviours on a page, but it requires Node.js 10.
In order to run the old client together with the server, I made another copy of the whole project, set a local Node.js version of 10.22.0 in the new directory and ran the old client there. For the server, since the local Node.js version is already set to 14.16.1 in the original project directory, I could still start it as normal.
That certainly worked fine for me. But later I learned that there is a much simpler way: to use an asdf shell version. Without making an extra copy of the project, I could simply start a new shell session in the project directory and set a shell version for Node.js by:
cd path/to/project
asdf shell nodejs 10.22.0
# run old client
This shell version only affects the current shell session, nothing else.
As for the server, just running it in another shell session would do.
So basically asdf allows you to select different versions of programming languages on a per directory basis, and on top of that you have the option to set a shell version which only affects the current shell session.
I think that should be flexible enough for anyone to cope with most of (if not all) the situations they’ll ever encounter.
To someone who’s new that might sound like magic, but in fact how asdf works is actually quite straightforward.
When you set a global version for a programming language, asdf will add or update a line for that language in a .tool-versions file under the current user's home directory. If the file doesn't already exist, it'll create it first and then add the new line.
If you've followed this post to install asdf, install Ruby and Node.js, and then set the global versions, the .tool-versions file in your home directory should look like the following:
# cat ~/.tool-versions
nodejs 14.16.1
ruby 2.7.2
When you set a local version in a directory, asdf will add or update a line for the language in a .tool-versions file under that directory. Same as the global .tool-versions file, it'll be created if it doesn't already exist.
Say you do have that legacy project where Node.js 10.22.0 is required and therefore you've set a local version for Node.js in the project directory. The .tool-versions file under the project directory should look like this:
# cd path/to/project
# cat .tool-versions
nodejs 10.22.0
If you're working on a personal project or your team has adopted asdf, it would be a very good idea to commit the .tool-versions file to Git or whatever version control system you use.
On the other hand, if your team hasn’t reached an agreement on adopting asdf, I’d recommend adding it to .gitignore
and keeping it locally without committing to version control. The Migrate from Legacy Tools section might offer more useful information, if you found yourself in situations like this.
How shell versions work is even simpler in my opinion. When you set one, asdf will set an environment variable ASDF_${LANG}_VERSION for the current session.
For example, when I set a shell version for Node.js to 10.22.0, asdf creates an environment variable named ASDF_NODEJS_VERSION with the value 10.22.0 in my shell session.
Given that’s how it works, setting the environment variable for a particular language directly in a shell session or even for just one command would work too.
The following example starts the Rails server with Ruby version 2.5.3:
ASDF_RUBY_VERSION=2.5.3 bundle exec rails server
When you run node, for example, asdf will look for a .tool-versions file in the current directory, then the parent directory, then the parent's parent directory, and so on. If it finds one that specifies a local Node.js version, it'll use that version. If it can't find one, it'll fall back to the global version set in the .tool-versions file under the current user's home directory. So the logic is quite straightforward.
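As a quick illustration (the paths here are made up), that resolution looks roughly like this in practice:

cd ~/workspace/legacy-project    # contains a .tool-versions with "nodejs 10.22.0"
node --version                   # v10.22.0, from the project's .tool-versions

cd ~
node --version                   # v14.16.1, from ~/.tool-versions (the global version)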
You can run asdf current to get a list of the current versions of installed programming languages in the current directory. For example, say we are in the legacy project directory, where a local Node.js version is set but no Ruby version is set. What you get should be something like this:
# asdf current
nodejs 10.22.0 /Users/username/workspace/legacy-project/.tool-versions
ruby 2.7.2 /Users/username/.tool-versions
Whereas if you run it in another directory, assuming no local versions are set, what you get is slightly different:
# asdf current
nodejs 14.16.1 /Users/username/.tool-versions
ruby 2.7.2 /Users/username/.tool-versions
As you can see, the output not only tells you what the current versions are, but also shows which .tool-versions file asdf got each version from.
I reckon this command could come in handy when you try to figure out where a particular local version is set.
Coming back to the scenario mentioned at the beginning of this article, where you work for a company which uses Ruby on Rails for the backend and React for the frontend, and different projects might have different language version requirements.
After introducing asdf you no longer have to deal with different tools for managing versions of different programming languages, which is great. But obviously when starting to work on a different project for the first time, everyone still needs to get the correct local versions installed.
What I used to do is check which versions are specified in the .tool-versions file of the project and then manually install them. For example, if the file has
nodejs 10.22.0
ruby 2.5.3
Running the following should do it:
asdf install nodejs 10.22.0
asdf install ruby 2.5.3
But this feels slightly tedious and it is.
Luckily, as it turns out, there is a much better way: running asdf install without any arguments. If a required version is not installed yet, asdf will go ahead and install it; if a version is already installed, it will tell you that and do nothing.
I think this is a rather neat trick for installing the specified local versions for a project, therefore it makes this aspect of onboarding new team members to a project pretty much painless.
If you are sold on asdf but for whatever reason can’t adopt it at work, there is a configuration option that could allow you to still use it.
What you need to do is create a .asdfrc file in your home directory with the following content:
legacy_version_file = yes
Setting this to yes will cause asdf plugins to read “legacy” version files, for example .ruby-version for Ruby and .nvmrc or .node-version for Node.js.
This is especially helpful when you are in a team where your teammates don’t want to change to a different tool for managing the programming languages. Change is hard even when there are obviously benefits, so expect resistance if people are not already familiar with asdf.
With this setting though, they can continue to use the legacy tools they prefer, but you would have the option to use asdf if you want.
Note: not all plugins support this feature. If you rely on this behaviour, please do check the documentation of the plugins you use.
Hopefully one day they’ll start noticing conveniences of asdf and change their minds, at which point the whole team could fully adopt asdf and enjoy the benefits it brings.
In this post, I covered:
- why a single version manager like asdf is worth having
- how to install asdf and add plugins
- installing language versions and setting global, local and shell versions
- how asdf works under the hood with .tool-versions files and environment variables
- installing all of a project's versions with a bare asdf install, and reading legacy version files
While there are definitely aspects of asdf that I didn’t cover, this should be a solid starting point for someone new. After reading this post and following along, you should be able to start using asdf with confidence. If you do run into issues, check out asdf documentation and Google is your friend.
With asdf, one could manage different versions of all the programming languages that they might need without any trouble, and also it makes sharing a common set of programming language versions across a team for a project very easy.
Because of the conveniences it offers, I’m a big fan of asdf and I truly believe that every developer should use it or at least know it as a potential option to consider.
In order to save some space, I'd like to convert the .mov files to .mp4.
There are various online tools that I can use for the conversion, but uploading the original videos and downloading the resulting videos would take a long time, especially considering I have multiple videos that are several Gigabytes. So the online tools aren’t right for me.
Luckily there is a neat cli tool named ffmpeg that can do the trick. If you don't already have it, you can install it (on macOS) with:
brew install ffmpeg
Or if you are on Linux, most likely you can install it with your package manager; if not, go to its download page to find the appropriate installer.
To convert a .mov file to .mp4, you can run
ffmpeg -i input-video-name.mov -vcodec h264 output-video-name.mp4
For more details, please refer to the ffmpeg documentation.
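If you also want to transcode the audio stream to AAC at the same time (which is what the script below does), you can add an audio codec flag as well:

ffmpeg -i input-video-name.mov -vcodec h264 -acodec aac output-video-name.mp4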
This is good enough if there are only a handful of videos to convert, but it can become tedious to run the command manually for say 20 videos. So I created a quick and dirty Ruby script for converting all the .mov or .avi videos in a directory. And yes, thanks to ffmpeg, the same command can work with .avi videos as well.
#!/usr/bin/env ruby

require 'shellwords'

def usage
  puts <<~HEREDOC
    Usage:
      ./video-converter.rb [dir]
    to convert mov/avi files to mp4 with H.264 video codec and AAC audio codec
  HEREDOC
end

if ARGV.length > 1
  usage
  exit 1
end

dir = ARGV.length == 1 ? ARGV[0] : '.'

unless Dir.exist?(dir)
  puts "\e[31mDirectory #{dir} not found\e[0m"
  exit 1
end

Dir.chdir(dir)
# the output directory lives inside the target directory, so resolve it after chdir
output_dir = 'output'
Dir.mkdir(output_dir) unless Dir.exist?(output_dir)

Dir.glob('*.{avi,mov,mp4}') do |original_filename|
  basename = File.basename(original_filename, '.*')
  filename = Shellwords.escape(original_filename)
  output_path = File.join(output_dir, "#{basename}.mp4") # keep for display
  escaped_output_path = Shellwords.escape(output_path)   # use for the shell command

  puts "\n\e[32mConverting #{original_filename} to #{output_path}\e[0m"
  system("ffmpeg -i #{filename} -vcodec h264 -acodec aac #{escaped_output_path}")
end
Note: Script updated on 2024-02-26 to: a) create an output directory if it does not already exist, b) escape special characters (such as spaces) for shell commands, and c) add support for converting .mp4 files in addition to .mov and .avi files.
I understand there are various ways to improve this script to make it more flexible/robust, but for now this is good enough for my purpose and hopefully it is useful for someone else too.
The legacy SnipMate parser is deprecated. Please see :h SnipMate-deprecate
If you follow the instruction and run :h SnipMate-deprecate, you'll see the following in a help window:
The legacy parser, version 0, is deprecated. It is currently still the default parser, but that will be changing. NOTE that switching which parser you use could require changes to your snippets–see the previous section.
To continue using the old parser, set g:snipMate.snippet_version (see |SnipMate-options|) to 0 in your |vimrc|.
Setting g:snipMate.snippet_version to either 0 or 1 will remove the start up message. One way this can be done–to use the new parser–is as follows:
let g:snipMate = { ‘snippet_version’ : 1 }
Basically there is a new parser in snipMate, but the deprecated legacy parser is still the default, which would cause this warning. Explicitly setting the parser version to either 0 for the old parser or 1 for the new parser would remove this start up warning message.
There doesn't seem to be a reason not to use the new parser, so I just added the following to my .vimrc:
let g:snipMate = { 'snippet_version' : 1 }
Now the annoying warning upon starting Vim is gone :tada:
git pull # or git fetch
git checkout <branch-name>
This works because git uses what's called “remote references” to keep track of the last known state of remote branches, which are essentially read-only bookmarks. In this case, git pull would create a new remote reference for the new remote branch, apart from updating existing remote references. Then git checkout ... would create a new local branch that tracks the new remote branch and switch to it.
That's all well and good until there are too many branches in the codebase, which is not at all uncommon when working in a reasonably sized team. Git automatically creates remote references for all known remote branches, but it doesn't automatically remove stale remote references when the remote branches are deleted. This annoys me because the stale remote references might mess with my auto-completion for branch names. After some Googling, I managed to find a way to remove them for the default remote connection origin:
git remote prune origin
Also the following command lists remote references:
git remote show origin
After sharing my findings with my colleagues, they pointed out that passing the --prune option to git pull or git fetch would do the trick as well. As mentioned in this nice tutorial for git prune, the following:
git fetch --prune
is the same as:
git fetch --all && git remote prune
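If you find yourself always wanting this behaviour, git also lets you turn on pruning for every fetch by default, so you don't have to remember the flag:

git config --global fetch.prune true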