Erlang

Organisation: Copyright (C) 2021-2024 Olivier Boudeville
Contact: about (dash) howtos (at) esperide (dot) com
Creation date: Saturday, November 20, 2021
Lastly updated: Thursday, February 1, 2024

Overview

Erlang is a concurrent, functional programming language available as free software; see its official website for more details.

Erlang is dynamically typed, and is executed by the BEAM virtual machine. This VM (Virtual Machine) operates on bytecode and can perform Just-In-Time compilation. It also powers other related languages, such as Elixir and LFE.

Let's Start with some Shameless Advertisement for Erlang and the BEAM VM

Taken from this presentation:

Hint

What makes Elixir StackOverflow’s #4 most-loved language?

What makes Erlang and Elixir StackOverflow’s #3 and #4 best-paid languages?

How did WhatsApp scale to billions of users with just dozens of Erlang engineers?

What’s so special about Erlang that it powers CouchDB and RabbitMQ?

Why are multi-billion-dollar corporations like Bet365 and Klarna built on Erlang?

Why do PepsiCo, Cars.com, Change.org, Boston’s MBTA, and Discord all rely on Elixir?

Why was Elixir chosen to power a bank?

Why does Cisco ship 2 million Erlang devices each year? Why is Erlang used to control 90% of Internet traffic?

Installation

Erlang can be installed thanks to the various options listed in these guidelines.

Building Erlang from the sources of its latest stable version is certainly the best approach; for more control we prefer relying on our custom procedure.

For a development activity, we also recommend specifying the following options to our conf/install-erlang.sh script:

Run ./install-erlang.sh --help for more information.

Once installed, ensure that ~/Software/Erlang/Erlang-current-install/bin/ is in your PATH (e.g. by enriching your ~/.bashrc accordingly), so that you can run erl (the Erlang interpreter) from any location, resulting in a prompt like:

$ erl
Erlang/OTP 24 [erts-12.1.5] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [jit]

Eshell V12.1.5  (abort with ^G)
1>

Then enter CTRL-C twice in order to come back to the (UNIX) shell.

Congratulations, you have a functional Erlang now!

To check from the command-line the version of an Erlang install:

$ erl -eval '{ok, V} = file:read_file( filename:join([code:root_dir(), "releases", erlang:system_info(otp_release), "OTP_VERSION"]) ), io:fwrite(V), halt().' -noshell
24.2

Ceylan's Language Use

Ceylan users shall note that most of our related developments (namely Myriad, WOOPER, Traces, LEEC, Seaplus, Mobile, US-Common, US-Web and US-Main) depart significantly from the general conventions observed by most Erlang applications:

Using the Shell

Even though directly running erl is as simple, with Ceylan settings we prefer running make shell, in order to benefit from a well-initialised VM (notably with the full code path of the current layer and the ones below).

Refer then to the shell commands, notably for loading record definitions, like in:

1> rr(code:lib_dir(xmerl) ++ "/include/xmerl.hrl").

See also the JCL mode (for Job Control Language) to connect and interact with other Erlang nodes.

About Security

OTP Guidelines

The .app Specification

For an overall application named Foobar, we recommend defining in conf/foobar.app.src an application specification template that, once properly filled regarding the version of that application and the modules that it comprises (possibly automatically done thanks to the Ceylan-Myriad logic), will result in an actual application specification file, foobar.app.

Such a file is necessary in all cases: to generate an OTP application (otherwise with rebar3 nothing will be built), an OTP release (otherwise the application dependencies will not be reachable), and probably a hex package as well.

This specification content is to end up in various places:

  • in ebin/foobar.app
  • if using rebar3, in the OTP build tree (by default: ./_build/lib/foobar/ebin/foobar.app)
  • with src/foobar.app.src being a symbolic link pointing to ebin/foobar.app (probably at least for hex)
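For illustration, here is a minimal, hypothetical foobar.app that such a template could produce once filled; all values (description, version, module list) are mere placeholders:

```erlang
%% Hypothetical ebin/foobar.app specification, as resulting from
%% conf/foobar.app.src; all values are placeholders:
{application, foobar, [
    {description, "The Foobar application"},
    {vsn, "1.0.0"},
    {modules, [foobar_app, foobar_sup, foobar]},
    {registered, []},
    {applications, [kernel, stdlib]},
    {mod, {foobar_app, [hello]}},
    {env, []}
]}.
```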

Starting OTP Applications

For an OTP active application of interest - that is one that provides an actual service, i.e. running processes, as opposed to a mere library application, which provides only code - such a specification defines, among other elements, which module will be used to start this application. We recommend naming this module according to the target application and suffixing it with _app, like in:

{application, foobar, [
    [...]
    {mod, {foobar_app, [hello]}},
    [...]
]}.

This implies that, once user code calls application:start(foobar), foobar_app:start(_Type=normal, _Args=[hello]) will be called in turn.

This start/2 function, together with its stop/1 reciprocal, are the callbacks listed by the OTP (active) application behaviour; if only for clarity, it is better that foobar_app.erl includes -behaviour(application).
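Following these conventions, a minimal foobar_app module could look as follows (a mere sketch, foobar_sup designating here the root supervisor module discussed in the next sections):

```erlang
%% Hypothetical foobar_app module (a sketch), implementing the
%% OTP 'application' behaviour:
-module(foobar_app).

-behaviour(application).

-export([start/2, stop/1]).

%% Called in turn after application:start(foobar); the arguments
%% (here [hello]) come from the {mod, ...} entry of foobar.app:
start(_Type=normal, _Args=[hello]) ->
    % Expected to return {ok, RootSupervisorPid}:
    foobar_sup:start_link().

%% Called once the application has stopped:
stop(_State) ->
    ok.
```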

Pre-Launch Functions

The previous OTP callbacks may be called by special-purpose launching code; we tend to define an exec/0 function for that: then, with the Myriad make system, executing on the command-line make foobar_exec results in foobar_app:exec/0 being called.

Having such a pre-launch function is useful when having to set specific information beforehand (see application:set_env/{1,2}) and/or when starting by oneself applications (e.g. see otp_utils:start_applications/2).

In any case this should result in foobar_app:start/2 being called at application startup, a function whose purpose is generally to spawn the root supervisor of this application.
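As an illustrative sketch of such a pre-launch function (the log_level environment key being a made-up example):

```erlang
%% Hypothetical pre-launch function (e.g. triggered by
%% 'make foobar_exec'); the 'log_level' key is a made-up example:
-spec exec() -> ok.
exec() ->
    % Any settings to be applied beforehand:
    ok = application:set_env(foobar, log_level, info),
    % Starts first the applications that foobar depends on, then
    % foobar itself (hence calling foobar_app:start/2 in the end):
    {ok, _StartedApps} = application:ensure_all_started(foobar),
    ok.
```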

Note that, alternatively (perhaps for some uncommon debugging needs), one may execute one's application (e.g. foo) by oneself, knowing that doing so requires starting beforehand the applications it depends on - be they Erlang-standard (e.g. kernel, stdlib) or user-provided (e.g. bar, buz); for that both their modules [3] and their .app file [4] must be found.

[3]If using Ceylan-Myriad, run, from the root of foo, make copy-all-beams-to-ebins to populate the ebin directories of all layers (knowing that by default each module is only to be found directly from its source/build directory, and thus such a copy is usually unnecessary).
[4]If using Ceylan-Myriad, run, from the root of foo, make create-app-file.

This can be done with:

$ erl -pa XXX/bar/ebin -pa YYY/buz/ebin -pa ZZZ/foo/ebin

Erlang/OTP 26 [erts-14.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [jit:ns]

Eshell V14.2 (press Ctrl+G to abort, type help(). for help)
1> application:ensure_all_started([kernel, stdlib, bar, buz, foo]).

Then the foo application shall be launched, and a shell be available to interact with the corresponding VM.

OTP Supervisors

The purpose of supervisors is to ease the development of fault-tolerant applications by building hierarchical process structures called supervision trees.

For that, supervisors are to monitor their children, that may be workers (typically implementing the gen_{event,server,statem} behaviour) and/or other supervisors (they can thus be nested).

We recommend defining a foobar_sup:start_link/0 function (it is a user-level API, so any name and arity can be used). This foobar_sup module is meant to implement the supervisor behaviour (to be declared with -behaviour(supervisor).), which in practice requires an init/1 function to be defined.

So this results, in foobar_sup, in a code akin to:

-spec start_link() -> supervisor:startlink_ret().
start_link() ->
   % This will result in calling init/1 next:
   supervisor:start_link( _Registration={local, my_foobar_main_sup},
                          _Mod=?MODULE, _Args=[]).

-spec init(list()) -> {'ok', {supervisor:sup_flags(), [supervisor:child_spec()]}}.
init(_Args=[]) ->
   [...]
   {ok, {SupSettings, ChildSpecs}}.

Declaring Worker Processes

Our otp_utils module may help a bit in defining proper restart strategies and child specifications, i.e. the information regarding the workers that will be supervised - here by this root supervisor.

For example it could be:

init(_Args=[]) ->
   [...]
   SupSettings = otp_utils:get_supervisor_settings(
                   _RestartStrategy=one_for_one, ExecTarget),
   % Always restarted in production:
   RestartSettings = otp_utils:get_restart_setting(ExecTarget),
   WorkerShutdownDuration =
       otp_utils:get_maximum_shutdown_duration(ExecTarget),
   % First child, the main Foobar worker process:
   MainWorkerChild = #{
       id => foobar_main_worker_id,
       start => {_Mod=foobar, _Fun=start_link,
                 _MainWorkerArgs=[A, B, C]},
       restart => RestartSettings,
       shutdown => WorkerShutdownDuration,
       type => worker,
       modules => [foobar] },
   ChildSpecs = [MainWorkerChild],
   {ok, {SupSettings, ChildSpecs}}.

Children are created synchronously and in the order of their specification [5].

[5]Yet some interleaving is possible thanks to proc_lib:init_ack/1.

So if ChildSpecs=[A, B, C], then a child according to the A spec is first created, then, once it is over (either its init/1 finished successfully, or it called proc_lib:init_ack/{1,2} [6] and then continued its own initialisation concurrently), a child according to the B spec is created, then once done a child according to the C spec.

[6]Typically: proc_lib:init_ack(self()).

Implementing Worker Processes

Such a worker, which can be any Erlang process (implementing an OTP behaviour like gen_server, or not), will thus be spawned here through a call to the foobar:start_link/3 function (another user-defined API) made by this supervisor. This is a mere call (an apply/3), not a spawn of a child process based on that function.

Therefore the called function is expected to create the worker process by itself, like, in the foobar module:

start_link(A, B, C) ->
   [...]
   {ok, proc_lib:start_link(?MODULE, _Func=init,
       _Args=[U, V], _Timeout=infinity, SpawnOpts)}.

The spawned worker will thus start by executing foobar:init/2, a function not expected to return, often trapping EXIT signals (process_flag(trap_exit, true)), setting system flags and, once properly initialised, notifying its supervisor that it is up and running (e.g. proc_lib:init_ack(_Return=self())), before usually entering a tail-recursive loop.
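Putting the pieces together, a minimal sketch of such an init/2 function and its main loop could be (names taken from the snippet above; the actual loop is of course application-specific):

```erlang
%% Sketch of the worker's entry point, as spawned by
%% proc_lib:start_link/5 from foobar:start_link/3:
init(U, V) ->
    % So that EXIT signals (e.g. 'shutdown', sent by the
    % supervisor) are received as plain messages:
    erlang:process_flag(trap_exit, true),
    % (any further initialisation here)
    % Notifies the supervisor that this worker is up and running:
    proc_lib:init_ack(_Return=self()),
    main_loop(U, V).

%% Tail-recursive main loop, hence running in constant stack space:
main_loop(U, V) ->
    receive
        {'EXIT', _SupervisorPid, shutdown} ->
            % Just stop (no recursive call):
            ok;
        _OtherMessage ->
            main_loop(U, V)
    end.
```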

Terminating Workers

Depending on the shutdown entry of its child specification, on application stop that worker may be terminated in different ways. We tend to prefer specifying a maximum shutdown duration: the supervisor will then first send the worker a shutdown EXIT message, which this worker may handle, typically in its main loop:

receive
   [...]

   {'EXIT', _SupervisorPid, shutdown} ->
      % Just stop.
      [...]

If the worker fails to stop (on time, or at all) and properly terminate, it will then be brutally killed by its supervisor.

Supervisor Bridges

Non-OTP processes (e.g. WOOPER instances) can act as supervisors thanks to the supervisor_bridge module.

Such a process shall implement the supervisor_bridge behaviour, namely init/1 and terminate/2. If the former function spawns a process, the latter shall ensure that this process terminates in a synchronous manner, otherwise race conditions may happen.

See traces_bridge_sup for an example thereof.
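A minimal sketch of such a bridge is shown below; the foobar_bridge_sup name, the foobar_worker module and its start_link/0 and stop_synchronously/1 functions are hypothetical placeholders:

```erlang
%% Hypothetical bridge, allowing a non-OTP process to be placed
%% under an OTP supervision tree:
-module(foobar_bridge_sup).

-behaviour(supervisor_bridge).

-export([start_link/0, init/1, terminate/2]).

start_link() ->
    supervisor_bridge:start_link({local, ?MODULE}, ?MODULE, _Args=[]).

%% Spawns the actual, non-OTP process to be supervised:
init(_Args=[]) ->
    WorkerPid = foobar_worker:start_link(),
    {ok, WorkerPid, _State=WorkerPid}.

%% Shall make the bridged process terminate in a synchronous
%% manner, otherwise race conditions may happen:
terminate(_Reason, _State=WorkerPid) ->
    foobar_worker:stop_synchronously(WorkerPid).
```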

Extra Information

One may refer to:

More Advanced Topics

Metaprogramming

Metaprogramming is to be done in Erlang through parse transforms, which are user-defined modules that transform an AST (short for Abstract Syntax Tree: an Erlang term that represents actual code; see the Abstract Format for more details) into another AST that is fed afterwards to the compiler.

See also:

Improper Lists

A proper list is created from the empty one ([], also known as "nil") by prepending (with the | operator, a.k.a. "cons") elements in turn; for example [1,2] is actually [1 | [2 | []]].

However, instead of enriching a list from the empty one, one can start a list with any term other than [], for example my_atom. Then, instead of [2|[]], [2|my_atom] may be specified, and will indeed be a list - albeit an improper one.

Many recursive functions expect proper lists, and will fail (typically with a function_clause error) if given an improper list to process (e.g. lists:flatten/1).
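This can be experimented directly from an Erlang shell (error output abridged here):

```erlang
1> ProperList = [1 | [2 | []]].
[1,2]
2> ImproperList = [1 | [2 | my_atom]].
[1,2|my_atom]
3> lists:flatten(ImproperList).
** exception error: no function clause matching ...
```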

So, why not ban such a construct? Why do even standard modules like digraph rely on improper lists?

The reason is that improper lists are a way to reduce the memory footprint of some data structures, by storing a value of interest instead of the empty list.

Indeed, as explained in this post, a (proper) list of 2 elements will consume:

  • 1 list cell (2 words of memory) to store the first element and a pointer to the second cell
  • 1 list cell (2 more words) to store the second element and the empty list

For a total of 4 words of memory (so, on a 64-bit architecture, it is 32 bytes).

As for an improper list of 2 elements, only 1 list cell (2 words of memory) will be consumed to store the first element and then the second one.

Such a solution is even more compact than a pair (a 2-element tuple), which consumes 2+1 = 3 words. Accessing the elements of an improper list is also faster (only a handle has to be inspected, instead of a header as well).

Finally, for sizes expressed in bytes:

1> system_utils:get_size([2,my_atom]).
40

2> system_utils:get_size({2,my_atom}).
32

3> system_utils:get_size([2|my_atom]).
24

See also the 1, 2 pointers for more information.

Everyone shall decide on whether relying on improper lists is a trick, a hack or a technique to prohibit.

OpenCL

Open Computing Language is a standard interface designed to program many processing architectures, such as CPUs, GPUs, DSPs, FPGAs.

OpenCL is notably a way of using one's GPU to perform more general-purpose processing than typically the rendering operations allowed by GLSL (even compared to its compute shaders).

In Erlang, the cl binding is available for that.

A notable user thereof is Wing3D; one may refer to the *.cl files in this directory, but also to its optional build integration as a source of inspiration, and to wings_cl.erl.

Post-Mortem Investigations

Erlang programs may fail, and this may result in mere (Erlang-level) crashes (the VM detects an error, and reports information about it, possibly in the form of a crash dump) or (sometimes, quite infrequently though) in more brutal, lower-level core dumps (the VM crashes as a whole, like any faulty program run by the operating system); this last case happens typically when relying on faulty NIFs.

Erlang Crash Dumps

If experiencing "only" an Erlang-level crash, an erl_crash.dump file is produced in the directory whence the executable (generally erl) was launched. The best way to study it is to use the cdv (refer to crashdump viewer) tool, available, from the Erlang installation, as lib/erlang/cdv [7].

[7]Hence, according to the Ceylan-Myriad conventions, in ~/Software/Erlang/Erlang-current-install/lib/erlang/cdv.

Using this debug tool is as easy as:

$ cdv erl_crash.dump

Then, through the wx-based interface, a rather large amount of Erlang-level information will be available (processes, ports, ETS tables, nodes, modules, memory, etc.) to better understand the context of this crash and hopefully diagnose its root cause.

Core Dumps

In the worst cases, the VM will crash like any other OS-level process, and generic (non Erlang-specific) tools will have to be used. Do not expect to be pointed to line numbers in Erlang source files anymore!

Refer to our general section dedicated to core dumps for that.

Language Bindings

The two main approaches in order to integrate third-party code with Erlang are to:

Language Implementation

Message-Passing: Copying vs Sharing

Knowing that, in functional languages such as Erlang, terms ("variables") are immutable, why could they not be shared between local processes when sent through messages, instead of being copied in the heap of each of them, as is actually the case with the Erlang VM?

The reason lies in the fact that, beyond the constness of these terms, their life-cycle also has to be managed. If they are copied, each process can very easily perform its (concurrent, autonomous) garbage collections. On the contrary, if terms were shared, then reference counting would be needed to deallocate them properly (neither too soon, nor never at all), which, in a concurrent context, is bound to require locks.

So a trade-off between memory (due to data duplication) and processing (due to lock contention) has to be found, and, at least for most terms (except larger binaries), the sweet spot consists in sacrificing a bit of memory in favour of a lesser CPU load. Solutions like persistent_term may address situations where more specific needs arise.

Just-in-Time Compilation

This long-awaited feature, named BeamAsm and whose rationale and history have been detailed in these articles, has been introduced in Erlang 24, and shall transparently lead to increased performance for most applications.

Static Typing

Static type checking can be performed on Erlang code; the usual course of action is to use Dialyzer - albeit other solutions like Gradualizer and also eqWAlizer exist, and are mostly complementary (see also 1 and 2).

More precisely:

  • Dialyzer is a discrepancy analyzer that aims to prove the presence of type-induced runtime crashes (it may not be able to detect all type problems, yet "is never wrong"); Dialyzer does not use type specifications to guide the analysis: first it infers type information, and only then, if requested, does it check that information against the type specifications; so Dialyzer may operate with or without type specifications
  • whereas tools like Gradualizer and eqWAlizer are more conventional type systems, based on the theory of gradual typing, that aim to prove the absence of such crashes; notably Gradualizer depends intimately on type specifications: by default, without them, no static typing happens

See EEP 61 for further typing-related information [8].

[8]On a side note, the (newer) dynamic() type mentioned there is often used to mark "inherently dynamic code", like reading from ETS, message passing, deserialization and so on.

Also, a few statically-typed languages can operate on top of the Erlang VM, even if none has yet reached the popularity of Erlang or Elixir (which are dynamically-typed).

In addition to the increased type safety that statically-typed languages permit (possibly applying to sequential code but also to inter-process messages), it is unclear whether such extra static awareness may also lead to better performance (especially now that the standard compiler supports JIT).

Beyond mere code, the messages exchanged between processes could also be typed and checked. Version upgrades could benefit from it as well. Of course type-related errors are only a subset of software errors.

Note that developments that rely on parse-transforms (almost all ours, directly or not) shall be verified based on their BEAM files (hence their actual, final output) rather than on their sources (as the checking would be done on code not transformed yet). See also the Type-checking Myriad section.

About Dialyzer

Installing Dialyzer

Nothing is to be done, as Dialyzer is included in the standard Erlang distribution.

Using Dialyzer

Our preferred options (beyond path specifications of course) are: -Wextra_return -Wmissing_return -Wno_return -Werror_handling -Wno_improper_lists -Wno_unused -Wunderspecs. See the DIALYZER_OPTS variable in Myriad's GNUmakevars.inc and the copiously commented options.

A problem is that typing errors tend to snowball: many false positives may be reported (functions that are correct, but whose calls are not, because of an error upward in the call graph), leading to the infamous Function f/N has no local return, which does not tell much.

We recommend focusing on the first error reported for each module, and re-running the static analysis once supposedly fixed.
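As an illustration, a typical command-line use of Dialyzer with these options could be as follows (a sketch: the application list and the ebin path are to be adapted; a PLT has to be built once beforehand):

```shell
# Build once a PLT covering the standard applications of interest
# (list to be adapted):
$ dialyzer --build_plt --apps erts kernel stdlib

# Then analyse one's BEAM files with the aforementioned options:
$ dialyzer -Wextra_return -Wmissing_return -Wno_return \
    -Werror_handling -Wno_improper_lists -Wno_unused -Wunderspecs \
    ebin/*.beam
```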

About Gradualizer

Installing Gradualizer

We install Gradualizer that way:

$ cd ~/Software
$ mkdir gradualizer && cd gradualizer
$ git clone https://github.com/josefs/Gradualizer.git
$ cd Gradualizer
$ make escript

Then just ensure that the ~/Software/gradualizer/Gradualizer/bin directory is in your PATH (e.g. set in your .bashrc).

Using Gradualizer

Our preferred options (beyond path specifications of course) are: --infer --fmt_location verbose --fancy --color always --stop_on_first_error. See the GRADUALIZER_OPTS variable in Myriad's GNUmakevars.inc and the copiously commented options.

About eqWAlizer

It is a tool developed in Rust.

Installing eqWAlizer

We install eqWAlizer that way:

$ cd ~/Software
$ mkdir -p eqwalizer && cd eqwalizer
$ wget https://github.com/WhatsApp/eqwalizer/releases/download/vx.y.z/elp-linux.tar.gz
$ tar xvf elp-linux.tar.gz && mkdir -p bin && /bin/mv -f elp bin/

Then just ensure that the ~/Software/eqwalizer/bin directory is in your PATH (e.g. set in your .bashrc).

Using eqWAlizer

We use it out of a rebar3 context.

Settings are to be stored in a JSON file (e.g. conf/foobar-for-eqwalizer.json), to be designated thanks to the --project option.

See also the EQWALIZER_OPTS variable in Myriad's GNUmakevars.inc and its own myriad-for-eqwalizer.json project file.

(our first test was not successful, we will have to investigate more when time permits)

Software Robustness

Type correctness is essential, yet of course it does not guarantee that a program is correct and relevant. Other approaches, like the checking of other properties (notably concurrency, see Concuerror) can be very useful.

Beyond checking, testing is also an invaluable help for bug-fixing. Various tools may help, including QuickCheck.

Finally, not all errors can be anticipated, from network outages and hardware failures to human factors. An effective last line of defence is to rely (this time at runtime) on supervision mechanisms, in order to detect any kind of faults (bound to happen, whether expected or not) and overcome them. The OTP framework is an excellent example of such a system, most useful to reach higher levels of robustness, including the well-known nine nines - that is an availability of 99.9999999%.

Intermediate Languages

To better discover the inner workings of the Erlang compilation, one may look at the eplaypen online demo (whose project is here) and/or at the Compiler Explorer (which supports the Erlang language among others).

Both of them allow one to read the intermediate representations involved when compiling Erlang code (BEAM stage, erl_scan, preprocessed sources, abstract code, Core Erlang, Static Single Assignment form, BEAM VM assembler opcodes, x86-64 assembler generated by the JIT, etc.).

Short Hints

Dealing with Conditional Availability

Regarding modules, and from the command-line

Depending on how Erlang was built in a given environment, some modules may or may not be available.

A way of determining availability and/or version of a module (e.g. wx, cl, crypto) from the command-line:

$ erl -noshell -eval 'erlang:display(code:which(wx))' -s erlang halt
"/home/bond/Software/Erlang/Erlang-24.2/lib/erlang/lib/wx-2.1.1/ebin/wx.beam"

$ erl -noshell -eval 'erlang:display(code:which(cl))' -s erlang halt
non_existing

A corresponding in-makefile test, taken from Wings3D:

# Check if OpenCL package is an external dependency:
CL_PATH = $(shell $(ERL) -noshell -eval 'erlang:display(code:which(cl))' -s erlang halt)
ifneq (,$(findstring non_existing, $(CL_PATH)))
   # Add it if not found:
   DEPS=cl
endif

Regarding language features, from code

Some features appeared in later Erlang versions, and may be conditionally enabled.

For example:

FullSpawnOpts = case erlang:system_info(version) >= "8.1" of

   true ->
       [{message_queue_data, off_heap}|BaseSpawnOpts];

   false ->
       BaseSpawnOpts

end,
[...]

Runtime Library Version

To know which version of a library (here, wxWidgets) a given Erlang install is using (if any), one may run an Erlang shell (erl), collect the PID of this (UNIX) process (e.g. ps -edf | grep beam), then trigger a use of that library (e.g. for wx, execute wx:demo().) in order to force its dynamic binding.

Then determine its name, for example thanks to pmap ${BEAM_PID} | grep libwx.

This may indicate that for example libwx_gtk2u_core-3.0.so.0.5.0 is actually used.

Proper Naming

Variable Shorthands

Usually we apply the following conventions:

  • the head and tail of a list are designated as H and T, like in: L = [H|T]
  • Acc means accumulator, in a tail-recursive function
  • K designates a key, and V designates its associated value
  • a list of elements is designated by a plural variable name, usually suffixed with s, like in: Ints, Xs, Cars
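For example, these conventions would lead to a tail-recursive sum of a list of integers being written as:

```erlang
% Illustrating the H/T/Acc and plural-name conventions:
sum(Ints) ->
    sum(Ints, _Acc=0).

sum(_Ints=[], Acc) ->
    Acc;
sum([H|T], Acc) ->
    sum(T, H+Acc).
```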

Function Pairs

To better denote reciprocal operations, the following namings for functions may be used:

  • for services:
    • activation: start / stop
    • setup: init / terminate
  • for instances:
    • life-cycle: new / delete
    • construction/destruction: create / destruct (e.g. avoid destroy there)

Formatting Erlang Code

Various tools are able to format Erlang code, see this page for a comparison thereof.

With rebar3_format

For projects already relying on rebar, one may use rebar3_format (as a plugin [9]) that way:

$ cd ~/Software
$ git clone https://github.com/AdRoll/rebar3_format.git
$ cd rebar3_format
$ ERL_FLAGS="-enable-feature all" rebar3 format

[9]Note that rebar3_format cannot be used as an escript (so no ERL_FLAGS="-enable-feature all" rebar3 as release escriptize shall be issued).

Then {project_plugins, [rebar3_format]} shall be added to the project's rebar.config.

With erlfmt

Another tool is erlfmt, which can be installed that way:

$ cd ~/Software
$ git clone https://github.com/WhatsApp/erlfmt
$ cd erlfmt
$ rebar3 as release escriptize

And then ~/Software/erlfmt/_build/release/bin can be added to one's PATH.

Indeed erlfmt can be used as a rebar plugin or as a standalone escript - which we find useful, especially for projects whose build is not rebar-based.

Running it to reformat in-place source files is then as simple as:

$ erlfmt --write foo.hrl bar.erl

Language Features

Experimental compiler features (such as maybe in Erlang 25) are potentially available once Erlang has been built, yet may have to be specifically enabled at runtime, like in ERL_FLAGS="-enable-feature all" rebar3 as release escriptize.

Disabling LCO

LCO stands here for Last Call Optimisation. When, in a given module, a (typically exported) function f ends by calling a local function g (i.e. has for last expression a call like g(...)), this optimisation consists in not pushing the call to g on the stack, but instead directly replacing the stackframe of f (which can be skipped here, as returning from g will mean directly returning from f as well) with a proper one for g.

This trick spares the use of one level of stack at each ending local call, which is key for recursive functions [10] (typically for infinitely-looping server processes): they remain then in constant stack space, whereas otherwise the number of their stackframes would grow indefinitely, at each recursive call, and explode in memory.

[10]When the last call of f branches to f itself, it is named TCO, for Tail Call Optimisation (which is thus a special - albeit essential - case of LCO).

So LCO is surely not optional for a functional language like Erlang, yet it comes with a drawback: if f ends with a last call to g, which ends with a last call to h, and a runtime error happens in them, none of these functions will appear in the resulting stacktraces: supposing all these functions use a library foobar, it will be as if the VM directly jumped from the entry point in the user code (typically a function calling f) to the failing point in a function of foobar; there will be no line number pointing to the expression of f, g or h that triggered the faulty behaviour, whereas this is probably the information we are mostly interested in - as these functions may make numerous calls to foobar's functions. This makes debugging unnecessarily difficult.

Yet various workarounds exist (see this topic for more information) - just for debugging purposes - so that given "suspicious" functions (here f, g or h) are not LCO'ed:

  • to have their returned values wrapped in a remote call to an identity function (we use basic_utils:identity/1 for that)
  • or to have them wrapped (at least their end, i.e. their last expression(s)) with try ... catch E -> throw(E) end
  • or to return-trace these functions, as doing so temporarily disables LCO and allows one to be very selective (one can limit this to a specific process or certain conditions, with match specs; refer to dbg for more details; note that the module of interest must be compiled with debug_info so that it can be traced)

The first workaround is probably the simplest, when operating on "suspicious" modules of interest (knowing that LCO is useful, and should still apply to most essential server-like processes).
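As a sketch of that first workaround (relying on the basic_utils:identity/1 function mentioned above), with hypothetical f/1 and g/1 functions: since the call to g/1 is then no longer in tail position, the frame of the calling function remains on the stack while g/1 (and its callees) execute, so it shows up in stacktraces:

```erlang
% Normally f/1 would be LCO'ed (its frame being replaced by g's),
% and thus would not appear in stacktraces triggered from g/1:
f(X) ->
    g(X+1).

% Debugging variant: the result is wrapped in a remote call to an
% identity function; the call to g/1 is no longer the last one, so
% the frame of f_debug/1 stays on the stack while g/1 runs:
f_debug(X) ->
    basic_utils:identity(g(X+1)).
```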

One can nevertheless note that unfortunately LCO is not the only cause for the vanishing of calls in the stacktrace (even in the absence of any function inlining).

Within Ceylan-Myriad, if the (non-default) myriad_disable_lco compilation option is set (typically with the -Dmyriad_disable_lco command-line flag), this workaround is applied automatically on the modules on which the Myriad parse transform operates - i.e. all but bootstrapped modules (refer to the lco_disabling_clause_transform_fun/2 function of the myriad_parse_transform module).

Using wx

wx is now [11] the standard Erlang way of programming one's graphical user interface; it is a binding to the wxWidgets library.

[11]wx replaced gs. To shelter already-developed graphical applications from any future change of backend, we developed MyriadGUI, an interface doing its best to hide the underlying graphical backend at hand: should the best option in Erlang change, that API would have to be ported to the newer backend, hopefully with little to (ideally) no change in the user applications.

Here are some very general wx-related information that may be of help when programming GUIs with this backend:

  • in wxWidgets parlance, "window" designates any kind of widget (not only frame-like elements)
  • if receiving errors about {badarg,"This"}, like in:
{'_wxe_error_',710,{wxDC,setPen,2},{badarg,"This"}}

it is probably the sign that the user code attempted to perform an operation on an already-deallocated wx object; the corresponding life-cycle management might be error-prone, as some deallocations are implicit, others are explicit, and in a concurrent context race conditions easily happen

  • if creating another process from a wx-using one, the created process should first set a relevant environment (see wx:set_env/1) before using wx functions
  • the way wx/wxWidgets manage event propagation is complex; here are some elements:
    • for each actual event happening, wx creates an instance of wxEvent (a direct, abstract child class of wxObject), which itself is specialised into many event classes that are more concrete
    • among them there are so-called command events, which originate from a variety of simple controls and correspond to the wxCommandEvent mother class; by default only these command events are set to propagate, i.e. only they will be transmitted to the event handler of the parent widget if the current one does not process them by itself ("did not connect to them")
    • by default, for a given type of event, when defining one's event handler (typically when connecting a process to the emitter of such events), this event will not be propagated anymore, possibly preventing the built-in wx event handlers from operating (e.g. to properly manage resizings); to restore their activation, skip (to be understood here as "propagate event" - however counter-intuitive it may seem) shall be set to true (either by passing a corresponding option when connecting, or by calling wxEvent:skip/2 with skip set to true from one's event handler) [12]
    • when a process connects to the emitter of a given type of events (e.g. close_window for a given frame), this process is to receive corresponding messages and then perform any kind of operation; however these operations cannot be synchronous (they are non-blocking: the process does not send to anyone any notification that it finished handling that event), and thus, if skip is true (that is, if event propagation is enabled), any other associated event handler(s) will be triggered concurrently to the processing of these event messages; this may be a problem for example if a controller listens to the close_window event emitted by a main frame in order to perform a proper termination: the basic, built-in event handlers will then by default be triggered whereas the controller teardown may be still in progress, and this may result in errors (e.g. OpenGL ones, like {{{badarg,"This"},{wxGLCanvas,swapBuffers,1}},... because the built-in close handlers already deallocated the OpenGL context); a proper design is to ensure that skip remains set to false so that propagation of such events is disabled in these cases: then only the user code is in charge of closing the application, at its own pace [13]
[12]MyriadGUI took a different convention: whether an event will propagate by default depends on its type, knowing that most of the types are to propagate. Yet the user can override these default behaviours, by specifying either the trap_event or the propagate_event subscription option, or by calling either the trap_event/1 or the propagate_event/1 function.
[13]This is why MyriadGUI applies per-event type defaults, thus possibly trapping events; in this case, if the built-in backend mechanisms would still be of use, they can be triggered by calling the propagate_event/1 function from the user-defined handler, only once all its prerequisite operations have been performed (this is thus a way of restoring sequential operations).
  • in terms of sizing, the dimensions of a parent widget prevail: its child widgets have to adapt (typically thanks to sizers); if instead the size of a child is to dictate that of its parent, the size of the client area of the parent should be set to the best size of its child, or its fit/1 method shall be called
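As an illustration of the environment-transmission rule above, here is a minimal sketch of a wx-using process handing its wx environment over to a helper process (the module name, message name and overall scenario are ours, for illustration only; this requires a graphical display to run):

```erlang
% Minimal sketch: transmitting the wx environment to a helper process,
% so that this helper can perform wx calls as well.
-module(wx_env_example).
-export([run/0]).

run() ->
    % Initialises wx in the current process:
    _WxServer = wx:new(),

    % Captures the environment of this (wx-initialised) process:
    Env = wx:get_env(),
    Self = self(),

    _HelperPid = spawn_link(fun() ->
        % Mandatory first step in any other process using wx:
        wx:set_env(Env),
        Frame = wxFrame:new(wx:null(), _Id=-1, "From helper process"),
        wxFrame:show(Frame),
        Self ! helper_ready
    end),

    receive helper_ready -> ok end.
```

Without the wx:set_env/1 call, the wx operations in the spawned function would fail, as the helper process has no wx environment of its own.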

Extra information resources about wx (besides the documentation of its modules):

Installing rebar3

One may use our install-rebar3.sh script for that purpose (rebar3 being then either built from sources or installed prebuilt).

Micro-Cheat Sheet

To avoid having to perform a lookup in the documentation:

(no need for the erlang module to be explicitly specified for the first two functions, as both are auto-imported)

andalso can be used to evaluate a target expression iff a boolean expression is true, to replace a longer expression like:

case BOOLEAN_EXPR of

    true ->
        DO_SOMETHING;

    false ->
        ok

end

with the equivalent (provided BOOLEAN_EXPR evaluates to either true or false - otherwise a bad argument exception will be thrown) yet shorter: BOOLEAN_EXPR andalso DO_SOMETHING; for example: [...], OSName =:= linux andalso fix_for_linux(), [...]

The precedence of andalso and orelse is lower than the one of the comparison operators, so checks like the previous one need no extra parentheses [14].

[14]Note that this is not the case for the and and or operators, whose precedence is higher than, notably, the one of the comparison operators. For example a clause defined as f(I) when is_integer(I) and I >= 0 -> ... will never be triggered, as it is interpreted as f(I) when (is_integer(I) and I) >= 0 -> ..., and this and guard will always fail, since I is an integer here, not a boolean. Such a clause should thus be defined as the (correct) f(I) when is_integer(I) andalso I >= 0 -> ... instead.

Similarly, orelse can be used to evaluate a target expression iff a boolean expression is false, to replace a longer expression like:

case BOOLEAN_EXPR of

    true ->
        ok;

    false ->
        DO_SOMETHING

end

with: BOOLEAN_EXPR orelse DO_SOMETHING; for example: [...], file_utils:exists("/etc/passwd") orelse throw(no_password_file), [...]
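These shortcuts rely on the short-circuit semantics of andalso/orelse: the right-hand side is evaluated only when needed, and its value, boolean or not, is returned as a whole. A quick illustration (a sketch of plain Erlang expressions, directly evaluable in an erl shell):

```erlang
% Short-circuit demonstration for andalso/orelse; each pattern match succeeds:
false = (false andalso throw(never_reached)),  % right-hand side not evaluated
true  = (true  orelse  throw(never_reached)),  % right-hand side not evaluated

% The second operand needs not be a boolean; its value is returned as is:
5  = (false orelse 5),
ok = (true andalso ok).
```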

In both andalso / orelse cases, the DO_SOMETHING branch may be a single expression, or a sequence thereof (i.e. a body), in which case a begin/end block may be of use, like in:

file_utils:exists("/etc/passwd") orelse
  begin
      trace_utils:notice("No /etc/password found."),
      throw(no_password_file)
  end

Similarly, taking into account the aforementioned precedences, Count =:= ExpectedCount orelse throw({invalid_count, Count}) will perform the expected check.

Be wary of operator precedence, lest bugs like the following one are introduced:

MaybeListenerPid =:= undefined orelse
   MaybeListenerPid ! {onDeserialisation, [self(), FinalUserData]}

(as orelse has a higher priority than !, parentheses shall be added around the send expression; otherwise this parses as (MaybeListenerPid =:= undefined orelse MaybeListenerPid) ! {...}, and, whenever MaybeListenerPid is undefined, the message will actually be sent to any process that would be registered under the name true)
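A corrected version of the snippet above simply parenthesises the send expression, so that orelse sees it as its second operand; here is a self-contained, hypothetical illustration (the listener is set to self() purely so that the send can be observed):

```erlang
% Hypothetical, self-contained illustration (variable names as in the text):
FinalUserData = some_data,
MaybeListenerPid = self(),

% Correct form: the parentheses force the send expression to be the second
% operand of orelse, so the message is sent iff a listener PID is set:
MaybeListenerPid =:= undefined orelse
    ( MaybeListenerPid ! {onDeserialisation, [self(), FinalUserData]} ),

% As the listener is self() here, the message can be received locally:
receive {onDeserialisation, [_SenderPid, some_data]} -> ok end.
```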

Some of these elements have been adapted from the Wings3D coding guidelines.

Erlang Resources