Value Struct - wrapping values for strong typing

One of my favorite C++ tricks is a simple compile-time device: value structs. The trick consists of a struct that wraps a single field whose type is a primitive one (int, float, ...).

Everything seems the same...

Let's start with a very simple example: converting between radians and degrees.

#include <iostream>

float to_rad(float deg) {
    return deg * 3.14 / 180;
}

float to_deg(float rad) {
    return rad * 180 / 3.14;
}

You may wonder: what's wrong with this code? Nothing, the code is fine. The problem is... nothing stops you from passing a value representing "degrees" to a function that expects "radians".

int main() {

    float half_pi = 1.57;

    // `half_pi` is 90 degrees in radians
    float ninety_deg = to_deg(half_pi);
    std::cout << ninety_deg << std::endl;

    // 90... degrees or radians? the compiler doesn't care!
    float what = to_deg(ninety_deg);
    std::cout << what << std::endl;

    return 0;
}

Unfortunately, this issue is more widespread than you think, and I've seen it happen several times.

A variant of this problem shows up when a function fun(int caller_id, int message_id) has two parameters of the same type, and someone swaps them (fun(int message_id, int caller_id)). If the calling code is not updated to reflect this change, the program may stop working correctly - and the compiler will not warn about it!
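
To see how silent the problem is, here is a minimal sketch; fun, caller_id and message_id are illustrative names, not a real API.

#include <iostream>

void fun(int caller_id, int message_id) {
    std::cout << "caller " << caller_id
              << ", message " << message_id << std::endl;
}

int main() {
    int caller = 7;
    int message = 42;

    // The arguments are swapped, but the compiler cannot notice:
    // both parameters are plain ints.
    fun(message, caller);

    return 0;
}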

Let's go back to the original problem: what if to_deg and to_rad accept and return a special type, instead of a raw float?

Wrapping values

Let's create two ad-hoc structures for our Degrees and Radians. Both structures wrap a single float, which can be accessed directly.

struct Degrees {
    float value;
    Degrees(float v): value(v){}
};

struct Radians {
    float value;
    Radians(float v): value(v){}
};

These two structs look exactly the same, but they are different types, and the compiler will treat them as such. No mixing allowed, this time!

Radians to_rad(Degrees d){
    return d.value * 3.14 / 180;
}

Degrees to_deg(Radians r) {
    return r.value * 180 / 3.14;
}

int main() {
    Radians half_pi = Radians(1.57);

    Degrees ninety_degrees = to_deg(half_pi);
    std::cout << ninety_degrees.value << std::endl;

    // Degrees what = to_deg(ninety_degrees);       // COMPILE ERROR!
    // std::cout << what.value << std::endl;

    return 0;
}

Under the hood, nothing has changed

You may now think: "wait, aren't we going to create a lot of temporary objects? It's going to be slow!". Well, gcc and clang treat a struct that wraps a single float just like the float itself, and generate the same code - as if those structures were never defined.
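
Before even reading assembly, a quick sanity check (mine, not from the original article) is that the wrapper has the same size as a plain float and is trivially copyable; the assertions below pass with gcc and clang. They don't prove the generated code is identical, though - that's what the assembly comparison below is for.

#include <type_traits>

// Same definition as above, repeated so this snippet is self-contained.
struct Degrees {
    float value;
    Degrees(float v): value(v){}
};

// Both assertions hold: the wrapper occupies exactly one float
// and is copied around just as cheaply.
static_assert(sizeof(Degrees) == sizeof(float), "no size overhead");
static_assert(std::is_trivially_copyable<Degrees>::value, "copied like a float");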

Let's compare the assembly generated by gcc 9.1.0 (compile flags: -O2). In this listing, we examine the difference between the first, float-based version (left side) and the struct-based version (right side).

Note: the blocks starting at labels .LFE1544 and .LFE1551 declare to_deg(float) and to_deg(Radians); the blocks starting at .LFB1545 and .LFB1552 contain the code that is executed - the "real code".

.LFE1544:                                       |       .LFE1551:
        .size   _Z6to_radf, .-_Z6to_radf        |               .size   _Z6to_rad7Degrees, .-_Z6to_rad7Degrees
        .p2align 4                                              .p2align 4
        .globl  _Z6to_degf                      |               .globl  _Z6to_deg7Radians
        .type   _Z6to_degf, @function           |               .type   _Z6to_deg7Radians, @function
_Z6to_degf:                                     |       _Z6to_deg7Radians:
.LFB1545:                                       |       .LFB1552:
        .cfi_startproc                                          .cfi_startproc
        mulss   .LC2(%rip), %xmm0                               mulss   .LC2(%rip), %xmm0
        cvtss2sd        %xmm0, %xmm0                            cvtss2sd        %xmm0, %xmm0
        divsd   .LC0(%rip), %xmm0                               divsd   .LC0(%rip), %xmm0
        cvtsd2ss        %xmm0, %xmm0                            cvtsd2ss        %xmm0, %xmm0
        ret                                                     ret
        .cfi_endproc                                            .cfi_endproc

As we can see, the difference is just in the description of the function: the real code is the same. Both versions perform the same multiplications and divisions, and use the same registers.

Conclusion

In this blog post, we looked at value structs, structures that wrap a single field, and how we can use them to let the compiler type-check our code and prevent mixing raw values. We also looked at the assembly code, and noticed that the wrappers we introduced do not affect the performance of our code: they are compiled as if we had just used the raw values.

If you feel that this article helped you, feel free to share it! If you have questions, ask on Twitter, or offer me a coffee to let me keep writing these notes!

References:

"What nobody tells you about documentation"

What nobody tells you about documentation - Divio.com

This link from Hacker News' front page caught my attention. Despite the clickbaity title, it is a thorough blog post about documentation, its role, and how to write documentation users will actually enjoy reading.

To summarize, Daniele (the author of the blog post) defines four kinds of documentation: tutorials, how-to guides, explanations and reference documentation. They also explain the scope and the role of each kind, and the dos and don'ts for writing an effective tutorial, how-to guide, et cetera.

I am not going to summarize that great article: just go read it. Recommended!

"Time, Clocks and the Ordering of Events in a Distributed System"

In a distributed system, understanding whether an event happened before another event is a difficult task, but it's necessary to better understand how the system is behaving: what event caused another event?

One solution was proposed by Lamport in his paper Time, Clocks and the Ordering of Events in a Distributed System: a clock that is updated only via messages sent inside the system, without using external sources (e.g. physical time).

What is a distributed system?

A distributed system consists of a collection of distinct processes which are spatially separated, and which communicate with one another by exchanging messages. The paper, while acknowledging that its remarks are more general, considers a system distributed if the message transmission delay is not negligible compared to the time between events in a single process.

Events can be anything that can be considered important in the system, for example running a certain subroutine, or sending/receiving messages from other processes.

Why do we need to know the order of events?

Knowing the causal precedence relation among the events of different processes helps solve a variety of problems in distributed systems, such as designing distributed algorithms, tracking dependent events, reasoning about the progress of a computation, and measuring concurrency. For example, CockroachDB uses a variant of Lamport's timestamps (the logical clocks explained in this article) to order database transactions. In the paper itself, Lamport shows how his timestamps let him solve a distributed variant of the mutual exclusion problem - we will not discuss it in this blog post, though.

About time

From the first paragraph of the article:

The concept of the temporal ordering of events pervades our thinking about systems. [...] However, we will see that this concept must be carefully reexamined when considering events in a distributed system.

The first observation is: in distributed systems, we cannot use our intuition about time to decide if something happened before something else, or if some action can be accepted by the system. There is no "now" in distributed systems, especially in the geographically distributed ones.

In a distributed system, it is sometimes impossible to say that one of two events occurred first. We cannot say that something happened at a specific moment, because your "specific moment" and my "specific moment" may not be the same. We can only say that "happened before" is a partial ordering of the events in the system.

How can we fix, or at least work around, this uncertainty? Let's use something else in place of physical clocks: logical clocks. The rest of the article explains what they are, and the rules behind them.

Defining "Happened before"

It's now time to define the "happened before" relation.

The relation "happened before" on the set of events of a system is defined as the smallest relation satisfying the following conditions:

  1. If a and b are events in the same process, and a comes before b, then a → b
  2. If a is the sending of a message by one process and b is the receipt of the same message by another process, then a → b
  3. If a → b and b → c, then a → c

Two events a and b are said to be concurrent if both a → b and b → a are false.

Assuming that events cannot happen before themselves, the relation is an irreflexive[1] partial ordering[2] on the set of all events in the system.

Giving a number to an event

We now have a relation between events that defines whether one of two events happened before the other. To use it, we need to assign a number to each event - let's call this function Clock. Each process Pi has its own clock Ci. The entire system of clocks is represented by the function C, which assigns to any event b the number C(b), where C(b) = Cj(b) if b is an event in process Pj.

The C function (clock of the system) has to respect the Clock Condition.

Clock Condition:

for any events a and b, if a → b then C(a) < C(b)

We cannot expect the converse condition to hold as well, since that would imply that any two concurrent events must occur at the same time.

From our definition of happened before, the Clock Condition is satisfied if the following two conditions hold:

  • C1: If a and b are events in process Pi, and a comes before b, then Ci(a) < Ci(b)
  • C2: If a is the sending of a message by process Pi and b is the receipt of that message by process Pj, then Ci(a) < Cj(b)

The first condition, C1, helps us order two events in the same process: if one happened before the other, the earlier event must be assigned a lower number. The second condition, C2, tells us how we should handle the communication between processes. Sending or receiving a message is an event too: the sending of a message must have a lower number than the receipt of the same message - can you receive a message before it is sent?

To guarantee that the system of clocks satisfies the Clock Condition, we need two implementation rules:

  • IR1: Each process Pi increments Ci between any two successive events (the increment itself is not an event!)
  • IR2 (a): If event a is the sending of a message m by process Pi, then the message m contains a timestamp Tm = Ci(a)
  • IR2 (b): Upon receiving a message m, process Pj sets Cj greater than or equal to its present value and greater than Tm

IR1 ensures that the clock is advanced by every process (so C1 is satisfied). IR2 ensures that C2 is satisfied, by describing the timestamp that should be attached to each message and how the receiving process should handle the timestamp of the messages it receives.
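
To make the two rules concrete, here is a minimal sketch of IR1 and IR2 for a single process, written in C++; the class and method names are mine, and message transport is left out entirely.

#include <algorithm>
#include <cstdint>

// One process's logical clock; names are illustrative, not from the paper.
class LamportClock {
    std::uint64_t counter = 0;

public:
    // IR1: the clock ticks between any two successive local events.
    std::uint64_t local_event() { return ++counter; }

    // IR2 (a): the timestamp Tm to attach to an outgoing message.
    std::uint64_t send_event() { return ++counter; }

    // IR2 (b): on receipt, advance the clock past both its current
    // value and the timestamp carried by the message.
    std::uint64_t receive_event(std::uint64_t message_timestamp) {
        counter = std::max(counter, message_timestamp) + 1;
        return counter;
    }
};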

From a partial to a total ordering

The Clock function described in the previous paragraph only gives us a partial order on the events. Why is that a problem? Because two events in different processes may be assigned the same number - which one happened first?

To break ties, we can use any arbitrary total ordering ≺ of the processes. This lets us define a new "happened before" relation, denoted by the symbol ⇒.

happened before (⇒):

If a is an event in process Pi and b is an event in process Pj, then a ⇒ b if and only if either

  1. Ci(a) < Cj(b), or
  2. Ci(a) = Cj(b) and Pi ≺ Pj
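
In code, this tie-breaking rule boils down to a lexicographic comparison. A hedged sketch follows, where Event and its two fields are names I made up: each event carries its Lamport timestamp and the rank of its process in the arbitrary (but fixed) total order.

#include <cstdint>

// Illustrative event record: the Lamport timestamp Ci(a), plus the rank
// of Pi in the chosen total ordering of processes.
struct Event {
    std::uint64_t timestamp;
    std::uint64_t process_rank;
};

// a => b: compare timestamps first, then break ties with the process rank.
bool totally_ordered_before(const Event& a, const Event& b) {
    if (a.timestamp != b.timestamp)
        return a.timestamp < b.timestamp;
    return a.process_rank < b.process_rank;
}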

Drawbacks of Lamport's timestamps

So... we just solved all of our problems, right? Unfortunately, no. This kind of logical clock has some problems. Let's look at the first one: a → b ⟹ C(a) < C(b), but the converse is not true! We cannot infer from the clock values whether one event actually happened before another. We will discuss a solution to this, vector clocks, in a different post.

Anomalous Behavior, aka "out-of-band messages mess up the ordering!"

There is a different problem, too. The new happened before relation does not protect us from anomalous behavior: the ordering obtained by this algorithm may differ from the one perceived by the users.

Lamport asks to imagine two friends, A and B, using the same distributed computer system. Let's suppose they perform the following steps:

  1. Friend A issues a request a on their computer
  2. Friend A telephones Friend B, telling them to issue a new request
  3. Friend B issues a request b on their computer (different from Friend A's)

Our system may order request b before request a, and it's not even wrong! The problem, in this thought experiment, is that the phone call is not an event recorded in our system: we did not assign it a number, so we cannot use it to order the two requests.

Formally, we have two sets of events. L is the set of all system events; L̄ is the set that contains the events in L together with all other relevant external events. Unfortunately, the phone call is not in L, but only in L̄. Let ⮕ denote the "happened before" relation on L̄. We can see that a → b is false, but a ⮕ b is true!

It is impossible to avoid anomalous behavior when the event ordering system we set up only uses the events in L and ignores the ones that exist only in L̄. What can we do, then? We have two possibilities: either we explicitly introduce into the system the necessary information about the ordering ⮕, or we construct a system of clocks that satisfies a stronger condition.

In the first case, we give the users the responsibility for avoiding anomalous behavior - here, at step 2 Friend B should have asked for the timestamp Ta of request a, and specified that request b must be given a timestamp later than Ta.
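
As an illustration of this first possibility, here is a hypothetical sketch; issue_request and its not_before parameter are invented for this example and are not part of the paper.

#include <algorithm>
#include <cstdint>

// Hypothetical API: the system stamps each request with its local clock,
// never lower than the bound supplied by the caller.
std::uint64_t issue_request(std::uint64_t& local_clock,
                            std::uint64_t not_before = 0) {
    local_clock = std::max(local_clock, not_before) + 1;
    return local_clock; // the timestamp assigned to this request
}

int main() {
    std::uint64_t clock_a = 0, clock_b = 0;

    std::uint64_t t_a = issue_request(clock_a);      // Friend A issues a
    // ... Friend A reads Ta and tells it to Friend B over the phone ...
    std::uint64_t t_b = issue_request(clock_b, t_a); // b is forced after a

    return t_b > t_a ? 0 : 1; // t_b is guaranteed to be greater than t_a
}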

We will explore the second possibility in a later post, looking at how we can define a stronger clock condition and how we can use physical clocks to build a clock function that satisfies it.


Did you like this article? Did you find an error? Don't hesitate and let me know! Contact me on Twitter, or send me an email. Subscribe to the RSS feed to read the next article! If you want to read more articles like this one, consider offering me a Ko-Fi.


  1. A relation is "irreflexive" if it does not relate any element to itself - an example is "greater than" on the real numbers: a real number cannot be greater than itself.

  2. A "partial ordering" is a relation that defines an order between some, but not necessarily all, of the elements of the set it is defined over.

"Running your FreedomBox over Tor" - DebConf19 talk

I've stumbled upon this interesting talk by Nathan Willis about FreedomBox and the Tor network. If you've never heard of them, FreedomBox is a community-developed private server system to host web services on your own computer. Tor is the renowned onion routing implementation that aims to improve anonymity when browsing the web.

The speaker describes his personal experience installing and running a FreedomBox installation that is only accessible over Tor. I tried to summarize some points I found personally interesting.

Hidden .onion service configuration

FreedomBox, via its Web UI named Plinth, lets users configure and start hidden .onion services. You can find this option in the "Anonymity Network" module. Once enabled, the .onion service covers any web service that runs from a subdirectory under Plinth.

It may not always work, though: if an application doesn't "speak" HTTP, uses a different port, or assumes it is accessible at its own (sub)domain (foo.example.com is fine, example.com/foo is not), Nathan suggests creating a dedicated hidden service for each such application: check out 11:54 for the right commands.
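
For reference, a hand-made hidden service for a single application usually boils down to a couple of directives in torrc; the directory and ports below are made up for this example (the exact commands Nathan runs are in the talk).

# Expose a local service listening on port 8080 as port 80
# of its own .onion address.
HiddenServiceDir /var/lib/tor/my-app/
HiddenServicePort 80 127.0.0.1:8080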

Routing non-web applications over Tor

Tor offers torify, a wrapper around torsocks, which lets you proxy the TCP traffic of a given application through the SOCKS5 protocol - no UDP, though. It is helpful for applications like IRC bouncers, provided that they support SOCKS5. At 24:18, Nathan describes the issues he had trying to "torify" Radicale, a CalDAV application, and some IRC bouncers.
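
As a reminder of how that looks in practice, a one-liner (the wrapped command is just an example):

# Route the TCP traffic of a single command through Tor's SOCKS proxy.
torify curl https://check.torproject.org/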

Mobile access

Nathan also describes some issues with using Android applications to access his self-hosted services over Tor. Tor Browser on Android works in the same way as its desktop counterpart, so at least he can access the web applications running on his hardware. To proxy the traffic of native Android applications, you can use Orbot - it doesn't always work, though. Nathan also gives some examples of the "mobile madness" he ran into when configuring mobile applications for TT-RSS and Radicale.


So, I hope these notes encouraged you to check out the talk! Let me know what you think over Twitter or email.

Some thoughts about "The Great Divide"

I've read "The Great Divide", and it resonated with me. I also feel an ever-growing divide between Javascript engineers (people with a skillset revolving around Javascript) and UX engineers (people who are more interested in HTML, CSS, styling, accessibility and design).

CSS Tricks is a blog whose main audience is designers and UX engineers, so the article talks a lot about the second group, and references other blog posts from that "faction". However, I agree: I am firmly in the first group, and I feel I should apply to Javascript Engineer positions, not to generic frontend developer positions.

Let's go back to the article. The main points of the article are:

  • There is a divide between these two skillsets, and both are getting bigger and more complex
  • People cannot feasibly learn both, so the term frontend developer should be replaced with more specific role terms
  • In the same way fullstack developers are not really fullstack, companies looking for frontend developers are not finding what they're looking for

The divide between the two skillsets is growing, and growing fast. As websites get more complex and become web applications, using the browser as their platform, engineers have created their own tools to keep this complexity at a manageable level. However, users are also mobile, and access the same applications from different devices: UX engineers must now create responsive designs, and use the new CSS techniques and tools that browser developers create to help them.

People can't feasibly learn everything, but the industry expects them to: project managers and recruiters post job offers for a generic frontend developer role, where frontend is just the client-facing part of the application. That may have been an acceptable definition some time ago, but times have changed, and full-blown applications now live in the client-facing part. More specific roles should be requested instead: Javascript engineers and UX engineers. People are already specializing, but the industry still looks for people who can do both, so they can be used interchangeably. Unfortunately, such people are quite rare.

The industry is making the same mistake with so-called full-stack developers. Brad Frost says that he translates that term to programmers who can do frontend code because they have to and because it's easy. The reality is that it's getting harder and harder, and some people are simply better at either Javascript or HTML/CSS. There is a trend, among clients and project managers, to consider frontend development easy because it's more artsy and cute. This attitude may make them think they can just allocate whatever junior dev or intern they can find, because it's "easy" and "juniors should do the easy stuff". It is a wrong attitude - if not a dangerous one! - for the wrong reasons.

The article also talks about job descriptions, and how they should be more precise (no, not "vulnerable", please: let's not use this kind of emotional word where we don't need it). I want to add my own experience. Look at this list of requirements, coming from a real job offer on Linkedin from two years ago:

  • Passion and Experience in building large scale web applications
  • Expert knowledge of Javascript
  • Ideally knowledge of ReactJS
  • Knowledge of Angular / VueJS also useful
  • Experience in automation with Gulp

Now, I wonder why the damn take-home was about "creating a small page starting from a reference screenshot". That's the other side of the problem: I can't write CSS for the life of me, and obviously I tanked the interview. It's my fault, we agree, but why am I expected to write CSS when the job description does not even mention it?

What do you think? Let me know on Twitter, send your opinion to me via email, or consider supporting me on Ko-fi.