"Running your FreedomBox over Tor" - DebConf19 talk

I've stumbled upon this interesting talk by Nathan Willis about FreedomBox and the Tor network. If you've never heard of them, FreedomBox is a community-developed private server system to host web services on your own computer. Tor is the renowned onion routing implementation that aims to improve anonymity when browsing the web.

The speaker describes his personal experience installing and running a FreedomBox instance that is only accessible over Tor. I have tried to summarize the points I found most interesting.

Hidden .onion service configuration

FreedomBox, via its web UI named Plinth, lets users configure and start hidden .onion services. You can find this option in the "Anonymity Network" module. Once enabled, the .onion service will cover any web service that runs from a subdirectory under Plinth.

It may not always work, though: if the application doesn't "speak" HTTP, uses a different port, or expects to be reachable at its own (sub)domain - foo.example.com is fine, example.com/foo is not - Nathan suggests creating a separate hidden service for each application: check out 11:54 for the right commands.
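
Just to give an idea of what such a configuration looks like (this is my own sketch, not necessarily the commands shown in the talk; the path and port below are made-up examples), a per-application hidden service boils down to two torrc directives:

# /etc/tor/torrc - hypothetical per-application hidden service
# keys and the generated hostname for this service are stored here
HiddenServiceDir /var/lib/tor/radicale_onion/
# expose local port 5232 (e.g. Radicale) as port 80 of the .onion address
HiddenServicePort 80 127.0.0.1:5232

After restarting Tor, the generated address can be read from the hostname file inside that directory.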

Routing non-web applications over Tor

Tor offers torify, a wrapper around torsocks, which lets you proxy the TCP traffic of a given application over the SOCKS5 protocol - no UDP, though. It is helpful for applications like IRC bouncers, provided that they support the SOCKS5 protocol. At 24:18, Nathan describes the issues he had trying to "torify" Radicale, a CalDAV application, and some IRC bouncers.
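
For instance (my own example, not one from the talk), wrapping a command-line client is just a matter of prefixing it with the wrapper:

# route the TCP traffic of an IRC client through Tor's SOCKS proxy
torsocks irssi
# the torify wrapper works the same way
torify curl http://example.onion/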

Mobile access

Nathan also describes some issues with using Android applications to access his self-hosted applications over Tor. Tor Browser on Android works in the same way as its desktop counterpart, so at least he can access the web applications running on his hardware. To proxy the traffic of native Android applications, you can use Orbot - it doesn't always work, though. Nathan also walks through some examples of "mobile madness" he ran into while configuring mobile applications for TT-RSS and Radicale.


So, I hope these notes encouraged you to check out the talk! Let me know what you think via Twitter or email.

Some thoughts about "The Great Divide"

I've read "The Great Divide", and it resonated with me. I also feel an evergrowing divide between Javascript engineers (people with a skillset revolving around Javascript) and UX engineers (people who are more interested in HTML, CSS, styling, accessibility and design).

CSS Tricks is a blog whose main audience is designers and UX engineers, so the article talks a lot about the second group, and references other blog posts from that "faction". However, I agree: I am firmly in the first group, and I feel I should apply to Javascript engineer positions, not to frontend developer positions.

Let's go back to the article. Its main points are:

  • There is a divide between these two skill sets, and both are getting bigger and more complex
  • People cannot feasibly learn both, so the term frontend developer should be replaced with more specific role names
  • Just as full-stack developers are not really full-stack, companies looking for frontend developers are not finding what they're looking for

The divide between the two skill sets is growing, and growing fast. As websites get more complex and turn into web applications that use the browser as their platform, engineers have created their own tools to keep this complexity at a manageable level. However, users are also mobile, and access the same applications from different devices: UX engineers must now create responsive designs, and use the new CSS techniques and tools that browser developers create to help them.

People can't feasibly learn everything, but the industry expects them to: project managers and recruiters post job offers for a generic frontend developer role, where frontend is just the client-facing part of the application. It may have been an acceptable definition some time ago, but times have changed and full-blown applications now live in the client-facing part. More specific roles should be requested instead: Javascript engineers and UX engineers. People are already specializing, but the industry still looks for people who can do both, so they can be used interchangeably. Unfortunately, such people are quite rare.

The industry is making the same mistake with so-called full-stack developers. Brad Frost says he translates that term to programmers who can do frontend code because they have to and it's easy. The reality is that it's getting harder and harder, and some people are better at either Javascript or HTML/CSS. There is a trend, among clients and project managers, to consider frontend development easy because it's more artsy and cute. This attitude may make them think they can just allocate whatever junior dev or intern they can find, because it's "easy" and "juniors should do the easy stuff". It is a wrong attitude - if not a downright dangerous one - held for the wrong reasons.

The article also talks about job descriptions, and how they should be more precise (no, not "vulnerable", please: let's not use that kind of emotional word where we don't need it). I want to add my own experience. Look at this list of requirements, taken from a real job offer posted on Linkedin two years ago:

  • Passion and Experience in building large scale web applications
  • Expert knowledge of Javascript
  • Ideally knowledge of ReactJS
  • Knowledge of Angular / VueJS also useful
  • Experience in automation with Gulp

Now, I have to wonder why the hell the take-home was about "creating a small page starting from a reference screenshot". That's the other side of the problem. I can't write CSS for the life of me, so obviously I tanked the interview. It's my fault, we agree, but why am I expected to write CSS when the job description does not even mention it?

What do you think? Let me know on Twitter, send me your opinion via email, or consider supporting me on Ko-fi.

Exploring a Linux-based TV box

/images/woxter-bt-download.jpg

Hello! So... a family member brought this media center home and asked me to repurpose it as a NAS. Here I'm going to share some notes about the device and how I turned it into a NAS.

What media center are we talking about?

This media center is a Woxter i-Cube 2400 [Spanish]. Yes, it is that old. It works just fine, though: it can play images, MP3s and videos. Please re-encode your videos if they use a recent codec (such as H.264, used by Youtube videos). I didn't try Matroska videos (mkv) - if they don't work, use ffmpeg or autohardsubber to create a single .mp4 file. I was amazed to see that Japanese characters were not replaced with squares, dots or some strange A - it renders Unicode characters just fine.
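
For reference, a re-encoding command could look like the following - a rough sketch of mine; I haven't checked exactly which codecs the box supports, so you may need to pick different video and audio codecs:

# re-encode a Matroska video into an .mp4 the box is more likely to play
# (MPEG-4 Part 2 video, AAC audio)
ffmpeg -i input.mkv -c:v mpeg4 -q:v 4 -c:a aac output.mp4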

Like any respectable media center, it has a replaceable internal 3.5" hard drive and can access USB thumb drives. It can also access remote folders via SMB, if the Windows computers on the same network expose shared folders. Please note that the internal hard drive partitions must use a FAT32 filesystem, or they won't be recognized.

Remote controller configuration

This media center came as-is, without its remote control. Fortunately, we found a universal remote control manufactured by SilverCrest: the codes are 6138 and AUX2. I don't really know how universal remote controls work, so you may have to look up your own codes online.

Shell inside

To access a remote shell, just telnet into the media center (default port) as the root user - no password needed. Once logged in, we discover that the media center runs an old version of busybox, with its "ash" shell.

> uname -a
Linux NAS 2.6.12.6-VENUS #4 Wed Oct 21 15:10:26 CST 2009 mips unknown

It does not have a lot of utilities: for example, user configuration applets (useradd, usermod, etc) are not present here. By default, it runs an HTTP server, a bittorrent downloader and the media center GUI.

I'd have liked to modify some configuration files, such as inetd.conf, but the root (/) partition is squashfs, so remounting it as read-write is not possible. Fortunately for us, we don't really need it - I'll explain why in a later section.

An HTTP server in my media center?!?

To be honest, the first thing I tried was to check whether the thing had an administration panel to manage it remotely, even without a remote control. I wasn't that far from the truth: it launches the busybox httpd server at boot, and you can create CGI webservices. I'm not sure you really want to handle POST requests in a shell script, though. I suppose I could try to cross-compile some C or Rust code for it, but I'm too busy to try that right now. If you want to try, all CGI webservices are stored in /var/www/cgi-bin.
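
Just to give an idea (a minimal sketch of mine, not one of the scripts shipped with the box), a CGI webservice can be a plain shell script dropped into that directory - remember to make it executable:

#!/bin/sh
# /var/www/cgi-bin/hello.cgi - hypothetical example
# a CGI script prints the response headers, an empty line, then the body
echo "Content-Type: text/plain"
echo ""
echo "Hello from the media center, current uptime:"
uptime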

There is one very important CGI service that the media center offers: a webpage to load torrent files, start and stop the download and delete everything from the server. It uses btpd, a small (and outdated) bittorrent downloader daemon. It works, but I'm not really sure how secure it may be - not much, I suppose. You can use that webpage by accessing http://192.168.1.176/cgi-bin/webtorrent.cgi

Samba configuration

Let's get to the best part of the post: the Samba configuration. The box ships a Samba daemon; I'm not sure whether the configuration file was already present or not, but I can tell you it won't start at boot.

There are two servers to launch at boot time: smbd and nmbd. The first one implements the SMB protocol itself - file listing, creating directories, and so on. The second one is the NetBIOS name server - if it is not running, the media center will not show up in the "Network" view of Windows Explorer.

To launch smbd and nmbd, I had to modify a service file: /usr/local/etc/rcS.

# add these lines to /usr/local/etc/rcS

# launch the samba daemon
smbd --configfile=/usr/local/etc/smb.conf
# start the netbios daemon in a different script
ash /usr/local/etc/launch_netbios.sh &

Beware: you need to launch nmbd after the IP address has been set, or it won't work. To ensure that, I launch nmbd after a 30-second delay. That value comes from trial and error, so you may need to sleep longer. nmbd is not strictly necessary, though: you can access the NAS directly via its IP address (look it up in your router's administration panel). smbd can be launched at any time.

# launch_netbios.sh

sleep 30
nmbd --configfile=/usr/local/etc/smb.conf

As said before, you cannot create new users, which means that the only usable user is root. This also affects the configuration of our Samba server: all folders must be accessed as guest. An important utility, smbpasswd, is also missing, so we cannot even set passwords for Samba users.

The commented configuration follows:

[global]
  hosts allow = 192.168.1.

  # don't load printers, obviously
  load printers = no
  disable spoolss = yes

  # standalone file server, consider unknown users as guests
  # and use `root` as the default guest account.
  # in the documentation, the default guest account is `ftp` - it cannot access any folder here
  security = user
  map to guest = Bad User
  guest account = root

  # transfer speed tuning
  use sendfile = yes
  read raw = yes
  write raw = yes
  dns proxy = no

  # set the workgroup and the name of server
  workgroup = WORKGROUP
  server string = NAS

[HDD1]
  comment = HDD Partition 1
  path = /usr/local/etc/hdd/volumes/HDD1
  hide dot files=yes
  hide files=/.*/lost+found
  force create mode=0775
  force directory mode=0775

  # enable guest access to this folder
  guest ok = yes
  # the guest user can create, delete and move files
  read only = no

Do you want to add something, or point out some errors? Send me an email at winter@wintermade.it. If you liked it, why don't you buy me a Ko-fi?

"Drawing Git Graphs with Graphviz and Org-Mode"

Drawing Git Graphs with Graphviz and Org-Mode - correl.phoenixinquis.net

Today I needed something like TortoiseGit's "Revision Graphs": a simple graph that shows tags and branches of a git repository in topological order.

While searching the net, I found this cool blog post about generating this kind of graph with elisp and Graphviz. Even though the author uses Lisp, the code is very simple and approachable, and it can easily be translated to Python or other languages.
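
For example, a rough Python translation of the core idea (my own sketch, not the author's code) just walks git log and emits a Graphviz DOT graph, with an edge from each parent commit to its child:

import subprocess

# list every commit with its abbreviated hash followed by its parents' hashes
log = subprocess.run(
    ["git", "log", "--all", "--pretty=format:%h %p"],
    capture_output=True, text=True, check=True,
).stdout

lines = ["digraph git {", "  rankdir=LR;", "  node [shape=box];"]
for entry in log.splitlines():
    commit, *parents = entry.split()
    for parent in parents:
        lines.append('  "{}" -> "{}";'.format(parent, commit))
lines.append("}")

# pipe the output to `dot -Tpng -o graph.png` to render the image
print("\n".join(lines))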

How to write generic dissectors in Wireshark

Wireshark is a flexible network analyzer that can be extended via plugins and dissectors.

A dissector is a kind of plugin that lets Wireshark understand a protocol - in our case, a protocol that is only used by a certain application. There are several reasons to create your own (application-level) protocol over UDP/IP or TCP/IP, such as efficiency (sending only binary data, in an application-specific format).

Wireshark is a very helpful tool during system integration tests, or while developing a networked application. A dissector helps developers and testers check if the applications under test are sending (or receiving) data correctly - if the structure of a certain message is as defined by the protocol, if some fields have invalid values, if an application is sending more (or fewer) messages than expected in a certain timeframe.

Wireshark Generic Dissectors - a declarative approach

Wireshark Generic Dissectors (WSGD) is a plugin that lets you define a dissector for your custom protocol, in a declarative manner.

Being declarative is a cool idea - by just stating what the protocol looks like, the content of the dissector is clear even to a technical but non-developer user. Such protocol descriptions can also be used as documentation, without having to deal with different Wireshark API versions (as may happen with Lua-based dissectors). It's not all fun and games, though: this plugin has some (reasonable) limitations, such as not handling text protocols, and requiring a header common to every kind of message described in the protocol.

Let's write a generic dissector

Let's start with the Wireshark Generic Dissector file: it contains some metadata about the protocol. This metadata - the protocol name, the structure that describes the header shared by all messages, and the main message type - is what the plugin needs to parse the messages efficiently during a capture.

# file custom.wsgd

# protocol metadata
PROTONAME Custom Protocol over UDP
PROTOSHORTNAME Custom
PROTOABBREV custom

# conditions on which the dissector is applied:
# the protocol will be applied on all UDP messages with port = 8756
PARENT_SUBFIELD udp.port
PARENT_SUBFIELD_VALUES 8756

# the name of the header structure
MSG_HEADER_TYPE                    T_custom_header
# field which permits to identify the message type.
MSG_ID_FIELD_NAME                  msg_id
# the main message type - usually it is a fake message, built of one
#    of the possible messages
MSG_MAIN_TYPE                      T_custom_switch(msg_id)

# this token marks the end of the protocol description
PROTO_TYPE_DEFINITIONS

# refer to the description of the data format
include custom.fdesc;

The second file is the data format description: it describes the messages of the protocol we're writing a dissector for.

# file custom.fdesc

# here, we define an enumerated type to list the type of messages
#   defined in our protocol
enum8 T_custom_msg_type
{
    word_message   0
    number_message 1
}

# here, we define the structure of the header.
# The header (the same for each message type) must...
struct T_custom_header
{
    # ... define the order of the data
    byte_order big_endian;
    uint32 counter;
    uint8  size_after_header;
    # ... contain the field defined as MSG_ID_FIELD_NAME
    T_custom_msg_type msg_id;
}

struct T_word_message
{
    T_custom_header header;
    uint8           word_len;
    # array of characters
    char[word_len]  word;
    # "word" messages will always have some unused trailing bytes:
    #   they can be marked as raw(*) - the size is calculated at runtime
    raw(*)          spare;
}

struct T_number_message
{
    T_custom_header header;
    uint8           number;
    bool8           is_even;
}

# T_custom_switch is the main message (as defined in the protocol description)
# depending on the value of msg_id (of type T_custom_msg_type), the main message
# resolves to a single concrete message: either T_word_message or T_number_message.
switch T_custom_switch T_custom_msg_type
{
case T_custom_msg_type::word_message:   T_word_message "";
case T_custom_msg_type::number_message: T_number_message "";
}

Generating some network traffic...

To verify that the dissector we've written is correct, we are going to build a small client to send some UDP messages to a very simple server.

Let's start with the server: it just receives UDP messages on port 8756, and prints the contents of those messages.

import socketserver

class CustomHandler(socketserver.DatagramRequestHandler):
    def handle(self):
        # for a DatagramRequestHandler, self.request is (data, socket)
        data = self.request[0].strip()
        print(data)

if __name__ == "__main__":
    # listen for UDP datagrams on port 8756 and print whatever arrives
    serv = socketserver.UDPServer(("127.0.0.1", 8756), CustomHandler)
    serv.serve_forever()

The client sends some data to our server - we just need it to generate some traffic to sniff on Wireshark.

import socket
import struct
import random
import string
import time

HOST, PORT = "localhost", 8756

# SOCK_DGRAM is the socket type to use for UDP sockets
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# refer to `pydoc struct`
HEADER_STRUCT = "".join([
    ">",  # network byte order
    "L",  # counter
    "B",  # message size
    "B",  # message type (0: word, 1: number)
])

PAYLOAD_WORD_TYPE = HEADER_STRUCT + "".join([
    "B",    # word length
    "100s", # string (at most 100 characters)
])
word_struct = struct.Struct(PAYLOAD_WORD_TYPE)

PAYLOAD_NUMBER_TYPE = HEADER_STRUCT + "".join([
    "B",  # number
    "B",  # 0: even, 1: odd
])
number_struct = struct.Struct(PAYLOAD_NUMBER_TYPE)

msg_counter = 0
while True:
    msg_counter += 1

    # prepare data to send
    if random.random() < 0.70:
        num = random.choice(range(256))
        is_even = int(num % 2 == 0)  # 1 if even, matching the "is_even" field
        data = number_struct.pack(msg_counter, 2, 1, num, is_even)
    else:
        string_len = random.choice(range(100))
        the_string = bytes("".join(random.choice(string.ascii_letters+" ") for i in range(string_len)), "ascii")
        data = word_struct.pack(msg_counter, 101, 0, string_len, the_string)

    # send the message
    sock.sendto(data, (HOST, PORT))

    # wait 200ms
    time.sleep(0.2)

Set it up

Wireshark Generic Dissector is a binary plugin, distributed as a .so file - please read the installation procedure. I've summarized what I did to install the plugin and the files we've written so far:

# download the plugin - be sure it's the right one for
# the version of Wireshark installed on your system
wget http://wsgd.free.fr/300X/generic.so.ubuntu.64.300X.tar.gz
# extract the file generic.so
tar xzf ./generic.so.ubuntu.64.300X.tar.gz
# install the shared object globally by putting it in the right folder
sudo cp generic.so  /usr/lib/wireshark/plugins/3.0/epan
# install the dissector files in the right folder - the same as the shared object
sudo cp custom.wsgd /usr/lib/wireshark/plugins/3.0/epan
sudo cp custom.fdesc /usr/lib/wireshark/plugins/3.0/epan

Test drive

/images/wireshark-wsgd-with-dissector.png

As we can see from the screenshot, we are now able to inspect the content of the messages our application is sending to the server, without writing a single line of dissector code (other than our application, obviously).


If you feel that this article helped you, feel free to share it! If you have questions, ask on Twitter, or offer me a coffee to let me keep writing these notes!