Exploring a Linux-based TV box

/images/woxter-bt-download.jpg

Hello! So... a family member brought this media center home and asked me to turn it into a NAS. Here are some notes about the device and how I repurposed it.

What media center are we talking about?

This media center is a Woxter i-Cube 2400 [spanish]. Yes, it is that old. It works just fine, though: it can play images, MP3s and videos. You will have to re-encode videos that use a recent codec (such as H.264, used by YouTube videos). I didn't try Matroska videos (.mkv) - if they don't work, use ffmpeg or autohardsubber to create a single .mp4 file. I was amazed to see that Japanese characters were not replaced with squares, dots or strange accented letters - it renders Unicode characters just fine.

Like any respectable media center, it has a replaceable internal 3.5" hard drive and can read USB thumb drives. It can also access remote folders via SMB, if Windows computers on the same network expose shared folders. Note that the internal hard drive's partitions must use the FAT32 filesystem, or they won't be recognized.
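If the internal drive ever needs to be (re)formatted as FAT32, that can be done from any Linux machine. A sketch - the device name below is only an example, double-check it before erasing anything:

```shell
# DEVICE is an example - replace it with the actual partition
# of the media center's hard drive on your machine
DEVICE=/dev/sdb1

# create a FAT32 filesystem labeled HDD1
# (run as root: this ERASES the partition, so it is commented out here)
# mkfs.vfat -F 32 -n HDD1 "$DEVICE"
```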

Remote controller configuration

This media center came as-is, without its remote control. Fortunately, we found a universal remote control manufactured by SilverCrest that works with it: the codes are 6138 and AUX2. I don't really know how universal remotes work, so you may have to look up the codes for your own model online.

Shell inside

To get a remote shell, just telnet into the media center (default port) as the root user - no password needed. Once logged in, you'll find that this media center runs an old version of BusyBox, with its shell "ash".

> uname -a
Linux NAS 2.6.12.6-VENUS #4 Wed Oct 21 15:10:26 CST 2009 mips unknown

It does not have a lot of utilities: for example, the user-management applets (useradd, usermod, etc.) are not present. By default, it runs an HTTP server, a BitTorrent downloader and the media center GUI.

I'd have liked to modify some configuration files, such as inetd.conf, but the root (/) partition is squashfs, so remounting it read-write is not possible. Fortunately for us, we don't really need to - I'll explain why in a later section.

An HTTP server in my media center?!?

To be honest, the first thing I checked was whether the thing had an administration panel to manage it remotely, even without a remote control. I wasn't far from the truth: it launches the BusyBox httpd server at boot, and you can create CGI webservices. I'm not sure you really want to handle POST requests in a shell script, though. I suppose I could try to cross-compile some C or Rust code for it, but I'm too busy to try that right now. If you want to try, all CGI webservices are stored in /var/www/cgi-bin.
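To give an idea of what such a CGI webservice looks like, here is a minimal sketch (the script name is made up; on the box it would live in /var/www/cgi-bin, where busybox httpd picks up any executable file):

```shell
# write a minimal CGI script; a CGI response is just
# headers, a blank line, then the body
cat > /tmp/status.cgi <<'EOF'
#!/bin/sh
echo "Content-Type: text/plain"
echo ""
echo "Hello from the media center"
uptime
EOF
chmod +x /tmp/status.cgi

# calling the script directly shows the response httpd would serve
RESPONSE=$(/tmp/status.cgi)
```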

There is one very important CGI service the media center offers: a webpage to load torrent files, start and stop downloads, and delete everything from the server. It uses btpd, a small (and outdated) BitTorrent daemon. It works, but I'm not really sure how secure it is - not very, I suppose. You can reach that webpage at http://192.168.1.176/cgi-bin/webtorrent.cgi

Samba configuration

Let's get to the best part of the post: Samba configuration. There is a Samba daemon on board; I'm not sure whether the configuration file was already present, but I can tell you the daemon won't start at boot.

Two servers must be launched at boot time: smbd and nmbd. The first implements the SMB protocol - file listing, directory creation, and so on. The second is a NetBIOS name server (UDP ports 137-138): if it is not running, the media center will not show up in the "Network" tab of Windows Explorer.

To launch smbd and nmbd, I had to modify a service file: /usr/local/etc/rcS.

# add these lines to /usr/local/etc/rcS

# launch the samba daemon
smbd --configfile=/usr/local/etc/smb.conf
# start the netbios daemon in a different script
ash /usr/local/etc/launch_netbios.sh &

Beware: nmbd must be launched after the IP address is set, or it won't work. To ensure that, I launch nmbd after 30 seconds; that value comes from trial and error, so you may need to sleep longer. nmbd is not strictly necessary, though: you can reach the NAS directly via its IP address (look it up in your router's administration panel). smbd can be launched at any time.

# launch_netbios.sh

sleep 30
nmbd --configfile=/usr/local/etc/smb.conf
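The fixed 30-second sleep worked for me, but a more robust sketch would poll until the interface actually has an address. This assumes busybox ifconfig output with `inet addr:` lines, and the interface name eth0 is a guess:

```shell
# return success if the given ifconfig output contains an IPv4 address
has_ip() {
    echo "$1" | grep -q "inet addr:"
}

# poll the interface until it has an address, then start nmbd
wait_and_launch_nmbd() {
    while ! has_ip "$(ifconfig eth0 2>/dev/null)"; do
        sleep 2
    done
    nmbd --configfile=/usr/local/etc/smb.conf
}
```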

As said before, you cannot create new users, which means the only usable user is root. That also shapes the configuration of our Samba server: all folders must be accessed as guest. An important utility, smbpasswd, is also missing, so we cannot even set passwords for Samba users.

The commented configuration follows:

[global]
  hosts allow = 192.168.1.

  # don't load printers, obviously
  load printers = no
  disable spoolss = yes

  # standalone file server, consider unknown users as guests
  # and use `root` as the default guest account.
  # in the documentation, the default guest account is `ftp` - it cannot access any folder here
  security = user
  map to guest = Bad User
  guest account = root

  # transfer speed tuning
  use sendfile = yes
  read raw = yes
  write raw = yes
  dns proxy = no

  # set the workgroup and the name of server
  workgroup = WORKGROUP
  server string = NAS

[HDD1]
  comment = HDD Partition 1
  path = /usr/local/etc/hdd/volumes/HDD1
  hide dot files = yes
  hide files = /.*/lost+found
  force create mode = 0775
  force directory mode = 0775

  # enable guest access to this folder
  guest ok = yes
  # the guest user can create, delete and move files
  read only = no
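Once everything is running, the share can be checked from any Linux client. A quick sketch - the IP is my box's, yours will differ, and smbclient/cifs-utils on the client machine are assumptions:

```shell
# HOST is my media center's address - use yours
HOST=192.168.1.176
SHARE=HDD1
SERVICE="//$HOST/$SHARE"

# list the shares exposed by the NAS, as guest (no password)
# smbclient -N -L "//$HOST"

# or mount the share on a Linux client (as root)
# mount -t cifs "$SERVICE" /mnt/nas -o guest
```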

Do you want to add something, or point out an error? Send me an email at winter@wintermade.it. If you liked this post, why not buy me a Ko-fi?

"Drawing Git Graphs with Graphviz and Org-Mode"

Drawing Git Graphs with Graphviz and Org-Mode - correl.phoenixinquis.net

Today I needed something like TortoiseGit's "Revision Graphs": a simple graph that shows tags and branches of a git repository in topological order.

While searching the net, I found this cool blog post about generating this kind of graph with Elisp and Graphviz. Even though the author uses Lisp, the code is simple and approachable, and it can easily be translated to Python or other languages.
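The idea translates naturally: feed commit/parent pairs into a DOT digraph. A minimal Python sketch (the function name is mine) that turns a list of (commit, parents) pairs - e.g. the fields of `git log --format='%h %p'` - into Graphviz source:

```python
def commits_to_dot(commits):
    """Render (sha, [parent_sha, ...]) pairs as a Graphviz digraph.

    Edges point from parent to child, so the graph reads in
    topological order, like TortoiseGit's revision graph.
    """
    lines = ["digraph git {", "  node [shape=box];"]
    for sha, parents in commits:
        lines.append('  "%s";' % sha)
        for parent in parents:
            lines.append('  "%s" -> "%s";' % (parent, sha))
    lines.append("}")
    return "\n".join(lines)

# example: a tiny history where d4 is a merge commit
history = [("a1", []), ("b2", ["a1"]), ("c3", ["a1"]), ("d4", ["b2", "c3"])]
dot = commits_to_dot(history)
```

Piping the result to `dot -Tpng` renders the graph.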

How to write generic dissectors in Wireshark

Wireshark is a flexible network analyzer that can also be extended via plugins, or dissectors.

A dissector is a kind of plugin that lets Wireshark understand a protocol - in our case, a protocol that is only used by a certain application. There are several reasons to create your own (application-level) protocol over UDP/IP or TCP/IP, such as efficiency (by sending only binary data, formatted in a certain application-specific format).

Wireshark is a very helpful tool during system integration tests, or while developing a networked application. A dissector helps developers and testers check if the applications under test are sending (or receiving) data correctly - if the structure of a certain message is as defined by the protocol, if some fields have invalid values, if an application is sending more (or fewer) messages than expected in a certain timeframe.

Wireshark Generic Dissectors - a declarative approach

Wireshark Generic Dissectors (WSGD) is a plugin that lets you define a dissector for your custom protocol, in a declarative manner.

Being declarative is a cool idea: just by stating what the protocol looks like, the content of the dissector is clear to a technical, but non-developer, user. Such protocol descriptions can also serve as documentation, without having to track different Wireshark API versions (as may happen with Lua-based dissectors). It's not all fun and games, though: this plugin has some (reasonable) limitations, such as not handling text protocols, and requiring a header common to every kind of message in the protocol.

Let's write a generic dissector

Let's start with the Wireshark Generic Dissector file: it contains some metadata about the protocol. This metadata - the protocol name, the structure that sketches the header common to all messages, and the main message type - is needed for the plugin to parse messages efficiently during a capture.

# file custom.wsgd

# protocol metadata
PROTONAME Custom Protocol over UDP
PROTOSHORTNAME Custom
PROTOABBREV custom

# conditions on which the dissector is applied:
# the protocol will be applied on all UDP messages with port = 8756
PARENT_SUBFIELD udp.port
PARENT_SUBFIELD_VALUES 8756

# the name of the header structure
MSG_HEADER_TYPE                    T_custom_header
# the field that identifies the message type
MSG_ID_FIELD_NAME                  msg_id
# the main message type - usually a fake message that resolves to one
#    of the possible messages
MSG_MAIN_TYPE                      T_custom_switch(msg_id)

# this token marks the end of the protocol description
PROTO_TYPE_DEFINITIONS

# refer to the description of the data format
include custom.fdesc;

The second file is the data format description: it describes the messages of the protocol we're writing a dissector for.

# file custom.fdesc

# here, we define an enumerated type to list the type of messages
#   defined in our protocol
enum8 T_custom_msg_type
{
    word_message   0
    number_message 1
}

# here, we define the structure of the header.
# The header (the same for each message type) must...
struct T_custom_header
{
    # ... define the order of the data
    byte_order big_endian;
    uint32 counter;
    uint8  size_after_header;
    # ... contain the field defined as MSG_ID_FIELD_NAME
    T_custom_msg_type msg_id;
}

struct T_word_message
{
    T_custom_header header;
    uint8           word_len;
    # array of characters
    char[word_len]  word;
    # "word" messages will always have some unused trailing bytes:
    #   they can be marked as raw(*) - the size is calculated at runtime
    raw(*)          spare;
}

struct T_number_message
{
    T_custom_header header;
    uint8           number;
    bool8           is_even;
}

# T_custom_switch is the main message type (as declared in the protocol description):
# according to the parameter msg_id (of type T_custom_msg_type), the main message
# resolves to a single message: either T_word_message or T_number_message.
switch T_custom_switch T_custom_msg_type
{
case T_custom_msg_type::word_message:   T_word_message "";
case T_custom_msg_type::number_message: T_number_message "";
}

Generating some network traffic...

To verify that the dissector we've written is correct, we are going to build a small client to send some UDP messages to a very simple server.

Let's start with the server: it just receives UDP messages on port 8756, and prints the contents of those messages.

import socketserver

class CustomHandler(socketserver.DatagramRequestHandler):
    def handle(self):
        # self.request[0] holds the raw datagram bytes; don't strip()
        # a binary payload, or trailing bytes may be lost
        data = self.request[0]
        print(data)

if __name__ == "__main__":
    serv = socketserver.UDPServer(("127.0.0.1", 8756), CustomHandler)
    serv.serve_forever()
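The server above only prints raw bytes. To sanity-check what arrives against the header defined in custom.fdesc, the same struct layout can be decoded on the server side - a sketch, the helper name is mine:

```python
import struct

# big-endian: counter (uint32), size_after_header (uint8), msg_id (uint8),
# matching T_custom_header in custom.fdesc
HEADER = struct.Struct(">LBB")

def parse_header(data):
    """Decode the 6-byte header; returns (counter, size_after_header, msg_id)."""
    if len(data) < HEADER.size:
        raise ValueError("datagram shorter than the header")
    return HEADER.unpack_from(data, 0)
```

Calling `parse_header` inside `handle()` would let the server log each message's counter and type instead of a raw byte dump.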

The client sends some data to our server - we just need it to generate some traffic to sniff on Wireshark.

import socket
import struct
import random
import string
import time

HOST, PORT = "localhost", 8756

# SOCK_DGRAM is the socket type to use for UDP sockets
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# refer to `pydoc struct`
HEADER_STRUCT = "".join([
    ">",  # network byte order
    "L",  # counter
    "B",  # message size
    "B",  # message type (0: word, 1: number)
])

PAYLOAD_WORD_TYPE = HEADER_STRUCT + "".join([
    "B",    # word length
    "100s", # string (at most 100 characters)
])
word_struct = struct.Struct(PAYLOAD_WORD_TYPE)

PAYLOAD_NUMBER_TYPE = HEADER_STRUCT + "".join([
    "B",  # number
    "B",  # is_even flag: 1 if even, 0 if odd
])
number_struct = struct.Struct(PAYLOAD_NUMBER_TYPE)

msg_counter = 0
while True:
    msg_counter += 1

    # prepare data to send
    if random.random() < 0.70:
        num = random.choice(range(256))
        # the fdesc field is named is_even, so send 1 for even numbers
        is_even = int(num % 2 == 0)
        data = number_struct.pack(msg_counter, 2, 1, num, is_even)
    else:
        string_len = random.choice(range(100))
        the_string = bytes("".join(random.choice(string.ascii_letters+" ") for i in range(string_len)), "ascii")
        data = word_struct.pack(msg_counter, 101, 0, string_len, the_string)

    # send the message
    sock.sendto(data, (HOST, PORT))

    # wait 200ms
    time.sleep(0.2)

Set it up

Wireshark Generic Dissector is a binary plugin, distributed as a .so file - please read the installation procedure. Here is a summary of what I did to install the plugin and the files we've written so far:

# download the plugin - be sure it's the right one for
# the version of Wireshark installed on your system
wget http://wsgd.free.fr/300X/generic.so.ubuntu.64.300X.tar.gz
# extract the file generic.so
tar -xzf ./generic.so.ubuntu.64.300X.tar.gz
# install the shared object globally by putting it in the right folder
sudo cp generic.so /usr/lib/wireshark/plugins/3.0/epan
# install the dissector files in the same folder as the shared object
sudo cp custom.wsgd /usr/lib/wireshark/plugins/3.0/epan
sudo cp custom.fdesc /usr/lib/wireshark/plugins/3.0/epan

Test drive

/images/wireshark-wsgd-with-dissector.png

As we can see from the screenshot, we can now inspect the content of the messages our application sends to the server, without having written a single line of dissector code - only declarative descriptions (and our application, obviously).

If this article helped you, feel free to share it! If you have questions, ask on Twitter, or offer me a coffee so I can keep writing these notes!

Stadia: the future of gaming?

Google unveiled its new game streaming service: Stadia.

Stadia, whose tagline is "gather around", recognizes that there are two "disconnected universes": streamers - people who play games for their audience - and viewers, who may not be able to play the same games, or who simply enjoy watching someone else's performance.

The company tries to combine both worlds by creating a game streaming service that is also integrated with YouTube.

Interesting parts

Other websites have already covered the conference, so I will just write down what I found interesting or exciting.

You can access the platform by just pressing a "Play" button at the end of a YouTube gaming video - if you're using Google Chrome, obviously.

AMD designed a custom GPU just for Stadia. At 10.7 teraflops, it is more powerful than the GPUs in current-gen consoles. Developers can also use more than one GPU per game, to make games even more detailed, transparently to the user.

Stadia promises an "up to 4K 60fps" experience for the player, and all play sessions will be streamed on YouTube. The special "share" button on the custom controller should let creators (or random players) share their session and create a "state share" link, letting other people play the same portion of the creator's gameplay. Creators can also use Crowd Play to let their YouTube viewers join their games and interact with them more closely.

Every game on Stadia will be playable with existing controllers and on every device the user already owns: since they just interact with a stream, they won't need a powerful device to use the service.

This new platform finally explains two major features of Google products I never really understood: YouTube's videogame channels (containers that automatically gather and categorize videos about specific games - see Sekiro's automatically generated channel as an example) and the WebUSB standard, which is only implemented by Chrome.

What's missing?

Stadia is not the first game streaming service on the market, and it won't be the last. Hopefully it won't fail as hard as OnLive, but there are several issues that should be resolved, or at least mitigated, before the launch (in the US, UK, Canada and most of Europe).

Let's start with the one I find most pressing: PS Now launched a week ago in Italy, with mixed results. Dadobax (an Italian videogame YouTuber) experienced significant input lag while testing Bloodborne on a 100Mbps fiber connection. Will Stadia suffer from the same problem? During the presentation, the Stadia representative said there will be a direct link between ISPs and the Stadia data centers, but I won't believe everything works fine until I can try it myself. Other commenters note that, even with no input lag, there is a risk of video artifacts due to stream compression.

Another issue is access to the service: we don't know how much it will cost, nor which titles will be there. At least we know that Doom Eternal, Assassin's Creed Odyssey, NBA 2K19 and Shadow of the Tomb Raider will be playable on the platform. We also don't know whether users must buy titles again on the Stadia store even if they already own them on other stores - such as Odyssey on the UPlay store.

I'm also worried that YouTube is going to fill up with digital waste: no one is interested in my gameplay (I have to admit I'm bad at videogames), so that footage will never be watched, but it will still take up space on some hard drive in the cloud. I hope the service won't store every session ever played on Stadia.

Did you like the conference? Are you hyped? Are you critical? Let me know on Twitter!

Your blog should be rendered on the server

"You probably don't need a single-page application" - Plausible

This blog post from Plausible reminded me of a pet peeve of mine: blogs built as single-page applications. I don't like being greeted by a blank page because I don't want to execute whatever code you send to my browser.

There are several use cases for single-page applications, but blogs are not one of them, for several reasons. Some are already explained in the article, and I'm going to reiterate them, but I also want to add a different perspective to Plausible's short essay.

I just want to read content

A blog just contains text and some media (images, audio or video). We don't need to download some JavaScript, execute it to request the real content, and only then show it to the user - why can't I just download the content?

Another advantage is that you don't need a lot of work to let search engines index your work. Think it's a non-issue? Hulu would like to have a word with you. Due to problems with their client-side-only rendering, Google could no longer index their pages, destroying Hulu's previous work on their targeted keywords.

I don't care about the tool you use to generate your blog - whether you build it on your laptop and push it via FTP, or use Wordpress to write and publish your new essays. All those workflows are valid, because they let your readers (me!) just get the content they want to read.

If you really need to build your entire website with JavaScript, then please, I beg you: set up Server-Side Rendering (SSR) to create a static "base" version of the website. Most libraries/frameworks support SSR, so study your framework's documentation and set it up.

Javascript should enhance the experience

I really love New York Times-style interactive data visualizations: they are part of the content, so it's OK to enable JavaScript to enjoy those applets. Even New York Times articles, though, are just text _enhanced_ by the interactive applets - you still get the text. It's OK to require JavaScript to load comments - especially if you use third-party services such as Disqus - because, y'know, comments are not a core part of the experience.

So, please, stop forcing JavaScript on your blog readers. Just let me read your content.


I hope you liked this new issue of Interesting Links, a column where I highlight interesting articles with my own comments and thoughts. If you appreciated this small rant, tell me on Twitter, or support me on Ko-fi.