Building a Raspberry Pi Compute Module 4 NAS

  • May 13, 2022
  • Brian Tarricone

A few months ago, my venerable NETGEAR ReadyNAS NV+ died. Not sure what happened; it just shut off one day and refused to turn back on again.

For a while now, I’ve wanted to build my own NAS. I don’t like proprietary hardware that depends on a manufacturer for OS updates. But I also didn’t want to build a large, unsightly server, or burn more electricity on something that would be running 24/7. Granted, my NAS uses spinning-rust hard disk drives, which likely draw much more power than most low-power mainboards and CPUs, so maybe the power consumption concern is a bit overblown.

The 4th generation of the Raspberry Pi hardware has a single PCIe lane. In the retail model, this is wired to a USB3 chip. The Compute Module 4, however, allows board designers to do whatever they want with it. I’d hoped for a while that someone would build a board with at least four SATA ports. I was initially excited about a group called Wire Trustee, but they abandoned their hardware plans to instead build a VPN.

Then came Axzez, with their Interceptor carrier board, which was exactly what I was looking for, with five SATA ports.

Performance-wise, there are certainly better SBCs for this task. But I’m comfortable with the Raspberry Pi, and see it as a stable, long-term platform to build on.

What follows is a simple description of my build, and how it worked out.

Parts List

  • Axzez Interceptor carrier board
  • Axzez Interceptor board adapter (adapts the board for a mini-ITX form factor)
  • CM4 module (I ended up with a 4GB RAM/16GB eMMC/WiFi module, as that was all I could find available without having to wait the better part of a year for it to ship; ideally I wanted an 8GB RAM model, and didn’t care if it had WiFi or not)
  • Fractal Design Node 304 chassis
  • Seasonic SSP-300SFG power supply (SFX form factor)
  • Mains cable for the power supply (Seasonic doesn’t bundle one)
  • 4-pin Molex to dual 15-pin SATA power cable (if you have more than three hard drives)
  • SATA data cables (one for each drive)
  • CR2032 battery (the Interceptor board has a battery-backed clock)
  • Hard drives (I already had these from my previous NAS)

Verification and OS Setup

To start, I wanted to ensure the parts actually worked. I downloaded the latest 64-bit Raspberry Pi OS (minimal) image, and wrote it to a USB flash drive. Axzez provides their own OS image, which includes a patched kernel with support for the included 4-port ethernet switch. I don’t need this, so I preferred to stick with a more stock OS.

I mounted the boot partition, and configured things for headless operation, including connecting to a WiFi network (I wasn’t near an ethernet cable while doing this), enabling SSH, and creating a default user. I also added enable_uart=1 to /boot/config.txt so I can hook up the serial console; this could be useful if the board doesn’t boot for some reason.
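
For reference, the headless tweaks boil down to a few files on the boot partition. This is only a sketch: the mount point, username, and WiFi details are placeholders, and the userconf.txt mechanism is what recent Raspberry Pi OS images use for creating the default user.

```shell
# Sketch of the headless setup. BOOT, the username, and the WiFi
# credentials are placeholders; on the real card, BOOT is wherever the
# boot partition is mounted. Falls back to a temp dir for dry runs.
BOOT=${BOOT:-/mnt/boot}
[ -d "$BOOT" ] || BOOT=$(mktemp -d)

# An empty file named "ssh" enables the SSH server on first boot
touch "$BOOT/ssh"

# Default user; the second field is a crypt(3) hash, e.g. from `openssl passwd -6`
echo 'pi:REPLACE-WITH-PASSWORD-HASH' > "$BOOT/userconf.txt"

# WiFi credentials, picked up on first boot
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="my-network"
    psk="my-passphrase"
}
EOF

# Serial console, in case the board doesn't boot
echo 'enable_uart=1' >> "$BOOT/config.txt"
```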

I pulled the cover off the chassis, unpacked the power supply, and seated the CM4 module into the carrier board. I plugged the ATX power connector into the board, attached the power LED and power switch wires from the chassis to the board, and plugged the USB flash drive into one of the board’s USB ports. I plugged in the power supply and flipped the switch.

Everything turned on by itself, with the power LED glowing blue. I waited a few minutes, tried to ssh to raspberrypi.local, and was pleasantly surprised to be greeted with a password prompt. Signing in presented me with a pretty standard-looking Raspberry Pi OS system.

Booting from USB is fine, but I wanted the OS on the internal eMMC storage. So I copied the OS image over from my laptop, and then used dd to write it to /dev/mmcblk0. I repeated the same steps as before to enable headless operation. In addition, since I’m not using Axzez’s OS image, there are a few settings I needed to add to /boot/config.txt:

enable_uart=1
dtparam=i2c_vc=on
dtoverlay=i2c-rtc,rv3028,i2c0,addr=0x52

The first isn’t required, but could be useful. The second enables the VideoCore’s I2C bus (not sure if this is required, but the Axzez OS does this). The third loads an overlay that sets up the RTC (real-time clock) chip included on the board.

After pulling the USB flash drive, I power-cycled the board. Again I waited, and ssh’ed in. A quick grep through dmesg confirmed that the RTC setup worked:

rtc-rv3028 0-0052: registered as rtc0
rtc-rv3028 0-0052: hctosys: unable to read the hardware clock

(The error is because the clock has never been written to; after the next clean reboot it should hold the time recorded at shutdown. If you’re impatient, hwclock --systohc should seed it from the NTP-synced system time right away.)

While I was here, with everything disassembled and easily accessible, I figured it’d be a good idea to update the OS and install most of the software I’ll need later. So:

sudo apt update
sudo apt dist-upgrade
sudo apt install neovim mdadm lvm2 smartmontools ifplugd nfs-kernel-server
sudo apt purge vim-tiny

In my case that didn’t upgrade too much, as the OS image I had downloaded was fairly recent. But I rebooted at this point, just to ensure things still came up properly after installing new packages.

I use neovim as my primary text editor, so it made sense to install it here. mdadm and lvm2 are needed for managing my hard disk arrays. smartmontools monitors disk drive self-reported health stats. ifplugd is a system daemon that automatically configures wired network interfaces when ethernet cables are plugged in (or unplugged). nfs-kernel-server provides an NFS server, which is how I primarily access files on my NAS across the network. You might also want to install samba (another file sharing protocol used by Windows computers).
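
As a concrete example of what nfs-kernel-server needs, a single /etc/exports entry is enough to share a mounted array on the LAN. The path and subnet below are made up for illustration; run sudo exportfs -ra after editing.

```
/mnt/array  192.168.1.0/24(rw,sync,no_subtree_check)
```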

I also modified /etc/hostname to set the hostname I wanted (and changed the raspberrypi entry in /etc/hosts), and added an IP reservation in my router for the ethernet port’s MAC address to ensure it always gets the same IP address (more or less required for NFS mounts if I don’t want it to be a pain).
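
The rename boils down to two edits, sketched here against scratch copies so it can be dry-run. The new name is a placeholder; on the NAS itself the targets are /etc/hostname and /etc/hosts, edited as root.

```shell
# Placeholder hostname; on the real system, edit /etc/hostname and
# /etc/hosts in place (as root) instead of these scratch copies.
NEWNAME=nas
workdir=$(mktemp -d)
printf 'raspberrypi\n' > "$workdir/hostname"
printf '127.0.1.1\traspberrypi\n' > "$workdir/hosts"

echo "$NEWNAME" > "$workdir/hostname"               # set the new hostname
sed -i "s/raspberrypi/$NEWNAME/g" "$workdir/hosts"  # keep hosts in sync
```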

This is also a good time to run dpkg-reconfigure tzdata to set the desired time zone, and dpkg-reconfigure locales to set the correct locale.

At this point I noticed that occasionally the board wouldn’t reboot cleanly (via sudo reboot). Sometimes the kernel would panic on reboot (I discovered this by watching output on the serial console), requiring me to power-cycle the power supply to continue. Annoying, but not a showstopper.

Hardware Assembly

Now that I’d verified everything works, it was time to put the hardware together. The manual supplied with the chassis recommended removing the disk brackets before starting, and then installing the mainboard first, followed by the power supply, and then finally the disks.

Mainboard

The mainboard was relatively straightforward to install. First screw the Interceptor board onto the board adapter, with the ports flush against the side of the adapter. Then attach the board adapter to the chassis itself, after screwing the four metal standoffs into the chassis. Orient the board adapter so the ports face out the back of the chassis. Don’t attach the board adapter to the chassis first, or it will be really difficult to screw the board into the adapter later.

If you disconnected them, now’s a good time to reconnect the wires for the power switch and power LED.

Also insert the coin-sized battery into the mainboard.

Power Supply

Next comes the power supply. The power supply I chose came with an adapter bracket to adapt the SFX form factor to a case accepting an ATX power supply. I chose SFX because they are often smaller and quieter. After attaching the bracket, mount the power supply in the chassis, with the power supply’s fan facing downward toward the opening in the bottom of the chassis.

The chassis has a power cable extension that you can now plug into the power supply.

From here you can connect the 24-pin ATX power connector to the mainboard.

Now that they won’t get in the way, you can connect the three chassis fans to the fan connectors on the mainboard.

Attach the 4-pin Molex to dual SATA power adapter to the appropriate power supply lead. This power supply only gives us three SATA power connectors; if you have four or five drives, you’ll need the extra adapter.

Drives

Finally it’s time to connect the hard drives. The chassis will hold up to six drives, though the mainboard only has ports for five of them. I only had four drives at the time of my build, so I plugged four SATA data cables into the first four SATA ports on the mainboard.

The drives themselves were fairly straightforward to attach to the drive rails. I had to move one of the rubber grommets to one of the extra holes. I was only able to line up three screw holes on the drives I have, but that’s fine.

After placing the drive rails back into the chassis, I routed power and data cables to each, and tried to bundle the excess cable as well as I could.

Wrapping it Up

Before closing up the chassis, I plugged in power and ethernet, and flipped the power switch on the side of the power supply. To my relief, everything came up properly, as expected, including all three fans.

I used mdadm and vgchange to manually set up and mount my drive array to ensure everything was working properly (later I’d write a script to do it on boot).
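
The manual sequence looked roughly like this. It’s a sketch: it assumes the arrays are listed in /etc/mdadm/mdadm.conf, and the volume group name, logical volume name, and mount point are placeholders that will differ on your setup.

```shell
# Sketch of the assemble-and-mount sequence; run as root on the NAS.
# The VG name (vg0), LV name (data), and mount point are placeholders.
assemble_and_mount() {
    mdadm --assemble --scan         # assemble both RAID5 arrays
    vgchange -ay vg0                # activate the volume group spanning them
    mount /dev/vg0/data /mnt/array  # mount the joined logical volume
}
# Call assemble_and_mount (as root) once the drives are connected.
```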

I deleted /etc/wpa_supplicant/wpa_supplicant.conf to disable WiFi, since ethernet was working fine.

Finally, I closed up the chassis and replaced the thumb screws.

Benchmarking

I feel like I’d be somewhat remiss if I didn’t post some performance numbers. Take this with many many many grains of salt. There are some oddities in my setup: my RAID5 array is actually two RAID5 arrays, joined into one logical partition using LVM2. I did a lazy dd test for this, to an ext4 filesystem on the RAID5+LVM2 volume, like so:

dd if=/dev/zero of=bench bs=4M count=10000 status=progress oflag=direct

This gave me sustained write speeds of 137 MB/s.

When reading it back, via:

dd if=bench of=/dev/null bs=4M status=progress iflag=direct

… I get sustained read speeds of 394 MB/s.

I know this isn’t particularly scientific, and I’m sure there are flaws in this benchmarking process. I don’t particularly care; I’m just trying to get a vague idea as to whether or not things are working reasonably well. These speeds are more than sufficient for my purposes. I did not test any read/write scenarios more complicated than this, and I didn’t do any performance tuning whatsoever.

Final Thoughts

Overall I’m really happy with how this build turned out. Right now I am just manually managing the storage and network shares on the box, though I have considered installing something like Open Media Vault.

There are a few minor annoyances and omissions that would be nice for Axzez to implement in a future board revision:

  1. The fans don’t appear to be controllable or monitorable by the OS. The board supports three-wire fans, which should at least allow us to monitor fan speed (helpful both for control feedback as well as for alerting if a fan has died). Controlling the speed could be done using the CM4’s built-in PWM hardware (maybe? though I believe there are only two channels), or by adding an external fan controller chip.
  2. There isn’t a way to flash the CM4 bootloader or modify the CM4’s boot settings. With a CM4 I/O devkit, you can place a jumper on the board to put it into usbboot mode, which allows you to do all this from a USB-connected computer, but the Interceptor board doesn’t provide this ability.
  3. I didn’t need to make use of the 4-port ethernet switch, but it would be great if Axzez could work to get the drivers for it upstreamed so it doesn’t require a custom kernel. At the very least, a DKMS package would be sufficient. To their credit, they set things up so single-port operation works just fine if you don’t use a patched kernel, which is fine for my purposes.
  4. It would be a nice addition to expose another pair of pins so I could connect the chassis’ HDD activity LED as well.
  5. Documenting how to enable the RTC chip on a vanilla Raspberry Pi OS would be nice (it wasn’t hard to figure out; I just read through their kernel patch).

While the Interceptor certainly isn’t meant to be a general-purpose board, exposing some of the unused GPIO pins could also be really useful (for example, I could probably build my own fan controller daughter board, or set up the HDD activity LED trigger myself). I’ve heard that the two 40-pin FFC connectors expose an I2C and two GPIOs each, but it’s not the most convenient thing to work with. (Having said that, I’ll take what I can get!)

One thing that I am incredibly happy with is that the board works with the stock Raspberry Pi OS. The last thing I want is to be stuck on an old version of the OS after the manufacturer loses interest in supporting it, which is pretty much inevitable for most products, and is the situation I ended up in with my old retail NAS. With the exception of the ethernet switch, everything on this board has upstream support, and the switch gracefully degrades to a single usable ethernet port when booted with the stock Raspberry Pi OS kernel.

As for the rest of the hardware:

  1. The chassis I chose doesn’t make it possible to access the drives without opening it up. I could have chosen a chassis that has hot-swap bays, but I personally don’t need this, so I didn’t bother. I’m actually not sure if the SATA controller on the board supports hot-swap.
  2. This chassis has USB3 ports on the front. Because the CM4’s single PCIe lane is taken up by the SATA ports and ethernet switch, it does not have USB3 capability. Presumably I could cut the USB3 connector off the chassis’ cable and use the ports as USB2 ports (the main board has a pin header for extra USB2 ports), but this isn’t a priority for me to set up.
  3. There’s currently no back panel (I/O shield) filling the gap around the ports at the rear of the chassis. It’s not that big a deal, though I’m afraid of dust getting drawn in through there.
  4. I haven’t measured the power consumption of the power supply I chose. I think it should be reasonably efficient at the sub-100W power draw expected while idle, but it’s likely not as efficient as a smaller supply would be. I debated going the “pico PSU” route, but decided to stick with something more traditional for now.

I’m very pleased that Axzez decided to design and build this carrier board. I’ve been waiting for something like this ever since the CM4 was released, and the board did not disappoint.

Running a Modern Debian on the Netgear ReadyNAS NV+ v2

  • April 22, 2020
  • Brian Tarricone

I have an old ReadyNAS NV+ v2 (not to be confused with the SPARC-based v1) that has been my workhorse home NAS for a good 7 years now (previously I had an older Netgear NAS that suffered a hardware failure and needed to be replaced). It’s a great piece of hardware, but the software hasn’t seen a major update in many years, and the OS is based on Debian squeeze, which is several releases behind current stable, and is no longer maintained.

I recently tried to visit the NAS’s admin web UI, only to see a recently-updated Firefox throw up a big scary security warning, telling me that TLS 1.0 and 1.1 are disabled in the browser, but the server I’m trying to connect to (the NAS) doesn’t support TLS 1.2 or newer. It allowed me to bypass the warning for now, but eventually I expect that older TLS versions (just like SSLv3 before it) will be removed entirely.

No problem, I thought, as I ssh’ed into the NAS. As I expected, the issue was that Apache was linked against an old version of OpenSSL which lacks TLS 1.2 support. To avoid disturbing the system itself too much, I figured I’d use debootstrap to create a chroot with a more modern Debian version, and run the webserver (or a proxy) from there.

But that didn’t work out. The glibc versions in Debian testing, buster, stretch, and even jessie all require a newer kernel than what’s running on the ReadyNAS (2.6.31.8). I looked at some other OSes, like Alpine (which uses musl and wouldn’t have the same issue) and NixOS (which I expected might just be more flexible), but neither of them supports the armv5-based CPU in the NV+ v2.

After a bunch of dead ends, I finally ended up natively building a newer version of OpenSSL, as well as HAProxy, on the NAS itself, and installed it to a private prefix. That ended up working, but was largely unsatisfying. Even before this, I’d lamented being unable to run newer versions of some software (offlineimap, for one thing) on the NAS.

So I started poking around to see if anyone had done a large, unsupported upgrade on this hardware. I didn’t find any concrete instructions, but I did find some encouraging information. I found someone who had cataloged all the hardware of the device, had upstreamed support for the board into the Linux kernel, and had noted that nearly all the hardware (save the front-panel LCD) is now supported by the upstream kernel. I also found a reference on Debian’s ARM EABI port page noting that the ReadyNAS NV+ v2 is specifically supported by the upstream kernel.

WARNING

First off, a big disclaimer:

This is inherently risky, and you can very easily brick your NAS to the point where it will be incredibly difficult to recover it. I recommend you open it up and make sure you can get the serial console working before proceeding with any of these steps (there are instructions in one of the above links).

Also remember that, while it is possible to connect your NAS’s drives to a computer running Linux in order to assemble the RAID/LVM array to pull data off, you should have a backup of all data to be safe.

Kernel

The first step is to get a more modern kernel running on the box. Unfortunately the ReadyNAS’s stock kernel is compiled without kexec support, so we can’t experiment without overwriting the one-and-only copy of the bootable kernel on the ReadyNAS’s flash chip.

TODO: see if the stock u-boot supports USB booting and how to get that to work.

The flash chip is partitioned like so:

dev:    size   erasesize  name
mtd0: 00180000 00020000 "u-boot"
mtd1: 00020000 00020000 "u-boot-env"
mtd2: 00600000 00020000 "uImage"
mtd3: 01000000 00020000 "minirootfs"
mtd4: 06800000 00020000 "jffs2"

The first partition contains the u-boot bootloader, with the second holding environment variables that control u-boot’s operation. The third (uImage) holds the kernel, and the fourth (minirootfs) holds a small initial RAM disk that kicks off the boot process before turning control over to the main rootfs. The fifth partition, jffs2, contains

  1. First, back up all flash partitions on the NAS: for i in /dev/mtd*; do sudo dd if=$i of=$(basename $i); done. Be sure to copy the resulting files off to another computer for safekeeping.
  2. Install the cross-compiler and tools: apt install gcc-arm-linux-gnueabi u-boot-tools screen lrzsz
  3. Grab a kernel of your choice (I used 5.6.6; here’s my .config), and build it with: make zImage ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-
  4. Make a u-boot image for the kernel: mkimage -A arm -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n Linux-5.6.6 -d /path/to/linux/arch/arm/boot/zImage kernel.img
  5. Extract the initrd file system image (from the backup of mtd3 created earlier): dd if=mtd3 bs=64 skip=1 | gunzip -c | sudo cpio -i (strip the 64-byte uImage header, gunzip, and extract the cpio archive).
  6. Patch init so it doesn’t try to load Netgear’s proprietary kernel modules (which are no longer needed): sed -i -e 's/insmod/ls /g' init
  7. Repackage the initrd: find . -type f >name-list && sudo cpio -o <name-list | gzip -9 -c >initrd.gz && rm name-list
  8. Make a u-boot image for the initrd: mkimage -A arm -O linux -T ramdisk -C gzip -a 0x00000000 -e 0x00000000 -n initrd -d initrd.gz initrd.img
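
Steps 5 through 7 are the fiddly part, and can be wrapped into a small script. This is a sketch: it assumes the mtd3 backup is in the current directory, and it skips sudo, which you’d need for an initrd containing device nodes.

```shell
# Extract the initrd from the mtd3 backup, patch init so it skips
# Netgear's proprietary modules, and repackage it. Run from the
# directory holding the mtd3 backup; root is needed if the archive
# contains device nodes.
repack_initrd() {
    mkdir -p initrd-root
    # Strip the 64-byte uImage header, decompress, unpack the cpio archive
    dd if=mtd3 bs=64 skip=1 2>/dev/null | gunzip -c | (cd initrd-root && cpio -id)
    # Neuter the insmod calls for the no-longer-needed modules
    sed -i -e 's/insmod/ls /g' initrd-root/init
    # Repack into a gzip'ed cpio archive, ready for mkimage (step 8)
    (cd initrd-root && find . -mindepth 1 | cpio -o -H newc) | gzip -9 -c > initrd.gz
}

# Only run if the mtd3 backup is actually here
if [ -f mtd3 ]; then
    repack_initrd
fi
```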

Unlimited Vacation

  • January 28, 2018
  • Brian Tarricone

I usually don’t spend much time on LinkedIn, but I signed in to check on something, and noticed a post near the top of my feed that illustrated a misconception that bothered me. The CEO of a startup posted saying that he didn’t like the concept of “unlimited vacation”1 because (he believes) what ends up happening is that high performers suffer because they don’t take enough vacation, and under-performers get away with taking a ton of vacation.

Another poster commented that they didn’t like these types of policies because they’re just a way of allowing companies to get around state requirements to carry unused vacation days as a liability, and pay out those days when an employee leaves. (This is true, at least in California.)

I reject these objections, and submit that an unlimited vacation policy – assuming employees actually do have the latitude to make use of it generously – is a net good for employees, as long as the company creates a healthy culture around it.

First up, the financial argument: frankly, I don’t care whether it makes the company’s financials easier or not. If an unlimited vacation policy does, that’s great for the company, but whether or not the policy (and its implementation) is actually good for employees is unrelated to that.

Under-performers: if they’re taking a lot of vacation and are not performing well, this is a failure of management. Why is the employee’s manager approving so much vacation when there is a performance problem? Why is this employee not on a PIP, or, failing that, why has the employee not been fired?

High performers: this is a bit more tricky, because you don’t want to demoralize or send mixed or confusing messages to high performers. One option is to enforce a minimum vacation policy on top of the unlimited maximum. Enforcement can range from simply deactivating an employee’s work accounts for a period of time to get them to take time off, all the way up to penalizing them at review time (lower raise or equity grant, delayed promotion, etc.). Better would be to simply promote a culture of healthy time-off practices. Employees will implicitly look to their managers for cues on what they should be doing in these instances, and if they see their manager taking a generous (but not abusive!) amount of time off, they’ll tend to do the same. This needs to be done at every level: the CEO needs to take sufficient vacations just as much as the rest of management and the individual contributors do.

At this point in my life, I would think of an accrued/fixed vacation plan as a big negative if I were considering an offer from a new company.

  1. For those of you perhaps not familiar with the concept, unlimited vacation refers to a policy where employees do not accrue vacation days based on time worked, or have any other kind of fixed number of vacation days (though the company often will close down for some number of public holidays). Employees are expected to take as much vacation as they’d like, with their manager’s approval.

Relaunch

  • December 25, 2013
  • Brian Tarricone

I finally decided to redo my website, and ditch WordPress in favor of a static site generator (I decided to use Jekyll). I also took the opportunity to simplify the design.

There’s a bit more to do, and likely some broken links here and there (not to mention some broken WP-to-markdown conversion), but it’s time I finally got this done.

In theory, I’ll start blogging again as well, but… we’ll see.


Google TV and Native Libraries

  • October 5, 2012
  • Brian Tarricone

The Google TV runs a fairly unusual flavor of Android (at least the 2nd-gen ARM-based devices). I have a Sony Internet Player (not the Blu-Ray version), so what I'm about to write applies to that device, but maybe not any other, though it stands to reason that the other ARM-based GTVs are the same.

Phone-and-tablet Android doesn't look much like a Linux desktop or server system. It uses the Linux kernel, to be sure, but a lot of the userspace libraries are custom. It doesn't even use glibc; instead it uses a C library that Google wrote, called Bionic. It's fairly stripped down and lightweight, and while it implements most things you might need out of a standard libc, it does not pretend to be POSIX compliant.

From some simple investigation, I've learned that the Sony GTV is running EGlibc 2.12.2, and probably a mostly-unmodified version of it. Someone with an @google.com email address stated that the reason for this was that they couldn't get Chrome running against the Honeycomb version of Bionic.

Due to this, a Native Development Kit (NDK) is not yet available for the GTV. So the question remains: can we hack one together that works? The answer is... sorta.

With this knowledge in hand, I built a relatively standard arm-linux-gnueabi toolchain using crosstool-ng. Then I 'adb pull'-ed the contents of /system/lib from my GTV and merged them with the new toolchain's sysroot, copied some headers out of a stock NDK, and ended up with a sysroot that approximates what you'd find in platforms/ in a stock NDK, just without Bionic, and with EGlibc.

I didn't get to modifying the NDK's build system (it would need to be changed to find the new toolchain), so I built my native library manually, and got a simple "hello world" type app with a native lib. (It just calls a native method that returns a string, and displays the string on a label.)

One annoying thing is that the ABI string in the Sony GTV is set to "none", so you have to unpack the APK, rename lib/armeabi-v7a/ to lib/none/, and repack and resign it. All of this means that this would be strictly hobbyist for now: no chance that you could distribute something in the Play Store. Not only does Google have to release an officially-working NDK, but they need to decide on an ABI string, and get Sony (etc.) to push updates out to their customers that update build.prop on the devices with the new ABI string.

There's also the possibility that Google doesn't want to create and officially support that much native drift between phone-and-tablet Android and GTV Android, and will wait until manufacturers are running a more-stock Android 4.x on GTV (that uses the 4.x version of Bionic) before releasing an NDK that works... in which case we're at the mercy of Sony for updates, unless XDA or CyanogenMod wants to take a crack at it. My money's on this scenario, unfortunately.

One of the main things people have been screaming for is a version of XBMC that runs on GTV. I have been able to get it to build using my hacked-together toolchain, but not actually to run. I ran into problems with runtime linking: the built binaries depend on a shared libstdc++ and libgcc_s, neither of which appear to be included on the GTV's filesystem. I tried including them in the APK, but, weirdly, when the GTV unpacks the native libs from the APK at install time, it discards those two libraries. Static linking of those two may not be possible since XBMC's APK includes a bunch of native libs. A possible solution would be to build all of libxbmc.so's dependencies as static libs, and then just make one big static library.

But I haven't had time to work on this over the past couple weeks...

Google TV and Native Libraries

  • October 5, 2012
  • Brian Tarricone

The Google TV runs a fairly unusual flavor of Android (at least the 2nd-gen ARM-based devices). I have a Sony Internet Player (not the Blu-Ray version), so what I’m about to write applies to that device, but maybe not any other, though it stands to reason that the other ARM-based GTVs are the same.

Phone-and-tablet Android doesn’t look much like a Linux desktop or server system. It uses the Linux kernel, to be sure, but a lot of the userspace libraries are custom. It even does not use Glibc, but a C library that Google wrote called Bionic. It’s fairly stripped down and lightweight, and while it implements most things you might need out of a standard libc, it does not pretend to be POSIX compliant.

Due to this, a Native Development Kit (NDK) is not yet available for the GTV. So the question remains: can we hack one together that works? The answer is… sorta.

From some simple investigation, I’ve learned that the Sony GTV is running a EGlibc 2.12.2, and probably a mostly-unmodified version of it. Someone with an @google.com email address stated that the reason for this was that they couldn’t get Chrome running against the Honeycomb version of Bionic.

With this knowledge in hand, I built a relatively standard arm-linux-gnueabi toolchain using crosstool-ng. Then I ‘adb pull’-ed the contents of /system/lib from my GTV and merged them with the new toolchain’s sysroot, copied some headers out of a stock NDK, and ended up with a sysroot that approximates what you’d find in platforms/ in a stock NDK, just without Bionic, and with EGlibc.

I didn’t get to modifying the NDK’s build system (it would need to be changed to find the new toolchain), so I built my native library manually, and got a simple “hello world” type app with a native lib. (It just calls a native method that returns a string, and displays the string on a label.)

One annoying thing is that the ABI string in the Sony GTV is set to “none”, so you have to unpack the APK, rename lib/armeabi-v7a/ to lib/none/, and repack and resign it. All of this means that this would be strictly hobbyist for now: no chance that you could distribute something in the Play Store. Not only does Google have to release an officially-working NDK, but they need to decide on an ABI string, and get Sony (etc.) to push updates out to their customers that update build.prop on the devices with the new ABI string.

There’s also the possibility that Google doesn’t want to create and officially support that much native drift between phone-and-tablet Android and GTV Android, and will wait until manufacturers are running a more-stock Android 4.x on GTV (that uses the 4.x version of Bionic) before releasing an NDK that works… in which case we’re at the mercy of Sony for updates, unless XDA or CyanogenMod wants to take a crack at it. My money’s on this scenario, unfortunately.

One of the main things people have been screaming for is a version of XBMC that runs on GTV. I have been able to get it to build using my hacked-together toolchain, but not actually to run. I ran into problems with runtime linking: the built binaries depend on a shared libstdc++ and libgcc_s, neither of which appear to be included on the GTV’s filesystem. I tried including them in the APK, but, weirdly, when the GTV unpacks the native libs from the APK at install time, it discards those two libraries. Static linking of those two may not be possible since XBMC’s APK includes a bunch of native libs. A possible solution would be to build all of libxbmc.so’s dependencies as static libs, and then just make one big static library.

But I haven’t had time to work on this over the past couple weeks…

Google TV and Native Libraries

  • October 5, 2012
  • Brian Tarricone

The Google TV runs a fairly unusual flavor of Android (at least the 2nd-gen ARM-based devices). I have a Sony Internet Player (not the Blu-Ray version), so what I’m about to write applies to that device, but maybe not any other, though it stands to reason that the other ARM-based GTVs are the same.

Phone-and-tablet Android doesn’t look much like a Linux desktop or server system. It uses the Linux kernel, to be sure, but a lot of the userspace libraries are custom. It even does not use Glibc, but a C library that Google wrote called Bionic. It’s fairly stripped down and lightweight, and while it implements most things you might need out of a standard libc, it does not pretend to be POSIX compliant.

From some simple investigation, I’ve learned that the Sony GTV is running EGLIBC 2.12.2, probably mostly unmodified. Someone with an @google.com email address stated that the reason for this was that they couldn’t get Chrome running against the Honeycomb version of Bionic.

Due to this, a Native Development Kit (NDK) is not yet available for the GTV. So the question remains: can we hack one together that works? The answer is… sorta.

With this knowledge in hand, I built a relatively standard arm-linux-gnueabi toolchain using crosstool-ng. Then I ‘adb pull’-ed the contents of /system/lib from my GTV, merged them into the new toolchain’s sysroot, and copied some headers out of a stock NDK. I ended up with a sysroot that approximates what you’d find in platforms/ in a stock NDK, just with EGLIBC instead of Bionic.

I didn’t get to modifying the NDK’s build system (it would need to be changed to find the new toolchain), so I built my native library manually, and got a simple “hello world” type app with a native lib. (It just calls a native method that returns a string, and displays the string on a label.)

One annoying thing is that the ABI string in the Sony GTV is set to “none”, so you have to unpack the APK, rename lib/armeabi-v7a/ to lib/none/, and repack and resign it. All of this means that this would be strictly hobbyist for now: no chance that you could distribute something in the Play Store. Not only does Google have to release an officially-working NDK, but they need to decide on an ABI string, and get Sony (etc.) to push updates out to their customers that update build.prop on the devices with the new ABI string.
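The unpack/rename/repack dance can be scripted. Here’s a minimal sketch of that step in Python (the function name is my own; an APK is just a zip archive, and re-signing afterward, e.g. with jarsigner, is left out):

```python
import zipfile

def repack_for_gtv(src_apk, dst_apk):
    """Rename lib/armeabi-v7a/ to lib/none/ and strip the old signature.

    The Sony GTV reports its ABI as "none", so the installer only unpacks
    native libs from lib/none/. The output APK must be re-signed (e.g.
    with jarsigner) before it will install.
    """
    with zipfile.ZipFile(src_apk) as src, \
         zipfile.ZipFile(dst_apk, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            name = info.filename
            # Drop the old signature; the repacked APK gets re-signed anyway
            if name.startswith("META-INF/"):
                continue
            # Move native libs to the "none" ABI directory
            if name.startswith("lib/armeabi-v7a/"):
                name = "lib/none/" + name[len("lib/armeabi-v7a/"):]
            dst.writestr(name, src.read(info))
```

Note this rewrites every entry rather than copying zip metadata, which is fine for a hobbyist repack but loses the original timestamps and per-entry compression settings.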

There’s also the possibility that Google doesn’t want to create and officially support that much native drift between phone-and-tablet Android and GTV Android, and will wait until manufacturers are running a more-stock Android 4.x on GTV (that uses the 4.x version of Bionic) before releasing an NDK that works… in which case we’re at the mercy of Sony for updates, unless XDA or CyanogenMod wants to take a crack at it. My money’s on this scenario, unfortunately.

One of the main things people have been screaming for is a version of XBMC that runs on GTV. I have been able to get it to build using my hacked-together toolchain, but not actually to run. I ran into problems with runtime linking: the built binaries depend on a shared libstdc++ and libgcc_s, neither of which appear to be included on the GTV’s filesystem. I tried including them in the APK, but, weirdly, when the GTV unpacks the native libs from the APK at install time, it discards those two libraries. Static linking of those two may not be possible since XBMC’s APK includes a bunch of native libs. A possible solution would be to build all of libxbmc.so’s dependencies as static libs, and then just make one big static library.

But I haven’t had time to work on this over the past couple weeks…

Techie TODO

  • April 16, 2012
  • Brian Tarricone

In no particular order.

  • Start blogging again.
  • Suck less at JavaScript, even if it’s a shitty language.
  • Learn jQuery, even if it’s just a library to make a shitty language less shitty.
  • Learn Rails properly.
  • Get back into open source dev.
  • Find a project/idea I can potentially monetize, and build and launch it.
  • Throw out my website entirely and start from scratch.
  • Stop running Mac OS X all the time on my laptop and get back to using Linux as my primary desktop OS.
