Google TV and Native Libraries
The Google TV runs a fairly unusual flavor of Android (at least on the 2nd-gen ARM-based devices). I have a Sony Internet Player (not the Blu-ray version), so what I’m about to write applies to that device, but maybe not any other, though it stands to reason that the other ARM-based GTVs are the same.
Phone-and-tablet Android doesn’t look much like a Linux desktop or server system. It uses the Linux kernel, to be sure, but a lot of the userspace libraries are custom. It doesn’t even use glibc; instead it has Bionic, a C library that Google wrote. Bionic is fairly stripped down and lightweight, and while it implements most things you might need out of a standard libc, it doesn’t pretend to be POSIX compliant.
From some simple investigation, I’ve learned that the Sony GTV is running EGlibc 2.12.2, probably a mostly-unmodified version of it. Someone with an @google.com email address stated that the reason for this was that they couldn’t get Chrome running against the Honeycomb version of Bionic.
Because of this divergence, no Native Development Kit (NDK) is available for the GTV yet. So the question remains: can we hack one together that works? The answer is… sorta.
With this knowledge in hand, I built a relatively standard arm-linux-gnueabi toolchain using crosstool-ng. Then I ‘adb pull’-ed the contents of /system/lib from my GTV and merged them into the new toolchain’s sysroot, copied some headers out of a stock NDK, and ended up with a sysroot that approximates what you’d find in platforms/ in a stock NDK, just with EGlibc in place of Bionic.
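For the curious, the rough shape of that process looks like the sketch below, from memory. The crosstool-ng sample name and all paths are illustrative assumptions, not an exact recipe:

# Build a standard ARM glibc cross-toolchain with crosstool-ng
# (sample name is an assumption; any arm-linux-gnueabi config should do)
ct-ng arm-unknown-linux-gnueabi
ct-ng build

# Graft the GTV's libraries into the new toolchain's sysroot
SYSROOT=~/x-tools/arm-unknown-linux-gnueabi/arm-unknown-linux-gnueabi/sysroot
adb pull /system/lib gtv-libs/
cp -r gtv-libs/* "$SYSROOT/usr/lib/"

# Borrow headers (jni.h, Android logging, etc.) from a stock NDK
cp -r "$NDK/platforms/android-9/arch-arm/usr/include/"* "$SYSROOT/usr/include/"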
I didn’t get to modifying the NDK’s build system (it would need to be changed to find the new toolchain), so I built my native library manually, and got a simple “hello world” type app with a native lib. (It just calls a native method that returns a string, and displays the string on a label.)
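The manual build is a single compiler invocation against that sysroot; something like the following, where hello.c is a hypothetical source file exporting the usual Java_..._stringFromJNI()-style symbol:

# Compile the JNI library by hand with the cross-toolchain
# (file and library names are hypothetical)
arm-unknown-linux-gnueabi-gcc -shared -fPIC -o libhello.so hello.c

The Java side is then just a System.loadLibrary("hello") and a native method declaration.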
One annoying thing is that the ABI string on the Sony GTV is set to “none”, so you have to unpack the APK, rename lib/armeabi-v7a/ to lib/none/, and repack and resign it (sketched below). All of this means that native development on the GTV is strictly a hobbyist affair for now: there’s no chance you could distribute something in the Play Store. Not only does Google have to release an officially-working NDK, but they need to decide on a real ABI string and get Sony (etc.) to push updates out to their customers that update build.prop on the devices with the new ABI string.
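The APK surgery itself is mundane. Roughly the following, where the APK, keystore, and key alias names are all hypothetical:

# Rename the native-lib directory to match the GTV's "none" ABI string
unzip myapp.apk -d myapp/
mv myapp/lib/armeabi-v7a myapp/lib/none

# Strip the old signature, repack, re-sign, and re-align
rm -r myapp/META-INF
(cd myapp && zip -r ../myapp-none.apk .)
jarsigner -keystore my.keystore myapp-none.apk mykey
zipalign -f 4 myapp-none.apk myapp-none-aligned.apk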
There’s also the possibility that Google doesn’t want to create and officially support that much native drift between phone-and-tablet Android and GTV Android, and will wait until manufacturers are running a more-stock Android 4.x on GTV (that uses the 4.x version of Bionic) before releasing an NDK that works… in which case we’re at the mercy of Sony for updates, unless XDA or CyanogenMod wants to take a crack at it. My money’s on this scenario, unfortunately.
One of the main things people have been screaming for is a version of XBMC that runs on GTV. I have been able to get it to build using my hacked-together toolchain, but not actually to run. I ran into problems with runtime linking: the built binaries depend on a shared libstdc++ and libgcc_s, neither of which appear to be included on the GTV’s filesystem. I tried including them in the APK, but, weirdly, when the GTV unpacks the native libs from the APK at install time, it discards those two libraries. Static linking of those two may not be possible since XBMC’s APK includes a bunch of native libs. A possible solution would be to build all of libxbmc.so’s dependencies as static libs, and then just make one big static library.
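For the record, folding those two runtime libraries into a library statically is just a pair of standard GCC link flags; it’s the pile of separate native libs in XBMC’s APK that makes it awkward. A sketch, assuming the build system lets you inject link flags:

# Link libstdc++ and libgcc into the .so statically instead of
# depending on shared copies that the GTV's filesystem doesn't have
arm-unknown-linux-gnueabi-g++ -shared -o libxbmc.so *.o \
    -static-libstdc++ -static-libgcc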
But I haven’t had time to work on this over the past couple weeks…
Techie TODO
In no particular order.
- Start blogging again.
- Suck less at JavaScript, even if it’s a shitty language.
- Learn jQuery, even if it’s just a library to make a shitty language less shitty.
- Learn Rails properly.
- Get back into open source dev.
- Find a project/idea I can potentially monetize, and build and launch it.
- Throw out my website entirely and start from scratch.
- Stop running Mac OS X all the time on my laptop and get back to using Linux as my primary desktop OS.
On MeeGo
Whatever MeeGo was, it never made it into the open source mainstream (which I consider to consist of projects actively worked on by volunteers and companies alike), and it never managed to become a project attractive enough for individual open source enthusiasts, the kind not driven by money, to make substantial contributions.
There is plenty of room to speculate about the reasons for this. My personal take is that MeeGo failed as an open source project because the corporate commitment to the open source idea was not strong enough and expectations were too high right from the start.
Open source projects start with an idea and evolve into a proper product over time. Sometimes they are developed incrementally, sometimes it happens that big parts of them are replaced all at once. The idea behind an open source project may be huge but they all start off with baby steps. MeeGo, however, wasn’t supposed to. The idea behind MeeGo was big, and due to market pressure it was expected to become complete and successful quickly. It occurs to me that the idea was to sort of guarantee success by providing the project with professional, corporate governance.
I think everyone who started working on open source as a hobby knows that this is something a lot of hackers aren’t comfortable with. An open source project under corporate leadership can easily suffer from top-down decisions that give developers the feeling of working in a restrictive environment rather than a playful one. I’m not surprised that the only people I know who ever contributed to MeeGo worked either for or with Intel or Nokia.
It isn’t the technologies, tools, or frameworks that are to blame for MeeGo’s failure. KDE and GNOME are large and successful projects built on Qt, GTK+, Clutter, D-Bus-based desktop middleware, and so on. What led to failure in my eyes (the eyes of an outsider, open source developer, and potential user) is that the dynamic nature of open source projects conflicted with corporate expectations and the hopes for a quick success that the companies needed so badly. Add to that the pressure on Nokia, the corporate culture clash of two global information technology companies, and the fast-paced world of mobile devices and interfaces, and you have a pretty explosive mixture.
Personally, I won’t place my bets on Tizen. I can only imagine how frustrated and disappointed everyone who tried to get involved may be today, even those who contributed to MeeGo as part of their work for Intel, Nokia or one of the many smaller companies that make up the corporate part of the open source ecosystem. It is also frustrating for me as a developer aiming for a job in open source and desktop/mobile/UI technologies.
I guess the mobile open source platform we are all hoping for will be there some day. But it will only appear slowly, driven by the efforts of enthusiastic individuals with big ideas, realistic expectations, and sound knowledge of the pace and dynamics with which open source projects grow and mature. This of course relies on open hardware, and I don’t know enough about the manufacturing industry to predict the availability of open, hackable devices in the near future.
In the meantime, the best we can do as open source software developers is improve the base OS level and innovate in desktop/mobile/UI technologies by experimenting with new ideas and extending existing frameworks. I’ll help where I can.
Code Comments
I’m a bit of a minimalist when it comes to commenting my code. This is probably in some ways a bad thing; code that is completely obvious to me in its function may be difficult to understand for others, and I’m often not so great at realizing this on the first pass.
So that leads me to the purpose of code comments:
The purpose of commenting your code is to inform readers of that code what a section of nontrivial or non-obvious code does.
At least, this is my definition. Opinions differ, I’m sure. I might also add to that a clarification: “readers” in this case may include yourself. Code you wrote may even be incomprehensible to you if a decent amount of time has passed.
From this definition you can also infer something else: I believe it’s unnecessary to comment obvious code. In fact, I’d argue that it’s harmful to comment obvious code, because you’re making the code harder to follow, and you’re keeping the reader from being able to distinguish trivial from nontrivial code at a glance. You also increase the length of the code fragment, which may make it more difficult to read and understand in its entirety (if you can’t fit the entire fragment on one screen, you’ll have to scroll back and forth to see the entire thing).
However, too often — very often, it turns out — I see things like the following:
/* take a reference */
g_object_ref(object);

/* free string */
g_free(str);
And one of my favorites:
/* set the label text to "Time Left:" */
gtk_label_set_text(GTK_LABEL(label), "Time Left:");
(Yes, I actually have seen something very similar to that, though I don’t remember what the label text was.)
How do these comments actually add anything useful to the file? Every time I see one of these, a little part of me dies inside.
Now, the last one is just silly. Even someone who has never developed using the GTK+ UI toolkit can figure out what that line of code does without the comment. If you can’t, then a code comment there probably isn’t going to be enough to help you overall in any case.
The middle one is equally silly, though it’s understandable that someone might not know that g_free() is the GLib equivalent of free(). However, consider your audience: is an extra line of code for a comment really useful here?
The first one is not quite so easy for me to dismiss. It presupposes a few bits of knowledge:
- Understanding of what reference-counted memory management is.
- Familiarity with the “ref/unref” pair, as opposed to only being exposed to something like the OpenStep “retain/release” (or even the COM/XPCOM “AddRef/Release”) terminology.
- At least passing knowledge of what a GObject is.
Now, for code that makes heavy use of reference counting, I think presupposing #1 is not unreasonable. In this case, it doesn’t matter: the comment as presented will not help you if you don’t know what reference counting is.
Points #2 and #3 depend on your goals and potential audience. If you think that a decent number of readers may not be familiar with the “ref/unref” terminology, “take a reference” is probably enough to generate an “oh, duh!” moment in the reader’s head. As for #3, unless you intend your code to be able to act as a sort of GObject tutorial, that is, something that people aspiring to learn GObject programming might want to read, I think the comment there does not serve people unfamiliar with GObject. Regardless, most GObject-using code will probably be pretty confusing to someone who doesn’t know GObject, so whether or not you should comment g_object_ref() is going to be the least of your worries.
Now, I’m not going to claim that my code commenting is perfect… far from it. I could certainly stand to sprinkle comments a bit more liberally throughout my code. I tend to only comment public API (and then just a description of what the function does, not how it does it), and code fragments that are really nontrivial[1] and potentially hard to understand.
But there has to be a happy medium somewhere. While too-infrequent commenting can certainly make code harder to understand, I’d argue that too-frequent commenting is worse. It’s sorta like “the boy who cried wolf” in the sense that comments draw my eyes to them as a way of saying, “pay attention! This bit here is important!” (or tricky, or whatever). Overuse of comments just makes me start skipping over all of them, useful or otherwise.
[1] It’s worth noting here that this point further reduces my volume of comments. I generally prefer clear code over neat hacks, even if the neat hack represents a reduction in lines of code or a moderate increase in performance. If I write a section of code and then look at it again and see that it looks too complex, I’ll usually try to immediately rewrite it to be simpler.
Ruby for Web Development
I recently started a new web dev project, and decided to use it to better learn Ruby. However, I don’t want to use Rails. I’d like to keep it simple. I also prefer to know a lot about the inner workings of a particular technology before I go and use a large framework that hides a bunch of details from me.
However, I want to use ActiveRecord.
So, ActiveRecord. I install it on my laptop with “emerge ruby-activerecord”, and there I go. One “require ‘activerecord’” in my script later, and, awesome, it starts working.
Then I start working on my web host (DreamHost, if you’re wondering). It can’t find ActiveRecord. But it’s obviously installed, because I know DH supports Rails out of the box, and I don’t think you can have an install of Rails without ActiveRecord. So I poke, and then realise it might be installed as a Ruby “gem.” Ok, so I put a “require ‘rubygems’” above my activerecord require. Nice, now it works. Then I think, well, what if I put this somewhere that doesn’t require rubygems? Not hard to work around automatically:
begin
require 'activerecord'
rescue LoadError
require 'rubygems'
require 'activerecord'
end
Nice, ok, that works. It’s probably a foolish micro-optimisation, but whatever.
Then I notice… ugh, this is super slow. Even on the web host, it can take a good two seconds for the “require ‘activerecord’” statement to execute. Yeah, I know, ruby is kinda slow. But 2 extra seconds each time someone hits basically any page of the website? Ugh.
So…. FastCGI. I know DH supports it, so I head over to the control panel and enable it for the domain I’m working on, and start googling around to figure out how to use FastCGI in a ruby script.
Unfortunately, there aren’t too many resources on this. Fortunately I found a couple sample dispatch scripts, one of which I ended up basing mine off of.
But then there was a problem. Inside my app, I use ruby’s CGI class to access CGI form variables and other stuff, and the FastCGI stuff overrides and partially replaces ruby’s internal CGI class. Doing “cgi = CGI.new” inside a ruby script that’s being served through FastCGI throws a weird exception. But I wanted to retain compatibility with non-FastCGI mode, and I couldn’t figure out how to get the ‘cgi’ variable from the ruby dispatch script into my app’s script, since I was using ‘eval’ to run my script. The dispatch script I saw used some weird Binding voodoo. Up at the top level we have:
def getBinding(cgi, env)
return binding
end
I had no idea what that was doing, so I looked it up. Apparently the built-in “binding” function returns a Binding object that describes the current execution context, including local variables and function/method arguments. Ok, that seems really powerful and cool. So I look down to the sample dispatch script’s eval statement, and I see:
eval File.open(script).read, getBinding(cgi, cgi.env_table)
Ok, so it appears it tries to eval the script while providing an execution context that contains just ‘cgi’ and one of its member vars. I only sorta understand this. So I ditched the “cgi = CGI.new” line in my app’s script. But I got a NameError when just trying to use ‘cgi’. Huh? What’s going on? So I get rid of the getBinding() call entirely, and just let it use the current execution context, and suddenly everything works right. Weird.
Well, sorta. Now, remember, I want to preserve compatibility with running as a normal CGI. So the normal CGI needs to create its own ‘cgi’ object, but the FastCGI one should just use the one from the dispatch script. So I came up with this:
begin
if !cgi.nil?
mycgi = cgi
end
rescue NameError
require 'cgi'
mycgi = CGI.new
end
Ok, that seemed to work ok. After that block, ‘mycgi’ should be usable as a CGI/FCGI::CGI object regardless of which mode it’s running under.
So I play around a bit more, and suddenly notice that my POST requests have stopped working. I dig into it a bit, and realise that my POST requests are actually just fine. What’s happening is that, somehow, the FCGI::CGI object completely ignores $QUERY_STRING on a POST request, while ruby’s normal CGI object will take care of it and merge it with the POST data variables. You see, to make my URLs pretty, I have normal URLs rewritten such that the script sees “page=whatever” in the query string. So when I did a POST, the page= would get lost, and so the POST would end up fetching the root web page rather than the one that should be receiving the form variables. I’m not sure if this is “normal” behavior, or if the version of the fcgi ruby module on DreamHost has a bug. Regardless, we need a workaround. So I go back to my last code snippet, and hack something together:
begin
if !cgi.nil?
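# FCGI::CGI ignores $QUERY_STRING on POST, so merge those params back in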
if cgi.env_table['REQUEST_METHOD'] == 'POST'
CGI.parse(cgi.env_table['QUERY_STRING']).each do |k,v|
cgi.params[k] = v
end
end
mycgi = cgi
end
rescue NameError
require 'cgi'
mycgi = CGI.new
end
Ick. But at least it works.
So far, I’m liking ruby quite a lot. It’s a beautiful language, and seems well-suited for this kind of work, especially since I want to get something that works up and running relatively quickly.
We’ll see, however, how many more gotchas I run into.
Backlight Change Notification?
Is there a decent (non-polling) way to get notified when a laptop panel’s backlight brightness changes? HAL exports methods to set and get the brightness level, as well as query the number of possible levels, but there doesn’t appear to be a way to get notified if the level changes. Calling org.freedesktop.Hal.LaptopPanel.GetBrightness() every five or ten seconds or so sounds like an awful idea, of course.
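For reference, a single poll from the shell looks roughly like this (the device object path is a made-up example; lshal lists the real ones, and I’m assuming the org.freedesktop.Hal.Device.LaptopPanel interface spelling):

# Query the current backlight level from HAL over the system bus
dbus-send --system --print-reply \
    --dest=org.freedesktop.Hal \
    /org/freedesktop/Hal/devices/computer_backlight \
    org.freedesktop.Hal.Device.LaptopPanel.GetBrightness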
I’ve heard plans to use the XBACKLIGHT RandR 1.2 property to do backlight setting, but I don’t think any drivers use it yet. Polling /sys is just as bad (why doesn’t sysfs or procfs support inotify, dammit!), and obviously isn’t portable anyway (not that HAL is particularly portable these days either).