Git Weirdness, Part 2

  • March 18, 2009
  • Brian Tarricone

Ok, now this is just ridiculous:

[brian@machine1 airconfig $] pwd
/home/brian/src/airconfig
[brian@machine1 airconfig $] git branch -a
  advanced-ip-settings
* master
  nm-frontend
  notification-rework
  reconnect
  origin/master
  origin/pre-hal
[brian@machine1 airconfig $] cd .. && mkdir t && cd t
[brian@machine1 t $] git clone ../airconfig
Initialized empty Git repository in /home/brian/src/t/airconfig/.git/
[brian@machine1 t $] cd airconfig
[brian@machine1 airconfig $] git branch -a
* master
  origin/HEAD
  origin/advanced-ip-settings
  origin/master
  origin/nm-frontend
  origin/notification-rework
  origin/reconnect

Ok, that makes sense! Now:

[brian@machine1 airconfig $] cd ..
[brian@machine1 t $] rm -rf airconfig
[brian@machine1 t $]  git clone file:///home/brian/src/airconfig
Initialized empty Git repository in /home/brian/src/t/airconfig/.git/
remote: Counting objects: 1272, done.
remote: Compressing objects: 100% (486/486), done.
remote: Total 1272 (delta 778), reused 1270 (delta 776)
Receiving objects: 100% (1272/1272), 360.40 KiB, done.
Resolving deltas: 100% (778/778), done.
[brian@machine1 t $] cd airconfig
[brian@machine1 airconfig $] git branch -a
* master
  origin/HEAD
  origin/advanced-ip-settings
  origin/master
  origin/pre-hal

What. The. Fuck.

Yes, the git-clone man page tells me that, when using a local pathname (and not using a file: URI), it assumes the other repo is local and uses hardlinks between the repos. But hey, if I specify --no-hardlinks when cloning, I still get all branches if I use the “../airconfig” method.

And if I go to machine2 and do “git clone machine1:src/airconfig”, I get the same broken result as with the file:// method.
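
If anyone feels like poking at this, git ls-remote will list exactly which refs each transport advertises, without doing a full clone:

git ls-remote /home/brian/src/airconfig
git ls-remote file:///home/brian/src/airconfig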

Git, I think you rock. But why does your user interface suck so much? And why do you just appear to be broken right now? I can’t seem to find anything else in the man page to help here. (I’m using git 1.6.1.3 if that matters.)

Git Weirdness

  • March 17, 2009
  • Brian Tarricone

So I have a git repo on my machine at home that’s cloned from a repo on git.xfce.org. I was doing some work in it the other day — in a private branch, not published to git.xfce.org — and I wanted to continue working on the private branch from another machine. But… it won’t work. Let’s say ‘machine1’ has the repo with the private branch, and ‘machine2’ is where I want to work today.

So, on machine2, I cloned from the master repo on git.xfce.org:

[brian@machine2 src $] git clone git@git.xfce.org:kelnos/airconfig
[... stuff happens...]
[brian@machine2 src $] cd airconfig
[brian@machine2 airconfig $] git branch -a
* master
  origin/master
  origin/pre-hal

Ok, cool, that’s what I expect. So I ssh over to machine1 (the one I eventually want to pull from), and I check out my list of branches there:

[brian@machine1 airconfig $] git branch -a
  advanced-ip-settings
  master
* nm-frontend
  notification-rework
  reconnect
  origin/master
  origin/pre-hal

Ok, cool. The ‘nm-frontend’ branch is the one I want to pull to machine2. So on machine2, I do this:

[brian@machine2 airconfig $] git remote add machine1 machine1:src/airconfig
[brian@machine2 airconfig $] git pull machine1 nm-frontend
fatal: Couldn't find remote ref nm-frontend
fatal: The remote end hung up unexpectedly

Uh… what? Do I have the wrong syntax? Ok, let me just try to pull everything from the remote:

[brian@machine2 airconfig $] git pull machine1
remote: Counting objects: 1085, done.
remote: Compressing objects: 100% (301/301), done.
remote: Total 1085 (delta 774), reused 1085 (delta 774)
Receiving objects: 100% (1085/1085), 323.43 KiB | 14 KiB/s, done.
Resolving deltas: 100% (774/774), done.
From machine1:src/airconfig
 * [new branch]      advanced-ip-settings -> machine1/advanced-ip-settings
 * [new branch]      master     -> machine1/master
 * [new branch]      pre-hal    -> machine1/pre-hal

And then at the bottom it prints out a message about not knowing which local branches to merge stuff into. That’s fine, no big deal. But… how come it pulled 3 branches from machine1, yet left off the others, including ‘notification-rework’ and the ‘nm-frontend’ branch I actually want? No combination of src:dest refspecs seems to do the trick. Pulling one of the branches it does see, using the same syntax as above, works fine, but it just can’t see the one I want. What am I doing wrong…?
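
For reference, this is the sort of fully spelled-out refspec I’ve been throwing at it (standard fetch syntax; no luck so far):

git fetch machine1 +refs/heads/nm-frontend:refs/remotes/machine1/nm-frontend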

Ruby for Web Development

  • February 12, 2009
  • Brian Tarricone

I recently started a new web dev project, and decided to use it to better learn Ruby. However, I don’t want to use Rails. I’d like to keep it simple. I also prefer to know a lot about the inner workings of a particular technology before I go and use a large framework that hides a bunch of details from me.

However, I want to use ActiveRecord. ORM seems to be all the rage these days, promising to abstract the annoying details of database access behind OO idioms natural to your chosen language. It also helps avoid common errors and pitfalls with regard to constructing SQL queries and the like.

So, ActiveRecord. I install it on my laptop with “emerge ruby-activerecord”, and there I go. One “require ‘activerecord’” in my script later, and, awesome, it starts working.

Then I start working on my web host (DreamHost, if you’re wondering). It can’t find ActiveRecord. But it’s obviously installed, because I know DH supports Rails out of the box, and I don’t think you can have an install of Rails without ActiveRecord. So I poke, and then realise it might be installed as a Ruby “gem.” Ok, so I put a “require ‘rubygems’” above my activerecord require. Nice, now it works. Then I think, well, what if I put this somewhere that doesn’t require rubygems? Not hard to work around automatically:

begin
  require 'activerecord'
rescue LoadError
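  # not on the plain load path, so pull in rubygems and retry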
  require 'rubygems'
  require 'activerecord'
end

Nice, ok, that works. It’s probably a foolish micro-optimisation, but whatever.

Then I notice… ugh, this is super slow. Even on the web host, it can take a good two seconds for the “require ‘activerecord’” statement to execute. Yeah, I know, ruby is kinda slow. But 2 extra seconds each time someone hits basically any page of the website? Ugh.

So…. FastCGI. I know DH supports it, so I head over to the control panel and enable it for the domain I’m working on, and start googling around to figure out how to use FastCGI in a ruby script.

Unfortunately, there aren’t too many resources on this. Fortunately, I found a couple of sample dispatch scripts, one of which I ended up basing mine on.

But then there was a problem. Inside my app, I use ruby’s CGI class to access CGI form variables and other stuff. Since the FastCGI stuff overrides and partially replaces ruby’s internal CGI class, there’s a problem. Doing “cgi = CGI.new” inside a ruby script that’s being served through FastCGI throws a weird exception. But I wanted to try to retain compatibility for non-FastCGI mode. And I couldn’t figure out how to get the ‘cgi’ variable from the ruby dispatch script into my app’s script, since I was using ‘eval’ to run my script. The dispatch script I saw used some weird Binding voodoo. Up at the top level we have:

def getBinding(cgi, env)
  return binding
end

I had no idea what that was doing, so I looked it up. Apparently the built-in “binding” function returns a Binding object that describes the current execution context, including local variables and function/method arguments. Ok, that seems really powerful and cool. So I look down to the sample dispatch script’s eval statement, and I see:

eval File.open(script).read, getBinding(cgi, cgi.env_table)

Ok, so it appears it tries to eval the script while providing an execution context that contains just ‘cgi’ and one of its member vars. I only sorta understand this. So I ditched the “cgi = CGI.new” line in my app’s script. But I got a NameError when just trying to use ‘cgi’. Huh? What’s going on? So I get rid of the getBinding() call entirely, and just let it use the current execution context, and suddenly everything works right. Weird.
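
So, for context, the dispatch loop ends up looking roughly like this (a rough sketch using the usual ruby fcgi bindings; the script path here is made up):

require 'fcgi'

FCGI.each_cgi do |cgi|           # yields a CGI-compatible object for each request
  script = '/path/to/myapp.rb'   # made-up path; the real dispatcher works this out from the request
  eval File.open(script).read    # eval'd right here, so the app script can see 'cgi'
end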

Well, sorta. Now, remember, I want to preserve compatibility with running as a normal CGI. So the normal CGI needs to create its own ‘cgi’ object, but the FastCGI one should just use the one from the dispatch script. So I came up with this:

begin
  if !cgi.nil?
    mycgi = cgi
  end
rescue NameError
  require 'cgi'
  mycgi = CGI.new
end

Ok, that seemed to work ok. After that block, ‘mycgi’ should be usable as a CGI/FCGI::CGI object regardless of which mode it’s running under.
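
From the app’s point of view, both modes then look the same, something like this (render_page is just a made-up stand-in for whatever generates the page):

page = mycgi.params['page'].first               # params behaves the same for CGI and FCGI::CGI
mycgi.out('text/html') { render_page(page) }    # render_page is a placeholder, not real code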

So I play around a bit more, and suddenly notice that my POST requests have stopped working. I dig into it a bit, and realise that my POST requests are actually just fine. What’s happening is that, somehow, the FCGI::CGI object completely ignores $QUERY_STRING on a POST request, while ruby’s normal CGI object will take care of it and merge it with the POST data variables. You see, to make my URLs pretty, I have normal URLs rewritten such that the script sees “page=whatever” in the query string. So when I did a POST, the page= would get lost, and so the POST would end up fetching the root web page rather than the one that should be receiving the form variables. I’m not sure if this is “normal” behavior, or if the version of the fcgi ruby module on DreamHost has a bug. Regardless, we need a workaround. So I go back to my last code snippet, and hack something together:

begin
  if !cgi.nil?
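    # FCGI::CGI seems to drop QUERY_STRING on POSTs, so merge those params back in by hand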
    if cgi.env_table['REQUEST_METHOD'] == 'POST'
      CGI.parse(cgi.env_table['QUERY_STRING']).each do |k,v|
        cgi.params[k] = v
      end
    end
    mycgi = cgi
  end
rescue NameError
  require 'cgi'
  mycgi = CGI.new
end

Ick. But at least it works.

So far, I’m liking ruby quite a lot. It’s a beautiful language, and seems well-suited for this kind of work, especially since I want to get something that works up and running relatively quickly.

We’ll see, however, how many more gotchas I run into.

Licensing Suckage

  • December 11, 2008
  • Brian Tarricone

I just got an email from a developer who works on the nifty cairo-dock application, pointing me to a thread about licensing issues.

A bunch of months ago, he’d emailed me asking about how to best use code from my Xfce Mailwatch Plugin in cairo-dock to add mail-checking capabilities. At the time, I was pretty stoked that someone else had actually found my code useful enough to incorporate into their program, and offered my encouragement.

Sadly, though, licensing ugliness has reared its… well… ugly… head.

When licensing code under the terms of the GNU GPL or LGPL, the FSF suggests (and most people follow) that you license under “or (at your option) any later version” terms, which means that, while you initially license the code under the version of the GPL or LGPL of your choice, someone can later take your code and relicense it under the terms of a later version of the same license. This also makes the code automatically compatible with future versions of the license.

You might think this sounds pretty good for convenience and licensing compatibility, and you’d probably be right.

However, this isn’t so great from a philosophical perspective, at least from my philosophical perspective. The problem I have is this:

Licensing a work under “GPL version 2 or later” terms means that I am implicitly agreeing with any new restrictions that the FSF dreams up (or any existing restrictions the FSF wants to drop), forever. I’d basically be saying that I agree with something that doesn’t exist yet, and could take any shape or form imaginable.

Don’t get me wrong: in general, I think the FSF is good people, and I agree with their message for the most part.

But I don’t know them, personally, and I don’t agree with them 100%. And I don’t know who’s going to be running the FSF next year, or in five years, or in 20 years. So how can I know, or even have reasonable belief, that their philosophies and values will align with mine such that I’ll agree with future versions of their licenses? There are already parts of version 3 of the GPL and LGPL that I don’t completely understand or agree with, so why should I expect that versions 4, 5, or 10 will be completely to my liking?

The short answer is: I can’t.

And so, for the most part, I release my software under “GPL version 2 only” terms. (Because I’m a bit lazy and don’t want to make a big stink, I’ll release code under “or any later version” terms if I’m contributing to an existing code base that uses those terms.)

It really pained me to have to answer that email saying that my code’s licensing (GPLv2-only) isn’t compatible with theirs (GPLv3-or-later), but it’s the truth, and there’s not much I can (or want to) do about it.

The only solution I can think of (I’m not a lawyer, of course) that allows them to use my code is that they relicense their code under GPLv2-or-later terms. Of course, then they lose any restrictions that the GPLv3 has over the GPLv2, which I assume they’d prefer to have, since that’s how they’ve licensed their code.

(Before anyone says it, another possible solution would be for me to relicense under LGPLv2.1. The problem with that is one I’ve discussed before: section 3 of the LGPLv2.1 explicitly allows a recipient of the code to relicense the code under regular GPLv#-or-later terms, regardless of the only/or-later status of the original LGPL licensing. This of course completely defeats the intent of my rationale above.)

And so, the OSS licensing mess has caused yet more pain to people who just want to share code and avoid duplicating effort. I love the GPL. I really do. But I also hate it.

Signals

  • September 2, 2008
  • Brian Tarricone

I meant to write about this a while ago, but I forgot, and it just popped into my head for some reason.

If you’re ever using POSIX signals as a means of primitive IPC, and SIGUSR1 and SIGUSR2 aren’t enough for you, never, ever, EVER make use of SIGRTMIN and/or SIGRTMIN plus some offset. Always use SIGRTMAX and SIGRTMAX minus some offset.

Why?

(Disclaimer: this might only be a problem on Linux, but if you want your app to be portable, blah blah blah…) Depending on what C library you’re using, and what pthreads implementation you’re using, the actual numerical value of SIGRTMIN may not be the same in different applications, depending on – get this – whether or not the app links with libpthread. In my case, the pthread impl makes use of the first 3 SIGRT slots, and so when you use the SIGRTMIN macro, you actually call __libc_current_sigrtmin(), and you get a number that’s 3 higher than what you get when you use SIGRTMIN in an app that doesn’t link against libpthread.

Fortunately, SIGRTMAX (which actually expands to a call to __libc_current_sigrtmax()) seems to be a bit more stable. That is, even if SIGRTMIN gets shifted up 3 slots, SIGRTMAX is still the same.

So, the moral of the story is: I never want to see the SIGRTMIN macro ever appear in your code, unless you really know what you’re doing. Instead, use things like SIGRTMAX, SIGRTMAX-4, etc. It may just save you 4 hours of debugging.
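
To make that concrete, here’s a minimal sketch of the pattern (assuming Linux/glibc; the empty handler and the choice of SIGRTMAX-1 are just for illustration):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_notify(int sig)
{
    (void)sig;  /* do only async-signal-safe work here */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_notify;
    sigemptyset(&sa.sa_mask);

    /* SIGRTMAX - 1 resolves to the same signal number whether or not
     * libpthread has reserved the low RT slots; SIGRTMIN + 1 may not. */
    if (sigaction(SIGRTMAX - 1, &sa, NULL) < 0)
        return 1;

    pause();  /* wait around for the signal to arrive */
    return 0;
}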