Xfce

Include custom GTK+ RC style

  • March 8, 2010
  • Mike Massonnet
I've been using a custom GTK+ RC style for the notes plugin since version 1.4.0; right now it is at version 1.7.2. I have been playing with GTK+ theming again these last two hours, and I now have custom scrollbars, a gradient for the custom-made “title bar”, and better colours for the notebook to make the current tab stand out from the crowd.

While experimenting with some test-case code I found a better way to parse a gtkrc file from the program. The first time around I fought with the existing gtk_rc related functions and gave up, settling on a solution I partially dislike: including a line pointing to the custom gtkrc file within ~/.gtkrc-2.0.

Today I understood how gtk_rc_parse(filename) behaves. You have to call this function at the beginning of the program, before building any widgets, and it works even if the file doesn't exist yet. Then, while the program is running, you can modify the file, create it, delete it, truncate it, whatever, and call gtk_rc_reparse_all() to get the style refreshed in the GUI. It's hard to believe that such easy things are sometimes a PITA :-)
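
For the record, here is a minimal sketch of that sequence, with a hypothetical file path (not the notes plugin's actual code):
#include <gtk/gtk.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    /* Register the RC file before building any widget; it is fine if the
     * file does not exist yet. */
    gtk_rc_parse("/path/to/custom.gtkrc");

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_widget_show_all(window);

    /* Later, once the file has been created, modified or truncated,
     * re-read all registered RC files and refresh the styles in the GUI. */
    gtk_rc_reparse_all();

    gtk_main();
    return 0;
}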

Be prepared for a 1.7.3 notes plugin with nicer colours.

Show/hide functionality from notification area

  • March 1, 2010
  • Mike Massonnet
When using a status icon within the notification area it is common to use the left-click action to show/hide the main window. Obviously this is often done in different ways. So here is my tip on how to do it right :-)

What I believe to be the most sensible way is to:
  1. Check if the application is invisible and show it,
  2. Otherwise check if the window is inactive and present it,
  3. Otherwise hide it.
In C language it looks like this:
/* Show the window */
if (!GTK_WIDGET_VISIBLE(window)) {
    gtk_widget_show(window);
}
/* Present the window */
else if (!gtk_window_is_active(GTK_WINDOW(window))) {
    gtk_window_present(GTK_WINDOW(window));
}
/* Hide the window */
else {
    int winx, winy;
    gtk_window_get_position(GTK_WINDOW(window), &winx, &winy);
    gtk_widget_hide(window);
    gtk_window_move(GTK_WINDOW(window), winx, winy);
}
I have been doing this for quite a long time inside the Xfce Notes plugin, although a little differently since it handles multiple windows.

Some remarks: the PendingSealings page proposes gtk_widget_get_visible instead of the analogous macro. And as you may also notice, when the window is hidden it gets moved right afterwards; this is important because otherwise the window would be repositioned to its initial value once shown again (e.g. the centre of the screen, or wherever the window manager decides to place it).
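
With GTK+ 2.18 and later the same check with the sealed API would look like this (a tiny sketch, not taken from the plugin):
/* Accessor variant of the GTK_WIDGET_VISIBLE macro, available since 2.18 */
if (!gtk_widget_get_visible(window)) {
    gtk_widget_show(window);
}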

Eatmonkey 0.1.3 benchmarking

  • February 13, 2010
  • Mike Massonnet
Eatmonkey has now been released for the 4th time and I started to use it to download videos from FOSDEM2010 by drag-n-dropping the links from the web page to the manager :-)

I downloaded four files, and while they were running I kept a close eye on top and iftop to monitor the CPU usage and the bandwidth between client and server (the connection between eatmonkey and the aria2 XML-RPC server running on the localhost interface).

I had unexpected results and was surprised by the CPU usage. It is currently very high, which gives me a new task for the next milestone: getting the CPU footprint low. The bandwidth comes without surprises, but since the milestone will target performance wherever possible, I will cut down the number of requests made to the server. This problem is also noticeable in the GUI, which tends to micro-freeze during the updates of each download. So the more active downloads are running, the more the client freezes.

Some results, as they speak louder than words:
Active downloads    Reception    Emission    CPU
4                   144 Kbps     18 Kbps     30%
3                   108 Kbps     14 Kbps     26%
2                    73 Kbps     11 Kbps     18%

I will start by running benchmarks on the code itself, and thanks to Ruby there is built-in support for benchmarking and profiling. It comes with at least three useful modules: benchmark, profile and profiler. The first measures the time code takes to execute; it is useful to compare different kinds of loops (for, while, do...while), or for example to see whether a string is best compared through a dummy compare function or via a compiled regular expression. The second simply needs to be required at the top of a Ruby script and it will print a summary of the time spent within each method/function call. The third does the same, except it is possible to run the profiler around specific blocks of code. So much for the presentation; below are some samples.

File benchmark.rb:
#!/usr/bin/ruby -w

require "benchmark"
require "pp"

integers = (1..10000).to_a
pp Benchmark.measure { integers.map { |i| i * i } }

Benchmark.bm(10) do |b|
  b.report("simple") { 50000.times { 1 + 2 } }
  b.report("complex") { 50000.times { 1 + 2 - 6 + 5 * 4 / 2 + 4 } }
  b.report("stupid") { 50000.times { "1".to_i + "3".to_i * "4".to_i - "2".to_i } }
end

words = IO.readlines("/usr/share/dict/words")
Benchmark.bm(10) do |b|
  b.report("include") { words.each { |w| next if w.include?("abe") } }
  b.report("regexp") { words.each { |w| next if w =~ /abe/ } }
end

File profile.rb:
#!/usr/bin/ruby -w

require "profile"

def factorial(n)
  n > 1 ? n * factorial(n - 1) : 1
end

factorial(627)

File profiler.rb:
#!/usr/bin/ruby -w

require "profiler"

def factorial(n)
  (2..n).to_a.inject(1) { |product, i| product * i }
end

Profiler__.start_profile
factorial(627)
Profiler__.stop_profile
Profiler__.print_profile($stdout)

Update: The profiling showed that during a status request 65% of the time is consumed by the XML parser. The REXML class is written 100% in Ruby, which gives a good hint that the same request done with a parser written in C may provide a real boost. On the other hand, the requests are now only run once periodically and cached inside the pooler. This means that the emission bitrate stays constant and that the reception bitrate grows as more downloads are running. As a side-effect there is less XML parsing done, thus less CPU time used.
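
For the curious, the idea of the pooler is roughly the following (a hypothetical sketch, not eatmonkey's actual code):
require "thread"

# Query the aria2 server once per interval and cache the answer, so that the
# GUI reads the cache instead of firing one XML-RPC request per download.
# "client" is expected to respond to call(), e.g. an XMLRPC::Client.
class StatusPooler
  def initialize(client, interval = 1)
    @client = client
    @cache = []
    @mutex = Mutex.new
    Thread.new do
      loop do
        statuses = @client.call("aria2.tellActive")
        @mutex.synchronize { @cache = statuses }
        sleep interval
      end
    end
  end

  # Called from the GUI; never blocks on the network.
  def statuses
    @mutex.synchronize { @cache }
  end
end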

Backward compatibility for Ruby 1.8

  • February 6, 2010
  • Mike Massonnet
As I'm currently writing some Ruby code, and since I started with version 1.9, I ran into cases where some methods don't exist in Ruby 1.8. This is very annoying, and I started by switching the code back to 1.8 method calls. I disliked this when it came to Process.spawn, which is a one-line call to execute a separate process; rewriting it takes around 5 lines instead.

So I had the idea to reuse something I had already seen once: I write a new file named compat18.rb and require it from the sources that need it. Ruby makes it very easy to add new methods to existing classes/modules anyway, even if they already exist, so I just did it and it works like a charm.

Here is a small snippet:
# Ruby 1.9's Array#find_index on top of the old Array#index
class Array
        def find_index(idx)
                index(idx)
        end
end

# Ruby 1.9's Dir.exists? is a class method, implemented here with File.directory?
class Dir
        def self.exists?(path)
                File.directory?(path)
        end
end
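
And for the Process.spawn case mentioned above, a rough fallback could look like this; it only covers the simple command-plus-arguments form, not the full 1.9 API:
# Provide Process.spawn on Ruby 1.8: fork a child, exec the command in it,
# and return the child's pid just like 1.9 does.
module Process
        unless respond_to?(:spawn)
                def self.spawn(*args)
                        fork { exec(*args) }
                end
        end
end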

Update: It can happen that a fallback method from Ruby 1.8 has been completely dropped and replaced by a new method in 1.9; in that case you have to check whether the older method exists, and otherwise call the parent implementation.
class Array
        def count
                if defined? nitems
                        return nitems
                else
                        return super
                end
        end
end

Fed up with Moblin

  • February 4, 2010
  • Mike Massonnet
I'm slowly getting fed up with Moblin, the base installation. The base system starts way too often with core dumps (a crash of mutter for example, which also means X restarts), but mainly it's because of RPM. When PackageKit starts checking for an update, or when you do any installation/upgrade with yum (i.e. you use rpm directly or indirectly), the whole system becomes unusable: the browser acts like it is frozen, it takes very long to switch between tasks, and all of this lasts at least a minute, up to an hour if you accept to run an update. You can call this whatever you want; I call it a big fail.

This happens on an Acer Aspire One 9", where I guess they installed the cheapest SSD out there.

In fact things got really bad when I switched to an Xfce session: I got unbelievably long startup times. Uxlaunch, the new automatic login application in Moblin 2.1, is totally uncooperative. The Xfce session ends up launching many tools and applications twice: two corewatcher-applets, two connman-applets, etc. Uxlaunch runs xfce4-session, but also executes the same desktop files from the autostart directory (as it seems after a quick look at the code), which is a role already taken by the Xfce session manager.

So I have been looking around to finally throw away some junk.

Now I have been looking closely at the autostart applications since the "all-in-twice" fiasco, to get this netbook fast again. Of course you have to know what you are doing; this kind of task isn't open to people without technical skills. First I changed the default "desktop" to Openbox by downloading the RPM source package, compiling it and putting it inside the uxlaunch configuration file. Then I removed some base packages and manually hid some desktop files to prevent them from autostarting. I played with the Hidden/NoDisplay keys but they had no effect on uxlaunch, so it ended with a chmod 000 command.

I dropped four packages: kerneloops, corewatcher and obexd/openobex. I really don't want them around anymore. And I "dropped" seven autostart files: ofono, which depends on a lot of applications, the bkl-orbiter, and the rest are Moblin panel related applications: bluetooth-panel, which I don't even have on this netbook, carrick-panel, as I use connman-applet which at least handles automatic connections, the two Dalston applications dalston-power-applet and dalston-volume-applet, and at last moblin-panel-web.

I kept gnome-settings-daemon although I have the Xfce settings daemon installed, which I prefer to some extent. After all this I changed the GTK+ and icon themes through the GConf keys. And what's the conclusion? Moblin is nice, but I managed to munch it and enforce my own desktop.

Update: After running under Openbox I feel that my remark about RPM is wrong; I don't know, maybe it is the mixed use of OpenGL that makes tasks take ages to react. All in all, the default desktop environment is something where you must know about patience :-)

The download manager is in the wild

  • January 24, 2010
  • Mike Massonnet
So it's finally done, it took very long, but it's done. The download manager I once had in mind is taking off into the wild :-) Of course it took long because I never did anything with it: writing a front-end to wget/curl isn't interesting (who cares about downloading HTTP/FTP files when the web browser handles it for you anyway), and reusing GVFS doesn't make sense because really you don't want to download from your trash:// or whatever proto://, and again HTTP/FTP alone is not interesting. Not at all. I have come across Uget and other very good projects, but most of them either write the code that handles protocols like HTTP themselves and/or are looking forward to handling more interesting protocols like BitTorrent. I think it's a very tough job that demands too much from a one-maintainer project. Recently I saw the new release of aria2 that comes with an XML-RPC interface, and this took all my interest for 4 days. I believe this utility is very promising and I would really like to write the good and user-friendly XML-RPC GUI client that it seems to be missing!

What is so exciting about aria2? In case you know the project you don't have to read on, but it is worth mentioning the features of this small utility. It supports HTTP(S)/FTP but also BitTorrent and Metalink. It is widely customizable for each specific protocol. It can download one file by splitting it into several pieces over multiple connections, and it can even mix HTTP URIs with BitTorrent while uploading to BitTorrent peers what has been downloaded through HTTP. So this has to be the perfect candidate for writing a nice download manager, hasn't it?
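
To give an idea of what the client talks to, here is a rough sketch (not eatmonkey's code) that queues a download through the XML-RPC interface with Ruby's standard xmlrpc library, assuming a local aria2c started with --enable-xml-rpc on the default port 6800:
require "xmlrpc/client"

# Connect to the local aria2 XML-RPC server.
client = XMLRPC::Client.new("localhost", "/rpc", 6800)

# Queue a download; aria2 returns a GID identifying it.
gid = client.call("aria2.addUri", ["http://example.com/file.iso"])

# Poll its status; the struct contains keys such as "status" and "completedLength".
status = client.call("aria2.tellStatus", gid)
puts "#{gid}: #{status['status']}, #{status['completedLength']} bytes"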

The client is a very first version that I intended to code-name draft, although the release assistant on xfce.org doesn't allow this. Instead it will take the more neutral road from 0.1.0 to 0.1.1 and so on until 0.2.0, followed by stable fix releases.

Why draft? Simple. It's being written in a higher-level language than C, and not even Vala :-) High-level languages are a great deal when starting a new application, as you can type less and get more, instead of typing like a dog for a rocking hot, well, lousy window. Since I like Ruby, it's currently written in Ruby and depends on the ruby-gnome2 project for the bindings. To get a picture, a main file that opens a window takes 3 lines. Of course the final version is meant to be written in Vala/C, but I still need to convince myself that Vala+libsoup isn't an option that is going to waste too much time. At first glance libsoup looks easy to use (it allows building XML-RPC requests, fetching the HTTP bodies and sending messages), but it is not an XML-RPC client, and you never know how well the Vala bindings will play. This means extra attention for small things. Starting an application from scratch with such constraints is usually a big time-killer, therefore reusing an existing XML-RPC client, as in this case, is very important. The GUI is done with Glade in GtkBuilder format, so reusing it from a new language will be pretty easy.
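
To illustrate, such a three-line main file with ruby-gnome2 looks roughly like this (a sketch, not eatmonkey's actual main):
require "gtk2"

# Open an empty top-level window and enter the GTK+ main loop.
Gtk::Window.new("eatmonkey").show_all
Gtk.main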


So what's next? I'll just wait for some feedback to see what the audience thinks about it, if anything, and polish here and there. Stay tuned for the next update.

Messing up with Vala (again)

  • December 23, 2009
  • Mike Massonnet
First some good news. I didn't look closely enough into the possibilities offered by Automake 1.11 when I first wrote the post about building Vala projects. Automake 1.11 is all about making releases without the end users having to compile Vala, just as described in the Automake documentation. From now on I will apply this wherever possible.

I updated the Xfce4 Vala bindings with libraries from the 4.7 stack. In there I have updated the panel plugin example, and as you can see the Automake file is extremely short. When a SOURCES variable contains a Vala file, Automake creates Vala compilation targets for each compiled program or library, and generates one vala.stamp file per target. This has its pros and cons. In the case of the Notes plugin, it prevented me from mixing C-only code and Vala inside the same directory. In reality I used to have a single main file for the panel plugin, compiled to C either for the 4.7 version or for earlier ones. Automake makes the Vala-specific targets visible outside the scope of the "if PANEL47 ... else ... endif" block. I ended up with self-compiled Vala for each target in maintainer mode only, as previously, which is a small overhead for those specific targets.

Another nice thing about Vala is that bindings are just files. I compiled the Notes plugin against the Xfce 4.6 panel on my netbook just to verify everything was alright, but unfortunately there were some problems. I had bumped the required version of Vala to 0.7.8, which already has GTK+ bindings for 2.18, while I only have GTK+ 2.16 available. The simple thing to do was to download the GTK+ bindings from the version of Vala I used previously and copy them into a location inside the project (or system-wide). As long as the Vala compiler knows where to pick them up (with "--vapidir=") it will choose them instead of the ones provided by default. This makes it awesomely easy to provide customized bindings, for example.
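
In the Automake file this boils down to a rule along these lines (a sketch with hypothetical variable names):
# Pick up the bundled gtk+-2.0.vapi from a project-local vapi/ directory
# instead of the compiler's default bindings.
vala.stamp: $(plugin_VALASOURCES)
	$(VALAC) --vapidir=$(top_srcdir)/vapi --pkg=gtk+-2.0 $^ -C
	touch $@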

Vala can always be very time consuming, but I still like it! Just like git merge by the way.

Build a project with Vala

  • September 2, 2009
  • Mike Massonnet
This post is about using Vala in a project while still shipping the generated C code in releases. I think this is essential: releasing source code that has to be built from Vala is wrong. Vala will always rewrite the code to GObject C, but it has already been shown that compiling the same code with two different versions of Vala can fail. So if your releases require Vala, you will break them sooner or later. Another good point: when the Vala code is compiled on top of patched vapi files, doing only C compilation in the releases drops the requirement to apply them.

I'll take the Autotools as an example; if you are using a different tool-chain you can surely adapt it. The idea is simple: the Vala sources are only compiled in maintainer mode. When you compile the application from the development branch, you will usually have a script called autogen.sh to build the configure script, which is then automatically executed with the parameter --enable-maintainer-mode. When building from the distribution tarball created with make distcheck, the configure script will not be run with that parameter (unless specified by hand) and the sources to build from will be the generated C files.

The example below is very generic and can be copy/pasted but should be adapted.

Autoconf script

1. The initialization of Automake and the maintainer mode in the autoconf script. The Automake version is checked for 1.11, which is the first version that comes with Vala support. The extra dist-bzip2 argument is there to additionally provide a bzipped distribution tarball, as you guessed.
AM_INIT_AUTOMAKE([1.11 dist-bzip2])
AM_MAINTAINER_MODE()
2. The check for Vala, only in maintainer mode. AM_PROG_VALAC defines the variable VALAC, which can be reused inside the Makefile.am files, and accepts an optional version check.
if test "x$USE_MAINTAINER_MODE" = "xyes" ; then
AM_PROG_VALAC([0.7.4])
if test "x$VALAC" = "x" ; then
AC_MSG_ERROR([Cannot find the "valac" compiler in your PATH])
fi
fi
3. It is possible to sum up the build configuration at the end of the autoconf script.
echo
echo "Build Configuration:"
echo
echo "* Maintainer Mode: $USE_MAINTAINER_MODE"
if test "x$USE_MAINTAINER_MODE" = "xyes" ; then
echo
echo " * Vala: $VALAC"
echo
fi

Automake script

1. The declaration of the Vala sources and their respective compiled C sources.
product_VALASOURCES = \
	obj1.vala \
	obj2.vala \
	main.vala

product_VALABUILTSOURCES = $(product_VALASOURCES:.vala=.c) product.h
2. Use the special BUILT_SOURCES variable to build the given targets before running a dist with e.g. make distcheck. This is usually done in maintainer mode, as in this case, to be sure the releases won't have anything to do with Vala.
if MAINTAINER_MODE
PACKAGES = --pkg=gtk+-2.0
BUILT_SOURCES = vala.stamp
vala.stamp: $(product_VALASOURCES)
	$(VALAC) --vapidir=$(srcdir) $(PACKAGES) $^ -C -H product.h
	touch $@
endif
3. The final sources for the product are filled in with the built Vala sources (the generated C files). The Vala sources themselves are not passed to any SOURCES variable, which is why they are added to the special EXTRA_DIST variable.
product_SOURCES = \
	random-source.c \
	random-header.h \
	$(product_VALABUILTSOURCES)

EXTRA_DIST = $(product_VALASOURCES)

if MAINTAINER_MODE
CLEANFILES = \
	$(BUILT_SOURCES) \
	$(product_VALABUILTSOURCES)
endif

That's it

There are many existing Vala projects nowadays from which you can pick up new ideas, and this post is just one example amongst many others. The full example is available in the xfce4-vala bindings.

Update: I corrected some mistakes in the script portions above. If VALAC is unset the configure script must quit, otherwise the resulting Makefiles will have empty commands instead of /usr/bin/valac. Also, the generated header file must be added to product_VALABUILTSOURCES, otherwise it would be left out of distributions as it isn't passed to any product_SOURCES nor EXTRA_DIST variable.

Notes, notebook, tabs

  • July 12, 2009
  • Mike Massonnet
The notes plugin has featured a notebook since the port to Xfce 4.4, and in the last release I hid the tabs and gave a new navigation bar a try. It seems you like the tabs, so no worries, they will be back in the next release.

Update: Revision 7717 has an option to show the tabs.