I wonder
Posted by Mike Kristoffersen 28 Aug, 2009 02:35:08
Why are you reading this blog post? Is it because you know it's the best use of your time, because you hope it will be entertaining, or because you have to read it to see if it contains important information?
I hope this particular post holds a view on an issue that others might find interesting - that's why I'm writing it. But whether that information is interesting to you in particular, my dear reader, is another question, one that I can't answer, and one of the topics I will try to cover.
My point is that with this blog, as with many others on the net, the content and relevance vary a lot. I sometimes write a blog post when I get a link error while building Firefox and tell how I resolved it - I can see from the search strings that hit the blog, and from the feedback I get, that some people are actually being helped by this. But does that make it relevant enough to make noise about it on planet.mozilla.org?
Actually, I blogged for a while without attempting to be on Planet Mozilla, because I felt it was irrelevant to most of the readers - I mean, what is the likelihood that you have a link error and look at Planet Mozilla trying to resolve it? You are much more likely to hit Google with the error message and try to solve the problem that way.
If you look at this blog post, what is the chance that you googled to learn about my opinion on how information flows in Mozilla? Pretty low, I guess - I mean, why would you search for such a thing? My point being that where the link error is clearly something you search for (or pull) when you have the need, a post like this needs to be pushed more or less directly to reach its intended target audience.
If I were a close friend of every human in the universe, I would of course know who would be interested in this blog entry, and I could send them a mail with a link and ask them to have a look - but to be honest, I don't even know every human in the Mozilla community, and I would never be able to know the exact audience that would gain anything from reading these lines.
So we have RSS feeds and Planet Mozilla to give people an option to filter and read for themselves what they find interesting, which I guess is better than nothing - but not ideal, as there is just too much information of limited relevance. (Not in the sense that most posts are bad or irrelevant in themselves - the vast majority clearly have a purpose and relevance, and many are well written (and illustrated) - it's just that not everything has the same relevance for everyone.)
What I think could be beneficial would be an edited version of Planet Mozilla, where the individual posts were grouped and prioritized in a newspaper style, with "hot" and community-wide important stuff (like a new Firefox release) on the front page, described in a few lines. It should have a nice layout with new, important stuff at the top: a short description and then a link to the real "content" (really, look at online newspapers and you'll know what I'm talking about). Related posts should link to one another - it might even have an "other people who looked at this post also looked at" feature like some Internet shops and auction sites have.
Yes, this would mean there would be an entity whose function it would be to judge the importance of one post relative to another - I don't see anything wrong with that; if I lost trust in the judging entity, I could just switch to a plain feed of "latest" posts, giving me what Planet Mozilla is today. I'm not talking about an entity that would be able to censor anything but illegal stuff - the task would be to group, organize and prioritize.
Who knows, there might even be someone out there who could automate this, so articles... sorry... blog posts would be prioritized by how many times they were shown, and it would be the readers who grouped the items, by checking flags or something...
The whole idea would be to make it easier for the majority of readers to see at a glance whether there is something important going on in the community.
If we imagine such a system being built, it might even be possible to extend it to cover what we have in the newsgroups today. As an example, how many people know that we now have a common coding standard across modules in Mozilla - that all new code must have a 2-space indent and a max line width of 80? (With the usual rule: keep the style of the file you are editing, but if you refactor or create new stuff, it is no longer up to the module to decide the indentation.)
The setting of the indentation was hidden inside a thread somewhere in the newsgroups, with a title that indicated it was about whether or not we should have a common coding standard at all. Now assume I didn't care about coding standards - why would I do anything with that thread other than mark it as read? How would I know that we now have a common coding standard, and just as importantly, how would I know what it says?
I'm far from reading all the newsgroup posts or blog posts being made in the Mozilla world - at best I skim the titles to see if something catches my eye, but I don't have time to read all of it, nor to keep an eye out for discussions on IRC - not if I'm going to do some actual work too. But imagine we had a page I could go to in the morning to see which topics and discussions were hot: if I could click on "development news", and it said in blinking red that the coding standard was about to be changed, I think we would all save time and, more importantly, generally be more up to date on the stuff that is important for us as individuals in the community, as there would be a single entry point into the information.
Why did I reboot?
Posted by Mike Kristoffersen 03 Aug, 2009 16:51:58
If you get this error when trying to build Fennec/Firefox for the PC on an Ubuntu/Linux platform, then it can be fixed by installing the ALSA library. What might be nice to know is that the name of that library is "libasound2-dev", so:

~$ sudo apt-get install libasound2-dev
(Or you can use the Synaptic package manager to install the same package if you prefer a GUI tool)
If you get an error like:

configure: error: Couldn't find curl/curl.h which is required for the crash reporter.
Then the library to install is called libcurl4-gnutls-dev (at least that was the library that fixed it for me - it might have been one of the other libraries dragged in by this one that did the trick):

~$ sudo apt-get install libcurl4-gnutls-dev
I wonder
Posted by Mike Kristoffersen 27 Jul, 2009 17:27:58
The most explicit Mozilla coding standard I have been able to find is called the coding style guide and is found here: https://developer.mozilla.org/En/Developer_Guide/Coding_Style
It is, however, apparently not complete, and it's more respected by some than by others - I'll come back with generic examples of this in coming posts.
Rather than just point fingers at what I think is wrong, I have chosen to attempt to start a debate about what we want to accomplish with our coding standard - because this is not clear to me at the moment, and I think we need a debate on it.
When we start to have a common understanding of why we want it, we can write this as the introduction to the coding standard, and we can go into a discussion of the individual rules and recommendations.
My primary reason for doing it this way is that, as long as we don't have a well-defined purpose for the coding standard, it is more a question of who likes what, or who randomly joins a discussion, than a judgment of whether a given rule or goal will bring us closer to the overall goal.
Some example goals could be (note: these goals are examples - they might or might not be what I think should be our goals, nor are they all the goals I could think of):
1a) Make code execute as fast as possible
1b) Make code easy to maintain
1c) Make code easy to write
1d) Make code look esthetically pleasing
It is my strong opinion that the overall goals for a coding standard shouldn't be concrete things like:
2a) To have a common naming convention
2b) To keep code complexity down (for any of the ways to measure complexity)
2c) Make code compile as fast as possible
2d) Automate as much as possible
since these are ways to accomplish overall goals. For example, "2a) To have a common naming convention" could be a way to get "1b) Make code easy to maintain", but it isn't a goal in itself.
I have cross-posted this to both my blog (developer.mikek.dk) and mozilla.dev.platform to maximize exposure - discussions are welcome in both forums, but I recommend we keep it to the newsgroup for now. I'll try to keep a summary on the wiki at https://wiki.mozilla.org/Purpose_of_coding_standard
Mozilla coding hints
Posted by Mike Kristoffersen 13 Jul, 2009 16:20:17
So today I learned the hard way the meaning of nsAutoPtr<>. I started to use it when I copied a piece of code from another component that did something similar to what I was doing. What I didn't realize was the true purpose of nsAutoPtr<>, which led to a... shall we say crash in Fennec!
I (wrongly) assumed it was some magic kind of pointer that you could assign to and use as a normal pointer, well you can - if you know how it's supposed to work.
I imagine that nsAutoPtr was created to help developers prevent one of the common mistakes, namely forgetting to release (delete) an object that has been created (new'ed) dynamically. It behaves very badly, however, when you try to store pointers to the same object in multiple places.
Let me first explain how I now understand nsAutoPtr<>. An nsAutoPtr<> should be seen as a simple pointer that remembers the pointer you assign to it - but if it already holds a pointer to something when you assign something new (NULL or another pointer), it first deletes whatever it held previously:

// myObj auto-initialised to NULL
nsAutoPtr<myType> myObj;
myObj = new myType(A);
// myObj now holds a pointer to myType(A)
myObj = new myType(B);
// The previous content myType(A) has
// been deleted and myObj now holds a
// pointer to myType(B)
myObj = NULL;
// myType(B) is now deleted
If you get a pointer back through an argument of a function call, this is the way to do it:

// prototype for example func
MyFuncReturningAnObject(myType **);
// When calling the function
// Don't do like this:
MyFuncReturningAnObject(&myObj); // THIS IS WRONG!!!!
// Do like this:
MyFuncReturningAnObject(getter_Transfers(myObj)); // This is CORRECT!!!!
// Or declare a tmp object as a normal pointer
myType *tmpObj;
// Get a pointer to the object you wish to keep
MyFuncReturningAnObject(&tmpObj);
// Store the pointer
myObj = tmpObj;
Hence the good thing about nsAutoPtr is that as long as you only have one pointer to each object you are fine and keeping within the intended use of it, but when you need more complex patterns, you better be very careful about ownership and lifetime, or use something else.
Let me illustrate with an example:

nsAutoPtr<myType> myObj1;
nsAutoPtr<myType> myObj2;
myObj1 = new myType(A);
myObj2 = myObj1;
// BE CAREFUL!!! - myObj1 now holds a NULL pointer
or another bad usage:

myType *myRawPointer;
myObj = new myType(B);
myRawPointer = myObj;
// So far so good, myObj and myRawPointer both point to the same object
myObj = NULL;
// What myRawPointer points to has now been deleted!
or totally wrong, as if you could scale wrongness (don't try this at home):

nsAutoPtr<myType> myObj1;
nsAutoPtr<myType> myObj2;
myType *myRawPointer;
// DO NOT ATTEMPT THE FOLLOWING
myObj1 = new myType(C);
myRawPointer = myObj1;
myObj2 = myRawPointer;
// So far so good, all point to myType(C)
// but beware - your code is doomed -
// as in "crash pending"!!!
myObj1 = NULL;
// The object is now gone, but even if you don't use
// any of the other variables, the code WILL go
// wrong when myObj2 goes out of scope, as
// the nsAutoPtr<> will try to delete whatever
// myObj2 points to at that time - assigning NULL
// to myObj2 will only make it crash sooner
So this last one was what I attempted, with the two nsAutoPtrs wrapped into some third-party code, different threads and a couple of function calls - a lesson was learned :)
GStreamer
Posted by Mike Kristoffersen 07 Jul, 2009 22:27:16
I have now been working on the GStreamer integration in Fennec for some time, and it is time for a status update on it.
The integration is going well, but has been haunted by some issues, mainly to do with the different behavior of GStreamer on the PC and on the device - it is currently unknown to me how much of this can be attributed to the fact that the version of the GStreamer library is different in the two cases, and how much is due to other factors.
My target device, the Nokia N810, comes with version 0.10.13 of the library, while the current version on my PC is 0.10.22.
The first iteration of the integration was based on work done by doublec (see Bug 22540, which holds the history for the work), using playbin as the decoder.
The result on the device of using playbin was that the audio part of the video played back as expected, but the video was following too slowly (i.e. it didn't play back at the proper frame rate). Another issue with that solution was that playbin used the GStreamer network routines to fetch data from the Internet, where we in Fennec/Firefox would like to use necko as the source of data.
What I did was write a native element to communicate first with necko, and later abstract it away from the basic necko interface to use nsMediaStream as the source of data. This element functions as the data source in the GStreamer pipeline that is built when media content should be played.
During this time I have also moved from using decodebin to decodebin2, as the folks over at #gstreamer told me that decodebin won't be able to handle the audio playback as needed in the version found on the N810 (as a side note, the original playbin solution also uses decodebin2 internally).
Talking about audio, let me explain some of the audio issues I have noticed. During development I have been almost exclusively testing with mpeg clips, as these were the first that came up when I was looking for something to test with. There is no default GStreamer element on the N810 that can decode the audio part of these to a raw format; this means that decodebin and decodebin2, if left alone, will just send an "unknown-type" signal and leave the source bin with the audio stream dangling.
I haven't found a way to link the "unknown-type" pads to anything - but with help from one of the guys on the #gstreamer IRC channel on freenode.net I got it working by using decodebin2 and the "autoplug-continue" signal.
The "autoplug-continue" signal is emitted every time a new source pad is found, and depending on the return value from your handler of the signal, it will either continue to try to decode the stream, or link the pad to itself and inform you about this with the "unknown-type" signal.
Different behavior and the problem with volume control
A difference in the behavior on the PC and on the target is that while on the PC decodebin2 finds elements that can decode the audio part of an mpeg stream to a raw format, such a decoder isn't found on the N810.
On the N810 the decoding of the audio/mpeg stream is done by a special element, "dspmp3sink", that also takes care of the actual audio playback. This sink isn't considered by decodebin2 - so the trick on the target is to use the "autoplug-continue" signal as described above and abandon the autoplug process when an audio/mpeg stream is found.
There is one important difference between the PC and the N810 here, though... on the PC we get a raw audio stream that can be linked to different audio-manipulating elements, like volume control etc.; on the N810 it's the mpeg audio stream we get out of the decodebin2 element, and you can't link this stream to the volume control element (it's expecting a raw stream of numbers it can scale, not a compressed stream).
I'm sure there is a way around this, but I'm also sure that I haven't found it yet :)
Another thing about audio is that the version of the integration currently on my computer is hard-coded to use the "dspmp3sink" element; if the audio format isn't supported by this sink element, playback will fail.
Drawing video frames
Initially I forwarded an invalidate event to the main thread for each video frame that was decoded by decodebin2 - this had an unwanted effect, as the decoding and the displaying engine ran in two separate threads.
The unwanted effect was that it might decode a handful of frames before the drawing thread started to draw; it would then invalidate the screen as many times as there were piled-up invalidate events - not the best use of CPU cycles :) Btw, I can't say whether it actually resulted in the same number of redraws, as they should be coalesced until the screen is actually redrawn.
The current solution ensures that there is only ever one invalidate pending, but it looks like GStreamer is still trying to decode every video frame, which in turn takes its share of the CPU cycles. It would be better to skip at least the color-space conversion for the frames that aren't going to be shown anyway (in order to keep the CPU load low enough to keep the duration of the video correct and the video in sync with the audio).
One could argue that the above is expected, as using a fakesink - which is currently the way the video frames are extracted from the pipeline - is considered a "hack" in the GStreamer docs; the recommended solution is to write a dedicated element to do this, which must be my next task :)
This might also fix an issue I see sometimes, where the audio is playing back but there is absolutely no update of the screen until the very last frame of the video.
Random
Posted by Mike Kristoffersen 01 Jun, 2009 00:05:26
I just returned home this evening after spending 4 days in Copenhagen. The weekend was for the Mozilla Maemo Danish Weekend - it was a really nice event, and great to meet people from the community. The event was hosted at ITU in Copenhagen, and pictures are available on Flickr here.
Why did I reboot?
Posted by Mike Kristoffersen 30 May, 2009 18:18:21
I partitioned the disk of my MacBook Pro into three parts (one for OS-X, one for Linux and one as swap space) since I gave up booting Ubuntu from a USB drive.
Got it kind of working, but had the choice of booting Ubuntu from the USB or WinXP from the internal drive - since I couldn't have both and really wanted a "native" Ubuntu boot, I removed WinXP from the system (I can run WinXP from a virtual machine inside Ubuntu if I want to).
After the successful install I started to get a problem with seemingly random freezes of the Mac - bummer. It took me a day or two to figure out that it was because the machine was overheating. Yes, it was warm, but it has always been warm - guess it just got a little warmer :)
So how to fix it? I made a small app that monitors the temperature of the system and then increases the MinFanSpeed if it gets warmer - and decreases it again if it gets colder.
This is safer than directly controlling the fan speed, as the worst that can happen if my program crashes is that the fan speed is set too high.

BatTemp Monitor
Temp ↑65.5°C MinFanSpeed 5200 LFanSpeed 5191(5200) RFanSpeed 5191(5200)
The output from the program above tells me that the current max temp measured from any of the temp sensors inside the MacBook is 65.5°C and going up - the min fan speed is set to 5200 RPM, and the actual speed of the left and right fans is 5191 RPM, with a set target of 5200.
The program takes measurements every second, increases the fan speed if the temp is > 65.5°C and going up, and decreases the fan speed if the temperature is < 65.0°C and going down. A simple program - and the best part is that my machine hasn't had a freeze since I started using it :)
Why did I reboot?
Posted by Mike Kristoffersen 30 May, 2009 17:18:21
So I had the problem that directories with read and write enabled for all users on the system came up in a hard-to-read color on my MacBook Pro.
So type export and see if you have an LS_COLORS entry; if you do, you can change the ow=xxx part to what you like (or any of the other entries).
The meaning of the numbers is:

Attribute codes:
00=none 01=bold 04=underscore 05=blink 07=reverse 08=concealed
Text color codes:
30=black 31=red 32=green 33=yellow 34=blue 35=magenta 36=cyan 37=white
Background color codes:
40=black 41=red 42=green 43=yellow 44=blue 45=magenta 46=cyan 47=white
(Copy-pasted from the output of dircolors --print-database.) You can set the new value with export at the command line, or make it permanent by copying it, without the "export", to your ~/.bashrc file (I put it at the end).