Ubuntu Indicator plugin for Pidgin

I’ve been a loyal pidgin user for a long time, and for the last couple of years it’s sat somewhat uncomfortably on the Ubuntu desktop. Obviously, Empathy became the default IM client a while back, but the more troublesome part, for me, has been Unity dropping support for the well established system tray icon specification in favour of their own Application Indicators. Ignoring the relative merits of the standards, and the questionable claim that you can unilaterally deprecate a widely used standard, Ubuntu has been good at providing indicator replacements for all the tray icons I care about, with one notable exception – Pidgin. Rather than providing a pidgin icon, they instead provided integration with the central messaging indicator. While this is a fine aspiration, I find the messaging indicator a very poor replacement – it doesn’t offer reasonable behaviour for showing and hiding pidgin and has problems dismissing new message notifications.

Initially, it was possible to turn on the system tray compatibility function in Unity with a simple dconf setting, but in 13.04, this was changed so that it would only work for java, and nothing else. In turn, this led to the creation of ppas for 13.04 and 13.10 to provide patched versions of Unity with the general purpose system tray restored (a one line change, apparently). I’ve been running with that for a while, but I didn’t want to swim upstream on this issue forever, so I decided to write a pidgin plugin that provides a proper indicator, with the same menu and behaviour as the tray icon.

It turned out to be an interesting exercise – creating indicators is extremely simple – all credit where it’s due – with the main challenge being building the menu without reinventing a wheel that’s already present inside pidgin. The pidgin tray icon (docklet is the internal name) is not a plugin, although there is a partial concept of different providers. Unfortunately, the interface can’t be used to drive an indicator as it assumes it can show the menu itself, while indicators require the menu be shown by the indicator. Ultimately, I had to copy the docklet code into my plugin to make the necessary modifications.

It would be possible to modify the docklet interface in pidgin to allow for an indicator provider with minimal impact on the existing providers, but I wanted to offer a working solution without requiring a newer version of pidgin, never mind the complexities of feeding changes upstream, etc. But, there’s an aspirational project there.

So, without further ado, I’d like to offer this plugin to all the stubborn people who want pidgin to work the way it always used to in Unity.

At some point I’ll get around to producing a deb for it, but just source for now.

Enjoy!

GVFS MTP Updates: Direct I/O and filenames in URIs!

Hi Everyone,

It’s been a while since my last update (over a month!) so it’s a good time to talk about what’s been going on.

Firstly, GVFS 1.16 is out – so that’s the first stable release with the MTP backend in it. w00t!

Before you wonder, it doesn’t include my work to support the Android Direct I/O extensions (that allow normal read/write access to files on the device). I’ve now got those to a point where I’m ready to get them in, but I’m waiting on a review in bugzilla. Since my last update, all the libmtp changes have been merged and released in version 1.1.6.

The second big thing I’ve done is completely change how mtp URIs work. In previous posts, I’ve talked about how I was putting entity IDs as path elements to save having to maintain an ID->filename mapping, and then relying on the gvfs display and copy name properties to make the files appear to have normal names when looked at. I ultimately decided to abandon this approach for a couple of reasons. The main one is that with Direct I/O support, every application that can operate on files can be used with an MTP device, and most of those apps don’t know anything about gvfs and can’t use the special properties. The second reason is that there are edge cases where it’s impossible to tell if you’re looking at a filename that’s all numbers or an entity ID. So, I’ve added a mapping system and URIs now use filenames.
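A rough sketch of the shape of that mapping (the names and structure here are illustrative, not the actual gvfs code): each directory listing refreshes a bidirectional map between filenames and MTP entity IDs, so a filename-based URI can always be resolved to the ID the device actually understands, and a file literally named “65537” can no longer be confused with entity ID 65537.

```python
class EntityMap:
    """Toy per-directory mapping between filenames and MTP entity IDs."""

    def __init__(self):
        self.name_to_id = {}
        self.id_to_name = {}

    def refresh(self, listing):
        # listing: (entity_id, filename) pairs from a fresh device query
        self.name_to_id = {name: eid for eid, name in listing}
        self.id_to_name = {eid: name for eid, name in listing}

    def resolve(self, name):
        # Filename from the URI -> the ID the device actually understands.
        # Lookups always go through the map, so an all-digits filename
        # is never mistaken for an entity ID.
        return self.name_to_id[name]
```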

Finally, I’ve fixed a bug in gvfs that only got triggered when unmounting an mtp device in Ubuntu 13.04 betas. The code in question hasn’t changed in gvfs for a long time, but the bug didn’t appear anywhere else. Still, there is a real code problem in there, so I’ve got a fix out for it.

I’ve updated my PPA with builds that contain all these pending patches (although the raring gvfs got updated while mine was building, so it’s now considered out-of-date) and the new libmtp, so please try the new stuff out.

For the curious, here are the GNOME bugzilla entries tracking these changes:

Enjoy!

Normal file read/write support with the GVFS MTP backend!

A couple of weeks ago, Han-Wen Nienhuys, the author of go-mtpfs, pointed out to me that Android’s MTP implementation includes a set of methods that allow you to do normal read and write operations on files without having to do the whole download/upload dance. With these extensions, you can expose files in the way that most people expect – you can just open a text file, picture, video etc, make changes and save it back. As a bonus, this functionality also allows you to do very useful operations like copy or move a file on the device.

I’ve now had a chance to put together an initial implementation of support for these extensions, and my PPA is in the process of rebuilding packages, so people can try them out easily. I’ve not started the upstreaming process on the GVFS changes as I still need to get the libmtp changes approved and upstreamed, but the libmtp maintainer has been AWOL for a few weeks now.

Obviously, it’s important to remember that these extensions are Android specific and won’t help you if you have a non-Android device, nor if your Android device doesn’t use Google’s MTP implementation (which, unfortunately, includes most Samsung devices).

You can grab Ubuntu packages from my ppa and the source is available on my github page.

gvfs MTP backend is merged!

At last! I’m happy to report that I merged the MTP backend to gvfs master yesterday. It’ll show up in the upcoming 1.15.2 release, and for Ubuntu users, I’ve updated my PPA to include builds for Precise, Quantal and Raring.

Enjoy!

More gvfs MTP backend news

Hi everyone,

Happy New Year! And the new year brings new updates on the gvfs MTP front. I received a bunch of useful feedback from the gvfs maintainers last month, which I was finally able to sit down and address over the last few days. Accordingly, I’ve made a series of updates that fix a variety of things, from small memory leaks all the way to, finally, implementing the right way to tell Nautilus to handle directory downloads/uploads (you’ve got to return a specific error code) – which fixes the one remaining functional gap in the code. Uploading a directory is still not fully working, as I need to handle the way Nautilus ends up referring to uploaded directories by name when trying to upload their contents. Right now there’s no logic to remap the name to the MTP entity ID, so the file uploads fail, but I know what has to be done.
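If I recall correctly, the GIO error in question is G_IO_ERROR_WOULD_RECURSE: the backend refuses to copy a directory, and that refusal tells the client to list the directory and copy each child itself. Here’s a toy Python sketch of that division of labour (nothing here is the real gvfs API):

```python
class WouldRecurse(Exception):
    """Stand-in for the specific GIO error a backend returns for a directory."""


class Backend:
    def __init__(self, tree):
        # Toy device tree: dicts are directories, strings are file contents.
        self.tree = tree

    def lookup(self, path):
        node = self.tree
        for part in path:
            node = node[part]
        return node

    def pull(self, path):
        node = self.lookup(path)
        if isinstance(node, dict):
            raise WouldRecurse("/".join(path))  # "I won't recurse -- you do it"
        return node


def download(backend, path, out):
    """What the client does: on WouldRecurse, list the directory and recurse."""
    try:
        out["/".join(path)] = backend.pull(path)
    except WouldRecurse:
        for child in backend.lookup(path):
            download(backend, path + [child], out)
```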

I think we’re nearing the finish line, with respect to getting this merged upstream. *phew*

As always, the easiest way to try the code out is to install the packages from my PPA. I have also put a build of libmtp 1.1.5 in there so that unlock events and thumbnails work out of the box too.

Enjoy!

gvfs MTP backend update

Hi again,

It’s been quite a while since I wrote about my gvfs MTP backend work, but that doesn’t mean nothing has happened in the meantime. Since then, I’ve improved the functionality quite a bit, including submitting patches to libmtp to support grabbing thumbnails and detecting “Add Storage” events (which you want to do so that when someone unlocks their phone, the phone storage automatically appears). I’ve also started the review process for submitting upstream (see GNOME Bug 666195), so hopefully we’ll see it upstream in the next couple of months.

More practically, and the main reason for writing this post, I’ve finally got around to setting up a ppa to host builds of gvfs with my patches applied. Learning how to set up a ppa was interesting, and pretty painless – so the end result is working packages for Ubuntu 12.10. Note that due to 12.10 only including libmtp 1.1.4, neither of the features I mentioned above is enabled in these builds (so you’ll have to refresh your nautilus window after unlocking). Perhaps I’ll throw a build of 1.1.5 in there too at some point.

You can find the ppa here. Enjoy!

Native gvfs backend for MTP devices

Hi again! It’s been a while since I’ve had something to write about, and it’s filesystems again, but with less April Fool’s.

What is MTP?

MTP is a standardised protocol that was originally designed to allow a PC to effectively manage the contents of a media player device – specifically, audio, video and image files. It is, in turn, based on an older specification called PTP (Picture Transfer Protocol) that was designed for use with cameras. Note that neither of these use cases has anything to do with managing the contents of an arbitrary filesystem. Of course, you can read more about MTP on wikipedia.

Why should I care?

Well, most people didn’t need to care about MTP for a long time – the chances were pretty good that their media player device didn’t use MTP (it either used USB Mass Storage or was an Apple device with its own crazy protocol), and their camera had a reasonable chance of using USB Mass Storage, and in the worst case you could always eject the memory card and use a reader with it.

However, since Android 3.0 (Honeycomb), Android devices have stopped using USB Mass Storage for PC connectivity, and switched to MTP. Now wait, you say, why would you use MTP to manage the contents of an arbitrary filesystem – a very good question. The primary reason is that USB Mass Storage is a block level protocol, and consequently operates below the filesystem layer. This means that it can’t be used to share a filesystem between the phone/tablet and the PC – only one device can read/write at a time. In older Android devices, this meant having a separate partition or memory card that was inaccessible to the phone while the PC was using it. But, from Honeycomb onward, Google wanted to have a more unified filesystem on the device, and not have to worry about ensuring there was a storage area that could be unmounted at random times. MTP may be an ill-fitting choice, but it’s the only standardised protocol which offers the key required feature – that being that you can have both the phone and PC use the filesystem at the same time.

This is possible because MTP treats files as the atomic data unit, rather than blocks. So you read and write whole files, and nothing smaller. At that point, the PC interacts with the filesystem pretty much like any other application running on the phone.

Ok, so I have to care about MTP to access files on my phone. Why’s that worth talking about?

Well, here’s the kicker. Your shiny new Android phone uses MTP, and there are a plethora of applications and components for Linux that notionally can manage MTP devices. Unfortunately, they are all limited in various ways that make them pretty much unusable. The most common flaw is that the tools use an MTP library call that attempts to enumerate the entire device filesystem in one go. This causes the initial connection to be very slow, and on the newest Android devices, it flat out doesn’t work, as the phone end will timeout the operation before it completes. This ends up taking out every single tool on the market (including: mtpfs, gmtp, banshee, rhythmbox, gvfs gphoto2 backend) except for one: go-mtpfs.

go-mtpfs?

go-mtpfs is a recent creation of a Google employee who was aware of the timeout behaviour of Android devices, and so wrote a FUSE filesystem (much like the original mtpfs) that does on-demand file enumeration, as each directory is loaded and queried (which is how the Windows MTP implementation works). The end result is quick connections, and a tool that can talk to Android devices well.

So we’re done?

Well, no, not quite. Although go-mtpfs uses MTP the right way, it can’t avoid the horrible impedance mismatch between MTP’s file-atomic model and a traditional filesystem where you can do random I/O within a file (ie: open, seek, read, write, close, etc). MTP only allows you to download a complete file, or upload one – you can’t even move a file between locations on the device through MTP (you have to download it, delete it, then upload it to the new location). Of course, FUSE doesn’t care that you can’t provide normal filesystem semantics, so you have to improvise. In this case, that means that any read/write operation requires go-mtpfs to do an elaborate and fragile dance to download a file, modify it, and upload it again, and so on. This causes simple file operations to behave strangely, and things can go very bad if you use a tmpfs for /tmp and try to access a very large file.
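To make the mismatch concrete, here’s a toy sketch (mine, not go-mtpfs code) of what a small in-place write and a simple on-device move must turn into when the underlying protocol only offers whole-file transfers:

```python
class MtpDevice:
    """Toy MTP device: whole-file operations only, no partial reads or writes."""

    def __init__(self):
        self.files = {}

    def get_file(self, name):         # download a complete file
        return self.files[name]

    def send_file(self, name, data):  # upload a complete file
        self.files[name] = data

    def delete_file(self, name):
        del self.files[name]


def write_at(dev, name, offset, data):
    """What a FUSE write() must turn into over plain MTP."""
    buf = bytearray(dev.get_file(name))    # download the whole file
    buf[offset:offset + len(data)] = data  # apply the small write locally
    dev.delete_file(name)
    dev.send_file(name, bytes(buf))        # upload the whole file again


def move(dev, src, dst):
    """Even moving a file on the device is download + delete + upload."""
    data = dev.get_file(src)
    dev.delete_file(src)
    dev.send_file(dst, data)
```

A one-byte write to a 2 GB video thus costs two full 2 GB transfers, which is exactly how a tmpfs-backed /tmp ends up in trouble.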

And so: gvfsd-mtp

And finally we reach the meat of this post. gvfs is the virtual filesystem layer that’s used by Gtk+ based desktop environments (GNOME 3, Unity, XFCE, etc). It happens to allow backends to implement a much higher level API than FUSE, and this API happens to explicitly offer ‘pull’ and ‘push’ operations (download and upload respectively). As such, it’s possible to meaningfully map functionality without jumping through crazy hoops.

So, I’ve been working for the last few weeks on a native mtp backend for gvfs. It’s heavily based on the existing gphoto2 backend, but only attempts to implement the operations that MTP can cleanly accomplish.

The end result is a filesystem like view that you can effectively browse through nautilus, and that you can download files from and upload files to with a reasonable hope of success. It’s not seamless as gvfs/nautilus do not do anything very special when you attempt an unsupported operation – so trying to just open a file will probably fail (although some GNOME apps like evince or file-roller seem to be able to detect the behaviour and will automatically download the file and open the temporary copy), but it’s functional and reliable.

Another thing I did was to avoid caching any metadata that I could avoid caching. All other MTP implementations tend to save directory listings after getting them and don’t go back to the device. But the device can be adding/removing files all the time, so it’s important to be able to get a fresh listing when you need it. To achieve that, I took advantage of the fact that gvfs lets you define ‘display names’ for files – which can be completely different from the reported filename. So I directly mapped the MTP object IDs (numbers) to the reported filename and made the real filename appear as the display name – so the view in Nautilus looks completely normal but the gvfs URI might be something like mtp://[usb:001,015]/65537/1/128/2445
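The scheme boils down to something like the following sketch (the display names here are invented for illustration; the real ones come from the device’s metadata):

```python
def make_uri(device, id_path):
    """The raw gvfs URI under this scheme: MTP object IDs as path elements."""
    return "mtp://[%s]/%s" % (device, "/".join(str(i) for i in id_path))


# Display names are looked up per object ID; these names are hypothetical.
DISPLAY = {65537: "Internal storage", 1: "DCIM", 128: "Camera", 2445: "IMG_0001.jpg"}


def display_path(id_path):
    """What Nautilus shows for the same path, via the display-name property."""
    return "/".join(DISPLAY[i] for i in id_path)
```

So the same object is `mtp://[usb:001,015]/65537/1/128/2445` on the wire, but reads as a normal-looking path in the file manager.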

This general behaviour is actually very similar to how Windows presents MTP devices – they appear as normal looking drives and folders and files in Windows Explorer but you can only really download and upload files – it doesn’t pretend to offer normal read/write operations on the files.

Ok, where do I get it?

Here! I’m continuing to work on it – I should be able to provide thumbnails from the MTP metadata, and I’d like to implement a way to copy directories back and forth (gvfs doesn’t implement recursive push/pull for you – you have to do it yourself), but it’s otherwise very usable – it has proper plug/unplug detection, and I modified the gphoto2 backend to not grab mtp devices like it used to. Eventually I’d really like to get it upstream (especially as you can’t build gvfs backends outside of the gvfs source tree) but there’s a fair way to go yet.

Enjoy!

πfs: The Filesystem of the Future!

I’m very pleased to announce that after eight years of research and development, I can present the world’s most revolutionary filesystem: πfs!

What is πfs?

πfs is a revolutionary new file system that, instead of wasting space storing your data on your hard drive, stores your data in π! You’ll never run out of space again – π holds every file that could possibly exist, so why put your files anywhere else?! They said 100% compression was impossible? You’re looking at it!

What does π have to do with my data?

π (or pi) is one of the most important constants in mathematics and has a variety of interesting properties (which you can read about at wikipedia)

One of the properties that π is conjectured to have is that it is normal, which is to say that its digits are all distributed evenly, with the implication that it is a disjunctive sequence, meaning that all possible finite sequences of digits will be present somewhere in it. If we consider π in base 16 (hexadecimal), it is trivial to see that if this conjecture is true, then all possible finite files must exist within π. The first record of this observation dates back to 2001.

From here, it is a small leap to see that if π contains all possible files, why are we wasting exabytes of space storing those files, when we could just look them up in π!

Every file that could possibly exist?

That’s right! Every file you’ve ever created, or anyone else has created or will create! Copyright infringement? It’s just a few digits of π! They were always there!

But how do I look up my data in π?

As long as you know the index into π of your file and its length, it’s a simple task to extract the file using the Bailey–Borwein–Plouffe formula. Similarly, you can use the formula to initially find the index of your file.

Now, we all know that it can take a while to find a long sequence of digits in π, so for practical reasons, we should break the files up into smaller chunks that can be more readily found.

In this implementation, to maximise performance, we consider each individual byte of the file separately, and look it up in π.
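To make the scheme concrete, here’s a sketch of per-byte lookup (mine, not the actual πfs source, which is written in C) built on a Bailey–Borwein–Plouffe hex digit extractor – the BBP formula lets you compute the n-th hexadecimal digit of π without computing the digits before it:

```python
def _series(j, d):
    """Fractional part of 16^d * sum_{k>=0} 1/(16^k * (8k+j)), via BBP."""
    s = 0.0
    # Left sum: modular exponentiation keeps every term in [0, 1).
    for k in range(d + 1):
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    # Right tail: a few rapidly vanishing terms.
    k = d + 1
    while True:
        term = 16.0 ** (d - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s


def pi_hex_digit(n):
    """n-th hex digit of pi after the point (n=1 -> '2', as pi = 3.243F6A88...)."""
    d = n - 1
    x = (4 * _series(1, d) - 2 * _series(4, d)
         - _series(5, d) - _series(6, d)) % 1.0
    return "0123456789ABCDEF"[int(x * 16)]


def find_byte(value, limit=4096):
    """Offset of the byte's two hex digits within pi, or -1 if not in range."""
    digits = "".join(pi_hex_digit(i) for i in range(1, limit + 1))
    return digits.find("%02X" % value)
```

Storing a byte means running `find_byte` and writing down the offset; reading it back means extracting two digits at that offset. (Yes, the offset takes more space than the byte. That’s the joke.)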

So I’ve looked up my bytes in π, but how do I remember where they are?

Well, you’ve obviously got to write them down somewhere; you could use a piece of paper, but remember all that storage space we saved by moving our data into π? Why don’t we store our file locations there!?! Even better, the location of our files in π is metadata and as we all know metadata is becoming more and more important in everything we do. Doesn’t it feel great to have generated so much metadata? Why waste time with old fashioned data when you can just deal with metadata, and lots of it!

Yeah, but what happens if I lose my file locations?

No problem, the locations are just metadata! Your files are still there, sitting in π – they’re never going away, are they?

Why is this thing so slow? It took me five minutes to store a 400 line text file!

Well, this is just an initial prototype, and don’t worry, there’s always Moore’s law!

Where do we go from here?

There’s lots of potential for the future!

  • Variable run length search and lookup!
  • Arithmetic Coding!
  • Parallelizable lookup!
  • Cloud based π lookup!
  • πfs for Hadoop!

πfs: Download it today!

CrystalHD improvements

Hi all,

It’s been a few weeks since I last posted, and I’ve accumulated a couple of useful CrystalHD improvements that I think are worth talking about. First off, my comprehensive interlaced detection algorithm is now merged to the main git tree, as are the changes I’m about to talk about, so there are now no outstanding changes to merge.

Downscaling

The CrystalHD hardware is capable of downscaling, so that it will shrink the decoded frames before they are copied over to system memory. While all vaguely modern graphics hardware supports scaling, it’s still useful that the CrystalHD can do it, and that’s because it allows you to scale before the copy; smaller frames take less time and CPU to copy over. Normally, this isn’t an issue, but some videos can make the hardware grumpy such that the total time needed to decode and copy over the frame is more than the time available if you want to playback in realtime. So, being able to shrink the copy time can save you from an unplayable clip. Mind you, it’s a weird hardware/firmware bug that this is even an issue – I’ve been able to playback bluray video just fine but certain encoded files at much lower bitrates can trigger this slow decode behaviour.

To take advantage of downscaling you will also need to update your Mplayer as I had to make a change there to support FFmpeg per-codec command line options. With the latest code you can do:

mplayer -lavdopts o=crystalhd_downscale_width=[width]

to specify a width (eg: Use 1280 for 720p)
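To put a number on the saving: assuming a 4:2:0 output format at 1.5 bytes per pixel (an assumption on my part; the hardware’s exact output format may differ), the copy cost shrinks in direct proportion to the pixel count:

```python
def frame_bytes(width, height, bytes_per_pixel=1.5):
    # 4:2:0 chroma subsampling averages out to 1.5 bytes per pixel
    return int(width * height * bytes_per_pixel)

full = frame_bytes(1920, 1080)   # 3110400 bytes per decoded 1080p frame
scaled = frame_bytes(1280, 720)  # 1382400 bytes after downscaling to 720p
saving = 1 - scaled / full       # roughly 56% less data to copy per frame
```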

Packed b-frames

I mentioned this briefly in my last update, saying that the hardware has a bug where it would output certain frames twice when decoding a DivX/XviD video in an AVI file with packed b-frames. I implemented a work-around and thought that my work was done, but it turns out that there are at least two ways of indicating packed b-frames in a file, and one of them triggers the bug while the other does not. Sounds great, you might think – except that the files which don’t trigger the bug do still look like the files which do – which caused my work-around to kick in and ruin the playback.

So, I had to find another way to distinguish them. To achieve that, I ended up staring at a binary diff of two files and saw that they were using different frame types as placeholders for the packed frames. In the files that trigger the bug, they are “drop frames” and in the normal files, they are “delay frames”. Despite their names, I don’t believe a decoder is supposed to either drop or delay anything when encountering these; rather it’s supposed to replace them with the packed frame it received earlier. In other words, there’s a convention here that a decoder has to understand and respect, and it seems the CrystalHD is not completely up to speed with things.

With that difference identified, I was able to craft an additional test that lets us distinguish the two cases and now packed b-frame support is hopefully complete.
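Reduced to a sketch, the workaround has this shape (the real logic lives in the decoder wrapper and keys off the placeholder frame type found in the bitstream; this toy version just takes that decision as a flag):

```python
def dedupe_output(frames, buggy_bitstream):
    """Drop the duplicated frame the hardware emits for packed b-frame files.

    frames: sequence of picture numbers as output by the decoder.
    buggy_bitstream: True only when the file uses the placeholder frame
    type that actually triggers the double-output bug.
    """
    out = []
    last = None
    for pic in frames:
        if buggy_bitstream and pic == last:
            continue  # skip the second copy of the same picture
        out.append(pic)
        last = pic
    return out
```

The earlier breakage was exactly this filter running with the flag effectively always on: on the well-behaved files it threw away legitimate frames.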

70012

While I don’t have anything meaningful to report in this area, I did spend some more time poking at the 70012, and while the existing code will likely yield something pretty close to a sane video stream, I still see discontinuities in the output, where frames mysteriously disappear, which very quickly leads to audio de-sync in Mplayer (which doesn’t understand the concept of a frame that fails to be decoded, so doesn’t know to re-sync the audio). I tried a number of different approaches, all with the same result – missing frames from files that play perfectly on the 70015. I know it’s possible to make it work, as both the gstreamer plugin and xbmc can do it; however, they are very different architectures that use separate input and output threads, which is not possible in FFmpeg. Ultimately, I’m not sure the support can really be improved, given the constraints of the FFmpeg architecture. Such is life.

Update: Reimar rightly points out that Mplayer can understand frames that fail to decode; I failed to remember the problem properly. What’s actually going on is that you indicate a failed frame by returning nothing; however, we only find out by obtaining the next output frame and seeing that it’s not the one we expected. At this point, returning an error would mean having to store the output frame for the next decode call and accepting that the input pipeline would increase by one frame. If that happens enough times, the pipeline will fill up completely and then we’re in real trouble. So, rather, I’m wishing I could return a frame and indicate that other frames had failed to be decoded at the same time.
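A toy illustration of why this is awkward: failures are only visible as gaps in the output picture numbers, discovered one frame too late to report on their own. This sketch (not the real FFmpeg/Mplayer interface) yields each successful frame together with the count of frames that silently failed before it – which is exactly the pairing the decode API can’t express:

```python
def reap_frames(outputs):
    """Yield (frame_id, n_failed_before_it) from a decoder's output IDs.

    outputs: picture numbers actually produced by the hardware, in order.
    A gap between the expected and observed ID means those intervening
    frames failed to decode -- but we only learn that now, not when it
    happened.
    """
    expected = 0
    for frame_id in outputs:
        failed = frame_id - expected  # frames that silently failed
        yield frame_id, failed
        expected = frame_id + 1
```

Returning an error at the moment the gap is detected would mean stashing the already-decoded frame for the next call, growing the input pipeline by one frame each time – which is the fill-up problem described above.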

CrystalHD support now merged in FFmpeg and MPlayer

I’m pleased to finally be able to announce that my CrystalHD support patches have been accepted into FFmpeg and MPlayer – if you grab the latest source from each of the projects, you’ll be good to go. As before, you’ll need the latest driver and userspace library from Jarod Wilson’s git tree; the driver included in the Linux kernel’s staging directory and the library on Broadcom’s website are both too old.

In terms of features, the merged code doesn’t differ a great deal from my original announcement – there’s now support for interlaced VC1 and MPEG4 Part 2, but interlaced H.264 remains problematic. I’ve got patches in my github tree that get very close to full support, but there’s still a corner case which can cause one type of file to play back incorrectly.

I have to say, I’ve learnt far more about interlaced encoding than I ever wanted to know, working on this project. I think it’s safe to say that I’ve spent at least 70% of my time working on it, and it’s still not perfect. With progressive content, it’s simple – you have a compressed frame going in one end, and an uncompressed one coming out the other end – there’s really no ambiguity (modulo the hardware’s odd treatment of packed b-frames where it will output the packed frame twice and you have to skip one of them).

But with interlaced content, on this hardware, you have to deal with multiple variations of input and output packing; check out wikipedia if you want a quick introduction to interlacing. When compressed video is packed into a container (like AVI or matroska), it is typically split up into packets, and those packets will correspond to compressed frames or fields, and this is where the fun begins. With progressive content, a packet is obviously one frame, but with interlaced content, it could be one field or two fields – sometimes the container or video format will enforce that it is one or the other, and sometimes both are valid. On the output side, the hardware, in its infinite wisdom, will sometimes output individual fields or a full frame of two fields, without much rhyme or reason. And naturally, with h.264, all four combinations are possible. That’s bad enough; to add insult to injury, the flags that the hardware is supposed to set to identify the fields/frames are bogus – meaning that it’s simply not possible to distinguish three out of the four cases until it’s too late. With a little help from FFmpeg, I was able to identify one of those three formats, but for the last two, I had to use a method that relies on peeking ahead at the next frame/field – which is great when it works, but sometimes the hardware hasn’t decoded the next frame/field sufficiently to answer the question, and then I just have to guess.
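Reduced to a sketch, the peek-ahead logic looks something like this (the real code deals with actual field buffers and unreliable hardware flags, not strings; the interesting property is how many display frames come out of a given output sequence):

```python
def assemble(outputs):
    """Pair single fields into display frames by peeking at the next output.

    outputs: list of "frame" (a full two-field picture) or "field".
    A lone field whose successor isn't another field gets emitted on its
    own -- the 'guess' taken when peeking can't answer the question.
    """
    frames, i = [], 0
    while i < len(outputs):
        if outputs[i] == "frame":
            frames.append("frame")
            i += 1
        elif i + 1 < len(outputs) and outputs[i + 1] == "field":
            frames.append("frame")  # weave the two fields into one picture
            i += 2
        else:
            frames.append("frame")  # guess: emit the lone field as a picture
            i += 1
    return frames
```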

It should come as no surprise that all the other projects that support CrystalHD have punted on interlaced support 🙂

As for the future, I still need to get the improved interlaced support merged, and then I have to start looking at how to support the older 70012 hardware. This chip is very sensitive to the rate at which frames are submitted to, and retrieved from, it; the code today works well enough that you should see frames most of the time, but the hardware will do odd things and drop frames or stall, so the experience isn’t good at all. I can at least take comfort from the fact that Broadcom’s gstreamer plugin and XBMC have both got it working.

Onward and upward, etc!