Slackware Linux 12.0 on the Dell XPS m1330

I’ve just written an article about running Linux on the Dell XPS m1330 laptop. It’s long and should have lasting value (I hope!) so I didn’t type it into this blog entry. Instead it’s available here.

The quick summary is that I think it’s a great laptop and Linux runs really well on it. Bravo Dell!

Suspend not as suspenseful as once thought

Shock! A blog post that doesn’t have anything to do with SD cards. :-)

My personal life has been rather hectic over the last few weeks so I’ve had precious little time to do any real hacking, but I did manage to find time yesterday to get Linux up and running on my new work laptop. It’s a Macbook Pro, and the first piece of Apple hardware that I’ve ever had in my possession (I’m pleased to say I’ve managed to never buy any Apple hardware). It’s one of the new 15″ Santa Rosa models and it’s not too bad. Later, I think I’ll write a more detailed description of what it took to get going, but right now I just want to make an observation about suspend/resume.

And that observation is that we’ve really come a long way. I was able to do a raw suspend (echo ‘mem’ > /sys/power/state) and resume, and it actually came back to life. There are admittedly some niggles with the graphics (nvidia binary driver), which mean you need to VT-switch away from X and back after resuming to get your picture, but I was amazed to see the wireless and USB come right back.
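For reference, the whole dance looks roughly like this. This is just a sketch – it defaults to printing what it would do rather than doing it, since writing to /sys/power/state needs root, and the VT numbers (1 and 7) are assumptions for a typical one-X-server setup (check yours with fgconsole):

```shell
#!/bin/sh
# Raw suspend-to-RAM plus the nvidia VT-switch workaround described above.
# DRY_RUN=1 (the default) just prints the steps; run as root with
# DRY_RUN=0 to actually suspend.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run sh -c 'echo mem > /sys/power/state'  # enter S3; execution resumes here on wake-up
run chvt 1                               # hop off X's VT after resume...
run chvt 7                               # ...and back, to get the picture back
```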

And it gets even more amazing. With suspend/resume happiness fresh in my mind, I decided to be brave and see what my desktop machine could do. It’s an Athlon X2 based machine with an nvidia chipset and graphics and an Audigy2 sound card. Traditionally, suspend/resume has had a worse record on the desktop because you’re more likely to encounter a driver that can’t cope, but I figured I’d give it a go. The most important requirement was that the HD controller driver (sata_nv) be suspend aware – that support went in around 2.6.20 – because nobody wants to corrupt their disks when trying to suspend (I’ve been there before).

Like most desktops, my machine’s BIOS offers a choice between S1 and S3 suspend, and I tried both of them out – and surprise, surprise, they both worked! Everything came back just as I left it.

I also took the opportunity to use my friendly Kill-A-Watt to see what the power consumption was like, and I made some interesting observations. My desktop seems to idle around 120W which can rise to around 170W under non-graphics load (I don’t know what a 3D load would do, but I could see it pushing up to around 250W). In S1, it drops to 85W and in S3 to 10W. Amusingly, it still pulls 7W when ‘off’. That seems like a hell of a lot just to support Wake-On-LAN!
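Out of curiosity, that 7W “off” draw adds up over a year. A quick back-of-the-envelope calculation, using the Kill-A-Watt numbers above:

```shell
#!/bin/sh
# Yearly cost of the 7W "off" state measured above:
# watts * hours/day * days/year = watt-hours per year.
OFF_WATTS=7
WH_PER_YEAR=$((OFF_WATTS * 24 * 365))
echo "Standby draw: ${WH_PER_YEAR} Wh/year (~$((WH_PER_YEAR / 1000)) kWh)"
```

That’s about 61kWh a year just for the privilege of Wake-On-LAN.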

But the moral of the story is that a modern system with a modern kernel seems to have a pretty good chance of Just Working(tm) when it comes to suspend and resume.

Updated kernel for maemo 3.2 (4.2007.26)

I managed to find a few minutes and put together the promised patch series for the latest n800 firmware. This series adds MMC 4.x wide-bus support and restores 48MHz highspeed operation. I did some basic testing with my n800 and confirmed that transfer speeds are now back to what they were with my older patched kernels.

As I said previously, Nokia made deliberate decisions to exclude both of these features from the official kernel, so you might perhaps approach my kernel with some caution, but the older iterations worked well for most people, including myself. As always, if you break your n800, you get to keep both pieces. :-)

You can find the patches and the prebuilt kernel image here.

Missing MMC/SD features in the latest n800 kernel

As promised last weekend, here are the results of my investigations into the latest kernel in the 2007.26-8 firmware. As I previously observed, the transfer speed is capped at 24MHz even though the device is capable of operating at 48MHz. This halves the maximum transfer rate of high-speed SD and MMC cards. Apparently this was done for stability reasons, but I never encountered any problems with my patched kernels.

In addition, after some tests and a careful read of the diff, I’ve discovered that they have deliberately removed 4-bit MMC support. I know this is deliberate because they’ve included the high-speed MMC changes that went into the mainline kernel at the same time. So, if you’ve got an MMCplus or MMCmobile card, this is doubly annoying, as your card is now running at 1-bit 24MHz instead of 4-bit 48MHz. That translates to a 3MB/s transfer rate instead of 13MB/s – quite a difference. I don’t know the reasoning behind this change, but I do know that I did not observe any problems previously and never received any reports of problems. Perhaps someone (Daniel?) can shed some light on this.
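For those keeping score, the raw numbers work out like this – theoretical peak bus bandwidth is clock × bus width ÷ 8, and the 3MB/s and 13MB/s figures above are what’s left after command and protocol overhead:

```shell
#!/bin/sh
# Theoretical MMC/SD bus bandwidth in MB/s: clock (MHz) * width (bits) / 8.
# Observed rates come in lower due to protocol overhead.
bus_mbps() { # args: clock_mhz width_bits
    echo $(( $1 * $2 / 8 ))
}

echo "1-bit @ 24MHz: $(bus_mbps 24 1) MB/s peak"   # official Nokia kernel
echo "4-bit @ 48MHz: $(bus_mbps 48 4) MB/s peak"   # patched kernel
```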

On the bright side, they’ve taken the trouble to back-port the MMC-HC support (the MMC equivalent of SDHC) from the latest mainline kernel. The only catch is that these cards are impossible to find. Pierre was only able to test his MMC-HC change because Nokia provided him with a pre-production card they managed to wrangle out of some manufacturer (probably Samsung). I don’t know what the problem is, but it seems MMC is really struggling to maintain relevance, and problems like this don’t help. I have somewhat mixed feelings about it because the MMC standard is, at least notionally, a public and freely implementable one – they just charge to get a copy of the spec. On the other hand, while the full SD spec is still locked up behind a crazy NDA, they have published very useful simplified specs, which have made it easier to implement SD features than MMC ones. Ah well.

Finally, there have been multiple reports of the new kernel b0rking up cards and causing other kinds of problems (see my previous post’s comments for links and details). I don’t know why this is, and I haven’t personally observed any problems, so I can’t offer any suggestions or solutions at this time.

Anyway, the punchline of all of this is that I definitely have good reasons to build a custom kernel to restore 48MHz and 4-bit MMC support. I’ll try to work on that this week, but my personal life is pretty hectic right now, so I rather doubt it will be done soon – but I’ll certainly let you all know when it is.

Latest n800 firmware includes SDHC support

I’ve been on holiday for the last week and a half, so I missed the firmware update while I was away, but I’m sure most of you are aware that a new firmware for the n800 is out and that it includes SDHC support. This means that you don’t need a custom kernel with my patches to use these cards. I’m not quite sure what to make of the statement that support only goes up to 8GB, as I can’t think of any code that would care. The SDHC spec defines sizes up to 32GB and the FAT32 filesystem is functional (if very inefficient) at much larger sizes. So I have to assume it’s some higher level part of the stack and not a kernel issue.

I still need to do more investigation, but from looking at the osso52 patch, some of the patches in my old patch set are still not present, and it might be worth offering a custom kernel with those added capabilities.

Here’s my patch list and the current state of osso58 with respect to each:

  1. 0001-mmc-update-19: Now merged
  2. 0002-mmc-update-19: Now merged
  3. 0003-highspeed-caps: Not merged – might result in problems with some 4GB MMCplus cards
  4. 0004-low-voltage: Not merged – but not needed because of how 0005 is handled
  5. 0005-omap2-low-voltage: Not merged – but an equivalent change that hacks around the absence of 0004 is present – so the functionality should be the same
  6. 0006-omap2-highspeed: Not merged – only needed with 0003
  7. 0007-debug-output: Not merged – just debug output

So, the main missing thing is the set of timing checks that, unintentionally, make some MMCplus cards work. I will try with one of these cards soon and report whether the changes are still needed. Additionally, a separate change has been made in the Nokia kernel that caps the MMC frequency at 24MHz, where the maximum was previously 48MHz. I assume they have a good reason for this, but I had good results running highspeed MMC and SD cards at 48MHz, and there is a real improvement in transfer rate.

If anyone has specific experiences, please add a comment. I will do more investigations and report, possibly with a new kernel, later this week.

VMware Workstation 6.0

Christian and Alex have already beaten me to pointing out that the latest and greatest release of VMware Workstation is now out in the wild, but I’d still like to take a moment to congratulate them, and the rest of the team, on their efforts. I was essentially part-time on this release, with my focus on work that has yet to see the light of day, but which I hope to write about in the reasonably near future – so I have something of an outsider’s perspective on things (although, if the Linux guest multi-head support is annoying you, it’s probably my fault :-)).

It was really challenging and I think they did a great job turning out a solid product. Many people have noted that work expands to consume available resources, and that’s definitely true for us – as much as we grow the team, we’re also trying to do more – so the hard trade-off between time, quality and features is always one that has to be made.

As Christian has noted, we’ve taken additional steps to provide better desktop integration, but there are still many opportunities left in this area as more cross-desktop standards emerge and the existing ones become actively implemented. Readers with long memories will remember when I bemoaned the lack of support for the autostart spec in GNOME – it went in with 2.14, so now it’s trivial to configure an application to start up at login in any of GNOME, XFCE or KDE. It may seem like a simple thing, but if you need to do it and the support isn’t there, it’s actually a really hard problem to solve in any vaguely sane way.
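For the curious, the autostart spec really does boil down to dropping a standard .desktop file into ~/.config/autostart. A minimal sketch – the application name and command here are placeholders, not anything real:

```shell
#!/bin/sh
# Per the XDG autostart spec, any .desktop file in ~/.config/autostart
# is launched at login by GNOME (2.14 onwards), KDE and XFCE alike.
# "myapp" is a placeholder -- substitute your own command.
AUTOSTART_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/autostart"
mkdir -p "$AUTOSTART_DIR"

cat > "$AUTOSTART_DIR/myapp.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=My App
Exec=myapp
EOF
```

Delete the file and the app stops autostarting – no desktop-specific session tools required.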

SDHC kernel for maemo 3.1 (3.2007.10)

Unsurprisingly, people have been pretty anxious to get an updated SDHC enabled kernel for the latest n800 firmware release. The source was released earlier today and I’ve built a new kernel with the relevant patches. It’s available in the usual place. Enjoy!

Updated SDHC/MMC4 enabled kernel for the n800

Since my last post with my updated n800 kernel, I’ve spent some time looking at a troublesome MMC card – a Transcend 4GB MMCplus card. It’s one of those technically out-of-spec byte-addressed 4GB cards, but in theory it should work just fine. I was alerted to the card by Frantisek Dufka, who bought one and has had a lot of trouble with it in the n800, so I picked one up to take a look at what was going on. Unfortunately, I was unable to replicate Frantisek’s problem, which remains unsolved, but I did see another problem – the card would almost always freak out when switched into highspeed mode – yet it worked just fine in highspeed mode in my laptop’s SDHCI reader.

At around the same time I was looking at this, Pierre Ossman was doing some investigating of his own and concluded that it was not safe to assume that all controllers can handle the MMC and SD highspeed timings. These are close to the legacy timings, and to each other, but all three are slightly different. Accordingly, he made a change upstream to add explicit capability flags for highspeed support and to default that support to off. I took this patch and applied it to my n800 kernel, and then, being the foolhardy guy I am, marked the omap reader as highspeed capable. I did this because I knew that my other highspeed cards work fine, but I did not expect it to make the Transcend card work – yet it did!

The only change that Pierre made that could cause this was that he updated the host controller with the current card flags after toggling the highspeed mode. I believe that this update, which is technically gratuitous as the controller state is already correct, introduces a delay that smooths over whatever was confusing the card originally. Now, I can consistently use the Transcend card.

An additional thing I investigated was why my dual-voltage MMCmobile cards were not running at 1.8V even though the omap controller claimed to support low voltage operation. When I looked into this, I saw that the MMC subsystem was interpreting the voltage flags incorrectly, apparently because it was based on an inaccurate spec that originated from Sandisk. This caused the code to interpret the actual low voltage support bit as a higher voltage, and a set of reserved bits as indicating low voltage support. Once I fixed this, low voltage cards were able to run at 1.8V in the n800. This is nice because low voltage operation means lower power consumption and longer battery life. The only catch is that when I tried out the 64MB RS-MMC card that originally came with the Nokia 770, the card freaked out despite claiming to support low voltage operation; it would work fine at 2.0V or higher. Very strange.

Anyway, I’ve built a new kernel binary with all of these patches applied and it is available here. If you’re feeling cautious about my highspeed or low voltage patches, you can build your own kernel and exclude the patches you don’t want – but it would be great if you could try the full kernel out and report any problems you observe.

Update: So, I didn’t explain how to interpret the debug output, and that would probably be a good thing to do:

  • clock: The clockspeed the card is being run at.
    • MMC: 20MHz
    • SD: 25MHz
    • Highspeed MMC: 26MHz or 52MHz (clamped to 48MHz by the controller)
    • Highspeed SD: 50MHz (also clamped to 48MHz)
  • vdd: The selected voltage. There’s a long list but in practice, you’ll only see three values used.
    • 16: 3.0V in the internal slot
    • 15: 2.8V in the external slot
    • 7: Low-voltage. 1.85V in the internal slot and 1.8V in the external slot

    Yes, that means that the two slots actually run at different voltages. The external slot supports a wider range of voltages but as the lowest possible is always chosen, you won’t see them used.

  • width: The width of the data bus
    • 0: 1 bit
    • 2: 4 bits
  • timing: The timing mode. You can deduce this from the card type and the printed clockspeed, so this isn’t very profound.
    • 0: Lowspeed (same for MMC and SD)
    • 1: MMC highspeed
    • 2: SD highspeed
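To save flipping back and forth between the tables above, here’s a throwaway decoder for those values. It’s purely illustrative – the mappings are exactly the ones listed:

```shell
#!/bin/sh
# Decode the vdd / width / timing values from the n800 MMC debug output
# into the human-readable meanings tabulated above.
decode_vdd() {
    case "$1" in
        16) echo "3.0V (internal slot)" ;;
        15) echo "2.8V (external slot)" ;;
        7)  echo "low voltage (1.85V internal / 1.8V external)" ;;
        *)  echo "unknown" ;;
    esac
}

decode_width() {
    case "$1" in
        0) echo "1 bit" ;;
        2) echo "4 bits" ;;
        *) echo "unknown" ;;
    esac
}

decode_timing() {
    case "$1" in
        0) echo "lowspeed" ;;
        1) echo "MMC highspeed" ;;
        2) echo "SD highspeed" ;;
        *) echo "unknown" ;;
    esac
}

# Example: a highspeed MMCplus card in the internal slot
decode_vdd 16
decode_width 2
decode_timing 1
```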

The story of a humble codec

Here’s another tale that I’ve been meaning to tell for a while. I think it’s the last one for now. :-)

Way back in Workstation 5, we introduced the ability to record movies of activity in a virtual machine, and to this end we devised our own codec. Now, that’s probably enough to generate a hail of bread-rolls from the cheap seats, but bear with me.

We have a remoting protocol that allows for interaction with a virtual machine at a distance – it’s largely VNC with some small extensions – so it seemed very natural for us to use this as the basis for our recordings: we could just dump the VNC stream into a file and write a codec to play it back. And this is indeed what we did – except that we only ever wrote a Windows codec and never provided a Linux equivalent or useful documentation. It was a source of much frustration for me – how could we stick this feature in our Linux product and offer no credible way to play back the resulting recordings?

Naturally, the MPlayer and ffmpeg crowd didn’t take long to work out how to use the Windows codec with their Win32 loader, but that’s horribly suboptimal, and last year some people managed to reverse engineer the format (it’s pretty easy once you realise that it’s just VNC) and added native support to ffmpeg. The latest release of MPlayer features this support.

Not being a regular MPlayer user, I didn’t realise this until the beginning of the year – and once I did, I saw that there was a page in the multimedia wiki describing the format. Now that we had a ready made forum to document the format, I suggested to one of the guys who did a lot of the original work (Hi Tony!) that he should update the wiki and fill in the gaps and correct the mistakes – which he then proceeded to do. It’s a small gesture, but the ffmpeg folks appreciated it and I hope we can look forward to complete support in a future release.

A lot of the time, the failure to publish this kind of information is more a function of logistics than proprietary paranoia – it’s a lot easier to directly update a wiki page than to try and get a page added to the official company website!

Unbounded growth

As some of you may be aware, this year’s XDevConf was held at the beginning of the month (many thanks to Sun and Stuart Kreitman for putting it on!) and I was fortunate to be there for some of the talks and to give a small presentation of my own on the Virtual Multihead feature in the upcoming VMware Workstation 6.0. As you might expect, the challenge of getting X multihead to work on both the host and the guest is not to be taken lightly, and despite my best efforts, the presentation was not without complications (primarily due to a bug in some of our code that I fixed immediately afterwards…). Most importantly, my discussion of some of the limitations I had to deal with (including the old classic of not being able to resize above the initial screen resolution) led to a very productive conversation with Andy Ritger and Aaron Plattner from nVidia, where they explained how to resize as large as you want, and I was able to make the requisite changes in the driver pretty quickly that afternoon.

I think this illustrates one of the perennial problems with X and the canonical X.org server implementation – there’s a large collection of accepted wisdom and very few people who know enough to verify what is still true, used to be true, or never was true. The ability to resize the screen (or lack thereof) definitely falls into this category. Originally, we all believed it couldn’t be done – as reinforced by the VidMode extension, which would let you resize the viewport, but not the screen itself. Then Keith proved that you could resize the screen and the root window smaller than the initial size when he did the original Xrandr implementation. And for most observers, that was and remains the status quo. But most people (me included) didn’t really understand what actually prevented resizing larger than the initial size – and at the moment, it seems the only real obstacle is XAA, the old and obsolescent X Acceleration Architecture, which is on the way out in favour of EXA. If you don’t need XAA, you can resize the framebuffer as you please and grow larger than the initial size. The nVidia drivers don’t use XAA (or EXA, for that matter) in favour of their own acceleration architecture and consequently were able to fix this problem. Thomas Winischhofer has also apparently fixed it in his closed-source SiS drivers, but I’ve not seen it for myself or heard from anyone who has. The other drivers all seem to remain stuck, although the intel driver apparently supports resizing the X screen larger while still backing it with a fixed framebuffer allocation.

As for the VMware driver, we had a bunch of code to implement XAA hooks, but due to the nature of virtualization, we pay a very high penalty whenever an unaccelerated operation appears in the stream – so high, in fact, that it cancels out any performance gain from providing acceleration in the first place! As such, we disabled this “acceleration” back in Workstation 5.0. So I was happily free of any XAA encumbrances and could make the change (and delete all the dead XAA code as well). The final piece of the puzzle was knowing how to resize the framebuffer and the screen’s root pixmap. Eric Anholt had mentioned this to me last year as something you’d have to do, but I got the impression it was a scary thing with implications that no one really understood, so I didn’t pursue it. Of course, it turns out that there are calls you can make to do exactly this, and once the nVidia guys pointed me at them, I was able to grep my way to understanding by looking at the intel driver, which uses these calls to do rotation (and also confirm that no other drivers are doing it).

So, the new 10.15 driver has been released and will be present in Workstation 6.0, but you can go grab it now and it will work with Workstation 5.5 just fine. As a bonus, I also pre-populated the mode list with a fairly comprehensive set of standard 4:3, 16:9 and 16:10 modes, because the X server will filter out any modes in your xorg.conf larger than the initial one – so even though you can now switch to them, they wouldn’t otherwise appear in the list.
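If you’d rather manage the list yourself, the modes live in the Display subsection of your xorg.conf. A sketch of the relevant fragment (the identifier and mode names are just common examples – your driver still has to accept them):

```
Section "Screen"
    Identifier "Default Screen"
    SubSection "Display"
        Depth  24
        Modes  "1600x1200" "1280x1024" "1024x768" "800x600"
    EndSubSection
EndSection
```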

I think that with Xrandr 1.2, we’ll see this problem fixed in a lot of drivers, because it’s not going to be possible to dynamically add monitors in a useful way without being able to resize up. I’m glad it’s finally happening, but it’s also rather amusing that it was always possible but so few people realised it.