A few days ago we visited a wonderful exhibition about two failed Antarctic expeditions, led by Scott and Shackleton. I learned a lot, but what I keep thinking of are Frank Hurley’s glass plates.
Frank Hurley was the official photographer on Shackleton’s Trans-Antarctic expedition (1914-1917). Shackleton and his crew encountered ice farther north than they expected and eventually got stuck in it. It is impossible to imagine how they felt when their efforts to free their ship Endurance proved fruitless and they had to spend the Antarctic winter aboard her. The ship had to be abandoned by the end of October 1915, before the surrounding ice crushed it. The crew then spent the next few months on the floating ice. They decided to seek safety on land, which meant hauling what they needed themselves and leaving everything else behind. Which brings us to Hurley’s plates.
These were the early days of photography and equipment was larger than it is now, but even today’s gear for taking large format photographs is pretty big and heavy, as are the prints made with it. Hurley had two cameras, a large format one capturing images on glass plates and a smaller portable Kodak using film. He made more than 500 images on plates, too many and too heavy to carry them all, but he managed to persuade Shackleton to let him pick and keep 120 or 150, depending on the source.
Then he broke the rest.
I couldn’t find an explanation for this act. In one of the most inhospitable places on Earth, the environment would have destroyed those plates soon enough without his help. But I think I know why he did it.
Expeditions were expensive, and selling publishing rights for images made during the trip was an important source of funding. Breaking the plates ensured that the images they kept would be the only ones that could ever be printed. Breaking them was, in essence, an enforcement of copyright.
There is no moral to this story. It was doubtless a difficult decision in an even more difficult situation. But it is hard to look at the remaining images and not feel a real sense of loss. Such a waste!
If you are in London these days, then go and see this exhibition, which you can find at the Queen’s Gallery, just next to Buckingham Palace, until April 14th. I bet you won’t regret it.
- Kodak, one of the pioneers of photography, recently stopped making cameras to focus solely on its soon-to-be-dead image printing business.
- Hurley continued documenting their ordeal with the Kodak. The resulting images are grainier and less sharp, but an equally fascinating document of what they all went through.
Michal Migurski recently posted an article about the download sizes of popular websites. I couldn’t replicate his results, but it is obvious that the gist of Michal’s article is correct: websites have indeed ballooned significantly in the last few years.
This blog’s homepage has a footprint of around 250-270 KB. About 90% of its size is fonts and jQuery, which is a big penalty for making it look and behave a bit nicer. So should I remove those parts?
Well, for most visitors to this website that difference doesn’t matter: pages are neither slow nor expensive to load for them. Unless of course they are doing it over your average hotel Wi-Fi or a slow mobile network, where speeds around 56 kbit/s are not unheard of. On such a connection it would take about half a minute to load this blog. It can also cost more than 10 euro cents to load it when roaming in Europe.
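A quick back-of-the-envelope check, with the roaming price only a rough assumption: 260 KB is a little over 2,000 kilobits, which at 56 kbit/s takes roughly 37 seconds to transfer, and at a roaming data price of, say, 0.50 EUR per megabyte, a quarter-megabyte page comes out at somewhere above 10 cents.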
It would be great if I could offer a choice between a bigger and nicer or a smaller and faster version, depending on a visitor’s needs.
However, there is no way to measure the price of a visit. Even if there were, how would I decide what is too expensive for an anonymous reader, and should I be making such decisions at all? I think not.
Gmail’s approach of offering a fallback to a basic HTML version is really just a band-aid over what should be a visitor’s decision. I use the same laptop and browser at home and while I travel, experiencing all combinations of connection speed and pricing. I never know in advance how much it will cost me to visit a page, but I always learn quickly whether I would prefer something small or something full-featured. There is just no way I can communicate that preference.
It would be great if my browser had a switch for this purpose, like Firefox’s “Work Offline” toggle. If I flipped it into a bandwidth-saving mode, every subsequent request to the web server would carry my preference in an HTTP header field.
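Purely as an illustration (the field name and value here are made up, not any existing standard), such a request header might look something like this:

X-Bandwidth-Preference: low

A server that recognizes the field could then respond with a lightweight version of the page, while requests without it would get the full-featured one.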
In principle you could have multiple levels of bandwidth consumption, but that would likely be overkill. Common practice suggests that at most two levels would really get used: one aimed at mobile devices and the other at desktops.
Right now such functionality doesn’t exist, or at least I couldn’t find it (I even searched Mozilla’s bug database for any future plans). I think my proposal is both user- and developer-friendly, and workable. If you can think of a reason why it would be problematic, then I would really like to hear it.
- Depends on the browser used. The variation in size is probably due to the different font formats served to different browsers. It also changed once I published this post.
My dad is an avid model airplane builder and he is currently building a new electrically-propelled glider of his own design. This particular airplane will have styrofoam-core wings and two weekends ago he was trying out a new way of making them.
Such wing profiles are traditionally cut out from blocks of styrofoam with a hot-wire cutter that is pulled by hand over plywood templates. However, the width of the cut depends on the speed of the wire as it travels through the material, and since you can't manually pull the wire at a perfectly constant speed, the wings end up full of little pits and grooves. Of course, nowadays you can get computer controlled cutters that control the hot wire with servos and can cut out any shape with perfect steadiness and without the need for templates.
My dad however went for another approach: he made a purely mechanical device that pulls the wire over the templates with constant speed. The interesting bit here is that the two ends of the wire usually have to travel different lengths through the styrofoam in the same amount of time, depending on the wing taper ratio. He achieved this with an adjustable system of levers, pulleys and ropes that would make for a nice high-school mechanics class demonstration.
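To get a feeling for the geometry involved: if the root template has chord c_r and the tip template has chord c_t, both ends of the wire must cross their template in the same time T, so the speeds of the two ends have to be v_r = c_r / T and v_t = c_t / T. Their ratio, v_r / v_t = c_r / c_t, is simply the taper ratio of the wing panel, and holding that ratio constant throughout the cut is exactly what the arrangement of levers, pulleys and ropes has to provide.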
The whole thing is powered by gravity and an occasional nudge by hand. In fact, for perfect straight cuts just the weight of the hot-wire cutter is sufficient without additional mechanisms.
As you can see from the (long) video below, it takes one to two minutes to make one cut (and you need several per wing, depending on the number of segments), so it's not the fastest thing around. But it makes up for it with perfectly smooth cuts and if you only make a few wings per year it's perfectly sufficient.
(watch the video on YouTube)
A while ago Gašper gave me this Energycount 3000 kit from Voltcraft for logging household electrical energy usage, with the wish to access its measurements from a computer. All this time it's been mostly gathering dust on my desk, but last week I found some time to give it a closer look and made a few discoveries that are worth reporting.
The box contains two sensors that can be connected between a wall socket and a plug, and a tiny battery-powered remote control for reading out the data. The short instruction leaflet explains that the sensors broadcast their measurements over radio every 5 seconds. With a push of the SCAN button you can put the remote control into a 6-second listening mode which catches the transmission and displays it on the small LCD screen.
The remote also has some calculation functions, like predicting the next electricity bill, but is otherwise nothing more than a remote display for the sensors, which apparently do all the logging. This makes sense: the remote control is limited by its batteries and as much functionality as possible is pushed to the wall plug where there is abundant power. The radio link is also obviously unidirectional. The sensors transmit their periodic reports and the remote receives them. All user interaction with the sensor is done through a push button on the sensor itself.
To get the data from the sensor it appears that all I would have to do is to eavesdrop on the transmission. Unfortunately, the box says this device operates on the 868 MHz ISM band, meaning that my 433 MHz receiver was useless. I could get a 868 MHz receiver module for it, but I suspected that these devices use something more complicated than on-off keying, so I looked for other possibilities.
Here is how the sensors and the remote look from the inside:
As you can see, each device has two integrated circuits under a blob of epoxy. I'm guessing one is a microcontroller and the other obviously some kind of integrated ISM band transmitter or receiver. In both the sensor and the remote control they are connected with 5 copper traces. Having recently encountered and worked with chips like the CC1101, my first guess was something like that. I figured the five traces would carry a digital bus like SPI or I²C, so I soldered some wires to the tiny traces on the remote (destroying one of the termination capacitors in the process) and hooked it up to a logic analyzer.
Unfortunately, what I saw there didn't look at all like a synchronous transmission. Out of 5 lines, 2 seem not to carry any useful signals (all I saw there were some transients that looked like glitches). The remaining 3 might act as a digital bus immediately after the microcontroller wakes up the radio chip, but otherwise look like some pulse-width modulated signals for the majority of the 6 seconds when the radio is turned on.
This was a sort of a dead end until I came across chips like the TH81112. These are ISM band receivers that can be used to receive ASK or FSK transmissions, but are much simpler than system-on-chip products like the CC1101. They merely contain a tuner, an intermediate frequency stage and a phase detector, and therefore rely on the microcontroller or some other logic to do the full FSK demodulation. In hindsight, it makes sense for the Energycount to use something like this. It's a relatively cheap product, and a general-purpose transceiver like the CC1101 would probably be far too expensive for it.
But at this point I didn't bother to mess further with the original receiver. I started up GNU Radio and the Fun Cube Dongle Pro and tried to catch the transmissions with that. It turns out fishing out the correct channel in the 868 MHz ISM band is a challenge in itself if you're limited to the 90 kHz bandwidth of the Fun Cube Dongle. But luckily, having the transmitter and receiver close by meant that I was able to see the transmission burn through even when the receiver wasn't tuned exactly to the correct frequency. So after some bisection I found the transmission at 868.388 MHz.
The double peaks in the spectrogram gave me more confidence that this is indeed frequency-shift keying with a 20 kHz deviation, and with GNU Radio's FM demodulator block the bits in the packets became clearly visible.
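For a flavor of what such a receive chain looks like outside of the GNU Radio Companion, here is a minimal Python sketch of the same idea: shift the channel to baseband, low-pass filter it and FM-demodulate it into a file of floats for later bit-slicing. The file names, sample rate and frequency offset here are assumptions for illustration, not values from my actual setup.

from gnuradio import gr, blocks, analog, filter
from gnuradio.filter import firdes

class ec3k_demod(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        samp_rate = 96000   # assumed Fun Cube Dongle sample rate
        offset = 10000      # assumed offset of the signal from the tuned frequency, Hz

        # complex I/Q recording made earlier with the dongle (hypothetical file name)
        src = blocks.file_source(gr.sizeof_gr_complex, "capture_868388kHz.cfile")

        # shift the channel to 0 Hz and low-pass it wide enough for both FSK peaks
        taps = firdes.low_pass(1.0, samp_rate, 25000, 5000)
        chan = filter.freq_xlating_fir_filter_ccf(1, taps, offset, samp_rate)

        # quadrature (FM) demodulator: output is proportional to instantaneous
        # frequency, so the two FSK tones become a two-level waveform
        demod = analog.quadrature_demod_cf(samp_rate / (2 * 3.14159 * 20000))

        # raw float samples, to be bit-sliced by a separate process
        sink = blocks.file_sink(gr.sizeof_float, "demodulated.f32")

        self.connect(src, chan, demod, sink)

if __name__ == "__main__":
    ec3k_demod().run()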
Actually, it's impressive how easy software defined radio makes tasks like this. Throwing blocks around in the GNU Radio Companion for a few minutes accomplishes something that would otherwise take you weeks with a soldering iron. Next time I'm doing something similar I'll most likely skip the whole logic analyzer part and jump right to sniffing the radio waves.
To conclude, this demodulated signal is now something that can be piped to the capture process from my AM433 project and it will hopefully produce binary data. But of course, getting useful information from a binary blob is a whole new matter and that will come in a follow-up post.
Last week Eric Smith found a bug in z80dasm, the disassembler for the Zilog Z80 microprocessor I put together a few years ago when I was researching Galaksija's ROM. It turns out a corner case in relative addressing where the offset would wrap around the 16-bit address space boundary of the CPU wasn't handled correctly. For such cases the code would create labels with excessively long names which overflowed some internal fixed-length string buffers, leading to stack corruption.
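For illustration, Z80 relative jumps encode a signed 8-bit displacement measured from the address just after the two-byte instruction, so a disassembler has to wrap the computed target around the 16-bit address space. A minimal sketch of the correct calculation (not the actual z80dasm code):

def rel_jump_target(pc, operand):
    """Target address of a Z80 relative jump (JR/DJNZ) located at pc.

    operand is the raw displacement byte (0-255), interpreted as a signed
    8-bit offset from the address of the next instruction; the result must
    wrap around modulo 2**16.
    """
    displacement = operand - 256 if operand >= 128 else operand
    return (pc + 2 + displacement) & 0xFFFF

# a JR at address 0x0000 with operand 0xF0 jumps to 0xFFF2, not to -0x000E
assert rel_jump_target(0x0000, 0xF0) == 0xFFF2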
Hence a new release of z80dasm after almost 4 years. You can download the source tarball from http://www.tablix.org/~avian/z80dasm/.
I also used this opportunity to move the code from my old CVS repository to git, so you can now also clone the repository with:
$ git clone http://www.tablix.org/~avian/git/z80dasm.git
As for binary packages, Eric is packaging z80dasm for Fedora. I've put together updated packages for Debian and will do my best to get them into the Debian Unstable as soon as possible (as Debian is now three releases of z80dasm behind).
One of the more unusual sights at the 28C3 last year in Berlin was a circle of hackers of all ages and genders sitting on the floor around a large flat panel monitor. They were watching a cartoon show with colorful ponies, while behind the screen you could see the dark basement with flags of various groups and hackerspaces hanging from the ceiling and a disarray of electronic devices strewn on tables. Maybe not surprisingly, this gathering also happened to be almost immediately below the no photography sign, which means that I was unable to find any evidence of it.
I have heard about this show now and then in the usual background noise of the web even before the congress. However this encounter made me somewhat curious and, having never actually overgrown watching an occasional cartoon, I decided to see what all this is about. For scientific reasons, of course.
Image by Chromamancer
I am talking about the latest remake of My Little Pony. It's an animated TV series that was created to advertise a line of Hasbro toys for girls, but in a funny turn of events actually got a sizable following of the opposite gender and in another age group. I guess something that would be unlikely to happen without the internet and the pseudo-anonymous discussions it enables.
After watching some of the show, I can say that I can understand to some degree why it has attracted such a broad audience. They say that a sign of good content for children is that it is also worth watching for adults. And in this particular example the latter certainly appears to be true.
You can often read that the show is well drawn and animated. For sure that is one part of the attraction. Remember my thoughts about the Avatar movie? I guess modern artists have become increasingly good at triggering emotional responses in their audience. The combination of human and animal visual traits in a character allows you to trigger more image recognition circuits than is possible with only a human face. There's actually a word for that: superstimulus, and it's quite amusing to see the parallels between songbirds falling for a red stick with white bands and people looking at what would be pretty deformed body shapes in nature.
Another thing that is often mentioned is the lack of cynicism in the show. Not surprisingly for a show that also carries an educational mark, the characters always find out that it's better to work for the greater good than only for your personal benefit. I can certainly see the appeal in this. It's nice to lose yourself in such a fantasy after spending day after day in a society that is increasingly focused on grabbing as much of the pie for yourself as possible, where there isn't much space for laughter, truth, generosity and loyalty.
However, if this is one of the causes for the popularity among grown-ups, it's somewhat ironic if you consider why the show has been created in the first place: to increase profits of a multinational company that turns around billions of dollars each year. And in fact Hasbro has controlled the show from the start, making sure that it features things they can sell as toys.
But if you put these arguments aside, the creators of the show managed to paint a pretty consistent picture of a world where things are operated with hooves and mouths instead of fingers and three races divide their work to cover for each other's shortcomings. This has made it possible for fans of the show to build upon it with their own original stories and, not having to stick to the original constraints, some of them are just as amusing as, if not more amusing than, the original itself. So it's not all in the visual appearance either.
To sum it up, once you survive the first view of the pink sugary overkill of the brand logo, it's an enjoyable show with memorable songs and a surprising lack of elements that could be described as fitting for a stereotypical little girl. If it makes more people think about their actions from the perspective of the broader society, so much the better. And if marketing departments can miss their target audience that much, I guess that's also a sign to be optimistic and might mean we have some time left before they figure out how to control our every thought.
When I was processing the raw measurement results from the Munich experiment I noticed that the absolute timestamps on spectrograms recorded by two VESNA nodes differed randomly by anywhere from zero to three seconds and required manual alignment.
The experimental setup was not expected to give a very precise time reference: each VESNA has its own free-running real-time clock stabilized by a 32.768 kHz quartz crystal, which provided time relative to the start of the measurement. Absolute time, on the other hand, was provided by the two Linux laptops to which the sensor nodes were sending their data. So all in all, the accuracy of our timestamps depended on four quartz clocks.
Nevertheless, this result surprised me. I expected these four clocks to drift apart over the three hours of measurements, but that drift should be uniform. I can't explain what could cause the difference between the clocks to change randomly between experiments.
To rule out any problems on VESNA's side I rigged up a simple test where I compared the difference between VESNA's real-time clock and the clock provided by the Linux kernel with a running NTP daemon. This is the result of four test runs with the two nodes we were using in Munich:
As you can see, the clocks do drift apart and do so more or less linearly (the nodes were turned off for the night before each test, to also see if warm-up affected the drift). Node 117 drifts at around 3 ppm and node 113 at 9 ppm. This is actually quite bad, as node 117 would gain a second every four days, even considering that the reference here was a laptop that probably isn't a shining example of accuracy either.
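As a sanity check of those figures, the relation between drift in parts per million and the accumulated error is easy to work out; a quick sketch with the numbers quoted above:

# A clock that runs fast by d ppm gains d * 1e-6 seconds for every second
# of elapsed time, i.e. d * 1e-6 * 86400 seconds per day.
def gain_per_day(drift_ppm):
    return drift_ppm * 1e-6 * 86400

print(gain_per_day(3))   # ~0.26 s/day, about one second every four days (node 117)
print(gain_per_day(9))   # ~0.78 s/day (node 113)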
The only weird thing is the strange bump on the graph for node 113, where the drift was positive for around 15 minutes and then turned back to negative. I could not reproduce it in any of the later runs and it might be due to NTP adjusting the laptop's clock. But even that can't explain the observed deviations, which are more than ten times what we see here.
While this is of course not conclusive, it does give me some confidence that it might be the PC software that was causing these problems.
Recently I got another gadget to play with from Farnell as part of their road testing program. If you remember, last year I picked a pair of 433 MHz RF modules, which I put to some good use.
This time I chose something a bit more colorful: a small, passive matrix OLED display, not unlike the screens you find on small solid-state music players.
This is the Densitron DD-160128FC-2B (original manufacturer's site). It's a 1.69 inch color dot-matrix display using organic LEDs on a glass substrate driven by a passive matrix. This means that you get 160 x 128 pixels with 18 bits of color resolution in a viewable area of around 36 x 29 mm (making for 110 DPI).
On the electronic side it's controlled by a Syncoam SEPS525 driver IC. By the way, that's the elongated bar on the front side of the display bonded to the flexible PCB tail - you can actually see the structure on the silicon through the transparent substrate. This chip supports quite a few interconnect options, although not all of them are usable with this particular display, since some pins of the chip aren't available on the connector.
Probably the most interesting one is the single-channel serial 10 MHz SPI bus which requires only 4 pins. You also get the possibility of using an 8- or 9-bit parallel bus in either a Motorola 6800 or Intel 8080 flavor, which is somewhat surprising since I haven't seen a bare CPU bus exposed on highly integrated devices that would use a display like this. More likely these modes can be used through some kind of emulation of these CPU buses, as they allow faster writes to the frame buffer compared to SPI. The fourth possibility is a direct RGB interface that uses a dot clock and h- and v-sync signals to transfer subpixel values through a 6-bit parallel bus. The chip also supports parallel buses wider than 9 bits, but as I mentioned above, they appear to be unusable, as the data lines above D8 aren't connected.
My plan is to put this display on a shield for an Arduino Duemilanove, as it is just about the right size to fit nicely. And here come the catches, which are the reason why so far I'm writing all of this from theory and haven't yet actually turned the display on.
First, the OLED display requires a 14 V, 40 mA power supply. This is not something you will usually find on microcontroller boards, which means using a micropower step-up converter. Thankfully, these requirements are quite similar to those of LCD panels, which means that you can get cheap DC/DC converter chips in this power and voltage range (I'm currently looking into the National Semiconductor LM2703). To complicate things just a little bit more though, you apparently need to be able to turn this voltage on and off from software, since the datasheet specifies a turn-on sequence that involves switching on the driver voltage some time after the digital part gets its supply.
The said digital part, however, only works with supply voltages between 2.8 and 3.3 V (with I/O pins also capable of using 1.6 V digital levels). This presents another inconvenience on a 5 V board like the Arduino, since you have to do level shifting for the digital lines. Thankfully, there's a 3.3 V supply already available on the Duemilanove shield connector that should be capable of powering the SEPS525.
Lastly, the display sports a flat flexible cable with a 0.5 mm pitch. Even with a matching SMD connector this is quite impossible to solder manually without a PCB break-out board that puts somewhat more space between the pins. A 0.5 mm pitch (I'm guessing it requires a minimum feature size of around 6 mil) is also somewhat stretching the limit of what I can do with my home-brew PCB process. I have yet to try it, but it's not unlikely I will have to get this board professionally made.
In conclusion, none of the problems I mentioned above are show stoppers if you want to use this display. However, they make it quite inconvenient to use in a home workshop or with the cheaper microcontroller development boards, and I haven't even started looking into the software side of things. Judging from the datasheet, talking to the SEPS525 won't be trivial either, and from some initial searching I haven't been able to find any free libraries that support it. So, unless you are looking for the flexibility that only a bare display can offer, I would suggest trying one of the display modules that already come with all the tidbits required for mating them with a microcontroller development board.
I am a long-time user of the NoScript Firefox extension. I find it an effective cure for obtrusive advertisements and weird page features, plus it gives me the feeling that I can still control who can execute code on my computers. I also hate being tracked by various bugs embedded in pages, and NoScript allows me to block those iframes and scripts that send information about my visit to third parties.
Unfortunately, NoScript took a path all too many software projects take. From doing the simple task of blocking script execution it grew into a giant that wants to solve all of the browser-related security problems. That by itself isn't such a bad thing, but such complexity invariably leads to problems, and it's those that are starting to annoy me.
For instance, NoScript nowadays comes with some heuristic algorithms for preventing XSS. As far as I know, they have yet to save me from malicious content, but they are constantly breaking legitimate scripts like Instapaper, even when I put it on every whitelist I can find. The same goes for something called ABE, which constantly prevents me from following links to servers on my local network. Again, it might prevent attacks against my local routers, but security that constantly gets in your way is worthless.
So I decided to donate some of my time to help with that last problem instead. I downloaded the complete history of stable and development NoScript releases from addons.mozilla.org and committed them to a GitHub repository. Using their API I also put in place a mechanism that will automatically update the repository with new releases, hopefully with minimal maintenance on my side. I also added a simple script that can be used to create an XPI file from the code in the repository that should be nearly identical to the official releases (except for the author's cryptographic signature, of course).
As usual, you can check it out with a command like this:
$ git clone https://github.com/avian2/noscript.git
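Since an XPI file is essentially just a ZIP archive of the extension tree, building one from such a checkout boils down to something along these lines (a rough sketch, not the exact script from the repository):

$ cd noscript && zip -r ../noscript.xpi . -x ".git/*"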
This also has a useful side effect in that it makes the original upstream development somewhat more transparent. With a diff between two releases one click away on GitHub, you can check the changes yourself. With a tool like that, undesirable changes, like NoScript messing with Adblock Plus back in 2009, might have been discovered earlier.
And finally, having all of NoScript's history in git means you can easily create nice graphs in a few keystrokes (courtesy of gitstats). Enjoy.
Number of releases of NoScript per month (note that for releases earlier than 2007 I don't have information on their exact date, hence the spike on the graph)
Number of files in NoScript XPI through time
Lines of code in NoScript through time
I've spent the past week just outside Munich, Germany, at the Ottobrunn EADS site. Among other things at the Jožef Stefan Institute, I'm also involved in the CREW project. As is usual for such multinational projects, regular meetings are organized at one of the partners, and this time it was the turn of the European Aeronautic Defence and Space Company.
The CREW project aims to build a number of instrumented test sites across Europe that would allow experiments with advanced radio technologies, like cognitive radio and dynamic spectrum access. I'm currently working on spectrum sensing hardware for the VHF and UHF TV frequency bands, and even though the happenings this week didn't directly concern that, I couldn't pass up a visit to the company that makes things like Airbus aircraft and Ariane launch vehicles (in addition to some less enlightened products). Not to mention that it's always nice to put faces to the names appearing in your inbox.
If you've been on a passenger airplane you probably know the little panels above your seat that remind you to fasten your seat belt and have speakers for announcements from the crew. Behind the curtains these are currently connected to the airplane's network with cables; however, airlines wish to do away with physical connections and use radio instead, which would make the panels and the seats below them easier to rearrange. Of course, this brings problems, since these panels are considered critical to passenger safety and must work even if some ignorant passenger turns on Wi-Fi on his iDevice or if a terrorist smuggles 10,000 EUR worth of signal generators on board.
This is where this group of experimenters came in. We were given a section of an Airbus 340 fuselage to work with. It happened to be furnished with first-class seating and plenty of power sockets in the floor panels, something that's sadly not part of the usual airplane equipment. After two days of setting up measuring equipment the place was so full of criss-crossed wires that it was somewhat hard to believe the object of study was in fact wireless technology.
In a nutshell, we set up two programmable 2.4 GHz transmitters at opposing corners of the cabin and then measured the received power at different locations across the cabin, including a mobile one on the meal cart, with different custom (for instance the IMEC sensing engine) and commercial receivers (a USRP among others). The end result, at least on our end, was a bunch of spectrograms, like the one below that was recorded with two VESNA nodes equipped with our spectrum sensing receiver.
If you think this is something you might show the stewardess when she next asks you to turn off your favorite gadget, I must disappoint you. You will have to wait for the peer-reviewed papers that will undoubtedly be published about the experiment, and even then I'm not sure you will win that argument. In that regard I am not very optimistic about the usefulness of the results myself. You might get some qualitative discussion of what the cabin radio environment looks like, but I think that's about it. For anything else there are just too many unknowns, even in just the physical setup of the experiment. The conclusion that the cognitive radio approach will be superior for such an application was somewhat unscientifically reached before the experiment took place anyway, so you can't say that it was an unbiased effort either.
The real value of this work however was in testing all of our tools. We found several problems with VESNA nodes that need to be fixed before we deploy them in our CREW test bed, from unknown sources of clock drift to buggy software, inconvenient procedures and unfavorable reactions to lengthy USB cables. We also found sources of interference on some devices that need to be looked into and so on. Suffice to say, my wish list grew a bit over the week.
The short tour of the EADS offices was quite interesting too, of course. It's a huge facility with 3,000 employees and a correspondingly large amount of amazing stuff hanging on walls and lying around workshops and hallways. Sadly the strict security didn't allow me to take pictures outside of our experiment, so I'm not able to share them here, but I can say that the shapes behind frosted glass and the warning signs on closed doors do well to fire up your imagination.
These have been an eventful couple of days for web developers. The CSS Working Group chair called on everyone to use all (most?) vendor prefixes and to stop making websites just for WebKit, which is becoming a new (mobile) IE6. Responses have been numerous, including from ppk, who in his usual obnoxious manner made some good points. Testing on mobile devices is an unsolved problem (who wants, or can afford, to buy so many almost immediately obsolete gadgets?) and introducing a -beta- (maybe also an -alpha-) prefix would simplify our lives while keeping most of the benefits of vendor prefixes.
I like the -beta- idea and I think adding -alpha- might be even better. There’s still the problem of resolving syntax conflicts between different implementations, which I think has a simple solution that closely mimics what browsers already do:
When parsing CSS, browsers should apply the last matching -beta-/-alpha- directive they fully understand.
Browsers already ignore directives they don’t understand, and they apply the last directive found when there are multiple candidates for a DOM node.
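To make that concrete, here is a hypothetical example (the -beta- prefix is of course made up, as is the assumption that the experimental gradient syntax differs from the final one):

.fancy {
    background: -beta-linear-gradient(top, #fff, #ddd);       /* older experimental syntax */
    background: -beta-linear-gradient(to bottom, #fff, #ddd); /* newer experimental syntax; being last, it wins in browsers that understand it */
    background: linear-gradient(to bottom, #fff, #ddd);       /* final, unprefixed syntax takes over once it ships */
}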
Such behavior would give us less CSS code to write and maintain, would behave predictably and would keep browser experimentation possible without favoring any single browser. I have trouble finding negative sides to this approach, but do let me know if you can think of one.
- I deeply dislike his complaining about the simplistic views of others while himself generalizing and name-calling (the lazy and stupid lot of us). Alas, it’s not good to read only people you like and agree with.
- Almost everything I write on this blog has an intended audience of one: me (no, really!). Why I sometimes write posts like this one, which don’t, is a mystery, since their expected and actual effect on anyone is…none.
Warning: this whole post is not much else than a series of speculations and amateur psychoanalysis. If you can’t find fun in that, well, then start reading something else.
I started using IRC almost two decades ago, soon after I first got on the Internet. I still do, but it’s now a very different experience, mainly because today almost everyone is connected. When everyone is around, you tend to hang out largely with people you already know. Back then I chatted with faceless handles, and what I found especially interesting were the strong feelings and a sense of familiarity that developed between people who would never meet.
I thought of this again recently while discussing the appeal of Tumblr and a neologism that I like: tumblrcrush. It wasn’t explained, but I understand it as a crush-like feeling provoked by a Tumblr blog.
I have never heard of something like that in relation to WordPress, although I am sure it happens. But I feel safe in hypothesizing that such visceral affection for a blog, and by proxy for its creator, happens more often on Tumblr.
Now, this is surprising on the surface, because so many Tumblr blogs look like nothing more than collages of other people’s stuff, whereas old-school blogs often have more of what is disgustingly called original content and are more verbose, just like this one. It wouldn’t be unreasonable to expect that writing at length about things that interest me would reveal more about who I am than the things I collect. After all, I am more likely to divulge facts about myself through my own writing than through other people’s work.
However, when I write, it’s not really me who does it. Writing, even when trying to avoid self-censorship (unsuccessfully), engages a different part of the brain than responding to an image or a passage of text. I write so I can think, but even when not, I don’t just type a Joyce-like stream of consciousness. I form sentences I would prefer to utter, but usually don’t.
The genius of Tumblr (even with some serious interface screw-ups) is that it makes it easy to republish found stuff and really inviting to do so. Those pieces shared and reshared are revealing exactly because they were created by others. They never had time to be distilled and redacted closer to our self-image, because they weren’t selected to represent us. Instead they are mostly curated by the finder’s emotional response, and it’s those emotions, part of the finder’s subconscious (soul), that sometimes touch us.
Because what does it really mean to know someone? We may admire intellect, but we relate to the person. We don’t know a person until we empathize with her, and those small shared bits are conduits for feelings, not information.
This doesn’t mean that you can’t write long, elaborate posts on Tumblr. Many indeed do. Just like many use WordPress to post stuff they found in some web back-alley. But it is Tumblr’s whole fun (and) social experience, unlike the serious, CMS-like sterility of WordPress, that nudges you into a different behavior. In creating we are guided by our tools and what they suggest, not what they make possible.
Recently I had an opportunity to see the development labs of a few local hardware shops and chat with the people working there. In addition to my recent work at the Jožef Stefan Institute, it made me realize that, for someone familiar with development practices in the world of free open source software, the field of professional electronics development is quite lacking in some regards.
Interestingly, the software world long ago established the need for revision control systems, code review tools, bug trackers and release procedures. And rightfully so - as I often rant about here, most software is so complicated today that you are basically forced to use such tools to achieve any kind of collaboration or a usable quality level. While it's true that too often good practices are overlooked in commercial settings, with deadlines and marketing given priority, at least most developers will tell you what ideal they strive for.
Compared to that, it was somewhat striking to hear that manually checking multi-sheet schematics for differences is common practice. Or that proper revision control systems aren't deemed necessary and that keeping multiple folders of versions is all anyone would ever need. Or that procedures for making fabrication documents, the equivalent of a software release, take many minutes of clicking through various settings in a graphical interface.
Agreed, most electronics development done in small groups and companies is simple enough to involve only one person. It also involves simpler designs and nowhere near the layers of abstraction present in software. But on the other hand, mistakes are much costlier here. Where a botched software release these days means uploading a new package or even just updating some scripts on your web server, a broken prototype hardware design sent to manufacturing wastes much more tangible resources.
One thing software developers learn is not to trust themselves. Any repetitive task should be scripted and should involve as few and as simple steps as possible. There should always be ways to double-check your or your colleague's work. After a long work day I don't trust that, upon saving a file, I only made the changes I wanted and didn't accidentally drag an odd trace out of place while experimenting with the layout. I don't believe that I won't forget to check that critical check box in the GUI the tenth time I export Gerber files. And most of all, I will never be sure that when comparing two PDFs centimeter by centimeter and line by line I won't miss a difference.
Not surprisingly, voices calling to improve this situation are being heard as the open hardware philosophy becomes more widespread. Some of the blame here certainly goes to the lack of tools that would allow such processes. It appears that if you want to have them out of the box, you either need to go for some very expensive proprietary software aimed at large teams and huge projects or, funnily enough, free software tools like gEDA.
More concretely, CadSoft Eagle, the software most used for small projects and, because of its low cost, not that uncommon in the open hardware community either, has a distinct lack of such capabilities. Having just finished my first design in it and found out all of the above, I tried to change a few things for the better.
Hence, eagle-automation. It's a small collection of Python scripts that currently allow you to do just two things:
- Make a set of fabrication documents from a single Makefile, without clicking through a single dialog (with credit to Andrew and his blog post).
- Do visual diffs between revisions of schematics and board layouts stored in git, like the one shown above.
The installation is as simple as the usual Python setup.py install dance (you need the Python Imaging Library), and the git integration uses git-difftool, which is present in recent versions of git. For the rest I refer you to the README file. Apart from the finicky Eagle scripting it's all quite simple, actually.
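For the visual diffs, the glue is the standard git-difftool mechanism, wired up roughly like this (the wrapper command name below is only a placeholder; see the README for the real invocation):

$ git config difftool.eagle.cmd 'eagle-visual-diff "$LOCAL" "$REMOTE"'
$ git difftool -t eagle -- board.brd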
So, most of all, I hope this will make open hardware development easier and more reliable. I will still strive to use open tools like gEDA as much as possible - with which, by the way, I never had much use for visual diffs because of its human readable ASCII file format - but I also understand that a lot of people are stuck and will be stuck for some time behind proprietary tools. But I don't see that as a reason why lessons learned the hard way by the software world should not be reused.
I noticed that a lot of talks and presentations I attend, especially in the more academic circles, begin with the speaker showing a slide with the outline of the talk in the form of a bulleted list and talking through it, explaining what he will talk about. With short talks this can actually take up a significant part of the total time, especially if the speaker returns to that slide after each part, showing how far into the talk he has progressed.
What is the point of that, short of giving the audience an opportunity for one last glance at their email client? The purpose of such an overview in printed articles is to give the reader some idea of the contents. This means that you can skip reading the article altogether and move on to the next one if the table of contents doesn't give you enough confidence that it will be of use to you. Via page numbering it also allows you to skip directly to the interesting part, should only a part of the document be useful.
None of these uses apply to the spoken word though. You can't fast-forward to the interesting bit, and if you find out after the introduction that the talk won't be worth your time, it's usually considered quite rude to walk out at that point. As is browsing the web while waiting for the part that interests you.
Some may argue that the table of contents makes the slide stack more readable in stand-alone form. I don't buy that - slides are part of the presentation and I don't think any consideration should be given to how useful they might be without the accompanying speech. It's 2012 after all, and making at least an audio recording with cheap equipment shouldn't be science fiction anymore.
Photo by lowk3y
Last Tuesday I gave a talk about my 433 MHz receiver project at Kiberpipa's Open Sessions. I presented what I learned about the radio communication between cheap, simple everyday devices, described the receiver hardware and software and did a live demo.
I think the talk was well received, and although I didn't go much into the security consequences of some of my findings, the Q & A session that followed made it clear that it wasn't really necessary.
A few minutes ago I also uploaded the version of the software that I was using during the demo. Compared to the 0.0.1 version which I put on-line during the 28C3 it contains some minor fixes for the software demodulator and a bootstrap script that (hopefully) compiles everything in one step.
If you missed the talk, the video recording of it is already on-line thanks to the efforts of the Viidea team. You can either download it or watch it in the web browser synced to high-res slides at Kiberpipa's shiny new video archive.
Update: in the talk I mentioned that the CM108 audio chip doesn't allow for longer than 24 hours of continuous recording and blamed either a bug in the chip or ALSA drivers. It turns out this was a bug in my own code (which is now fixed in the git repository). I guess I should count the bits in my own variables before speculating on the counter size of others.
The 28th Chaos Communication Congress is much more than just the things that are neatly listed on the Fahrplan. If the safety personnel allowed it, the halls and walkways would be filled to the top with all sorts of more-or-less identifiable hardware, interspersed with empty bottles of a certain caffeinated drink. Some of it was there just for display or for sale in kit form, some was being actively developed or disassembled, and some was the target of scheduled and ad-hoc workshops. Sharing ideas by participating in this chaos is a big part of the congress.
Hacked Brother KH930 knitting machine by Fabienne
This year I had the feeling that there were many more low-level hardware projects going on than last year. As at previous events there was a large hardware hacking area in the basement, but portable oscilloscopes, multimeters and soldering irons were a common sight in the hackcenter and the hallways as well. The Arduino workshops were absolutely packed, and I escaped from the room each time one started, to get away from the crowd. The well-known Linux distributions and other free software groups occupying their usual places at the Berliner Congress Center almost felt pushed aside.
The r0ket badge, introduced at the CCC camp, was back. Version 2.0, with some minor improvements, was available half-assembled at the info desk for 30 € and was sold out in minutes. There was also a new official firmware image available with which you could play Tetris on a big green LED display in the hackcenter and control a few other things around the congress. In combination with the tiny joystick, that practically guaranteed a sore thumb for everyone.
Speaking of sore body parts, a special kind of attraction was the PainStation, an exhibit from the Computerspiele Museum. It's a Pong clone where missed balls result in pain being inflicted on your hand through heat, electric shocks and a mechanical whip. Since you had to sign a disclaimer to play with it, I guess it wasn't very forgiving. I didn't give it a try, under the excuse that I'm already being shocked frequently enough in my profession. Considering that in the end one third of the medical emergencies at the congress had a connection to this machine, maybe that wasn't such a bad idea after all.
Jure and I spent some time trying to find a problem that this little device would solve. It's a TP-Link TL-WR703N, a tiny portable wireless router that is capable of running OpenWRT. For around $20 you get a MIPS system with 4 MB of flash and 32 MB of RAM, an 802.11 radio, wired Ethernet, USB, serial and at least one GPIO port. Certainly something worth considering when the next idea comes around that needs Internet connectivity in a small package.
Next on the list of tiny things is the MC HCK, a very basic open hardware ARM Cortex M0 board that is little more than a microcontroller on a PCB that fits into a USB socket. I thought it was hard to get more basic than an Arduino, but this certainly proved me wrong. Simon (who also let us borrow the TP-Link board above) is trying to get it down to $5 per board.
I should also mention that I had a nice chat with a developer from the Sigrok project. They are developing portable logic analyzer software that works with a wide range of different hardware devices. I've been missing a good logic analyzer, and I got some good advice on which hardware capture device works best with Sigrok.
Congratulations go to the Network and Phone Operation Centers. Wi-Fi was working surprisingly well this year and I mostly didn't have problems keeping a link up for IRC chat or an occasional website load. It was slow but stable, except in Saal 1 when it filled up. The Eventphone GSM network also worked the few times I attempted to use it; only occasionally did a General error pop up on my N900. The only complaint would go to this year's Wiki, which had a somewhat unusable theme and was down a lot.
By the way, I learned that stability of the 802.11 link in this environment depends largely on what drivers you use. On my aging Eee 901 with the Atheros chip for instance, the ath5k driver that comes with recent kernels can barely keep the link up for a few seconds before dropping it and forgetting all the iwconfig settings. On the other hand the old madwifi worked almost perfectly.
All summed up, this was one of the best congresses I've attended. There was always something to do and it never happened, as it did occasionally at the previous events, that there wasn't an interesting talk on the schedule or interesting people to talk with. Unfortunately I didn't manage to prepare a lightning talk about my 433 MHz receiver project before the trip to Berlin and once I was there everything went by so fast I didn't even manage to finish my slides.
Again, thank you CCC and all of the Angel volunteers for the wonderful event and see you next year!
I’m not sure how many years are necessary to call something annual, but three are probably enough. So here’s my annual list of books I read in 2011. I typed “this year” first. Obviously celebrating the new year is not enough for my mind to switch over completely, and I’ll probably be in danger of writing 2011 where I shouldn’t for another fortnight or so.
This year I stopped striving to read a certain number of books and I’m happier for it. Looking at the list, I am mostly happy with my choice of fiction and seriously question my choice of books related to my career. It’s not that the books themselves were that bad (most weren’t), but why on earth did I spend so much time reading things only tangentially related to what I do?
I linked every book I liked and bolded those I heartily recommend. To safeguard against my tweaking of this blog’s theme, here are the recommendations spelled out: Eating Animals, Herztier, Skylight and The Unfolding of Language. I also want to specifically mention Copper and Atlas of Remote Islands, both great in their own way, which are not bolded but maybe should be.
Last year’s disclaimer is also still valid. Most links point to Amazon and include my affiliate ID, meaning that if you buy a book after following these links, I get a few cents that might eventually lead to the purchase of another book. It was easier to copy this paragraph than to fix my scripts that generate links to books, but I should probably do that later. I promise to fix it for next year’s list, especially since I’ve never earned enough to buy even one book.
Without further ado here is the list:
- Eating Animals by Jonathan Safran Foer. Great book that you should read even if you have no plans to become a vegetarian (or vegan).
- The Cello Suites: J. S. Bach, Pablo Casals, and the Search for a Baroque Masterpiece by Eric Siblin. Who knew there was such a fascinating story behind the cello suites? Eric tells it very well.
- Hardboiled Web Design by Andy Clarke. I wish I didn’t have to read the word hardboiled so many times, but otherwise a must-own book for every web front-end developer.
- The Yacoubian Building by Alaa Al Aswany. I read this at the end of January, when Egypt was engulfed in protests against the regime, and it provided excellent background on why the protests were happening. A truly captivating look at modern Egyptian society.
- Buying a Fishing Rod for my Grandfather by Gao Xingjian. Six beautiful vignettes.
- Epitaph for a Spy by Eric Ambler. I don’t read many mystery books, but I read this one with pleasure. It aged remarkably well.
- Introducing HTML5 by Bruce Lawson, Remy Sharp. A good, hands-on introduction to HTML5 and related technologies that occasionally forgets to mention browser support and is already somewhat out of date. Luckily a second edition is out (or should be soon).
- Atlas of Remote Islands by Judith Schalansky. A beautiful book. A delightful way to spend a day reading about mostly unknown islands.
- HTML5: Up and Running by Mark Pilgrim. I read this one almost back to back with Introducing HTML5 and there is surprisingly little overlap between them. It can feel too geeky and repetitive in places, but it has the best treatment of data-* attributes I’ve seen so far.
- Life Nomadic by Tynan. A book of anecdotes and lots of practical advice aimed at digital nomads, which has a surprisingly large number of useful tips for those of us who don’t travel as light or for as long.
- Herztier by Herta Mueller. Without a doubt one of the best books I read in years and also one of the most heart wrenching. Literature at its best.
- Skylight by David Hare. Great drama with amazing amount of substance. Probably not for you if you are politically right (wrong) leaning.
- The Elements of Content Strategy by Erin Kissane. Good overview which confirmed it’s not a problem I’d have. Or career I’d choose.
- EffectiveUI by EffectiveUI. Overview with some concrete propositions written for audience that doesn’t include me.
- 97 Things Every Programmer Should Know by Kevlin Henney. Mostly uncontroversial and insightful often enough to be worth at least a quick browse.
- 97 Things Every Software Architect Should Know by Richard Monson-Haefel. Kind of like the previous one, only for software architects.
- Confessions of a Public Speaker by Scott Berkun. A fun book with lots of good tips which still can’t make you a good speaker without practice.
- Think Stats by Allen B. Downey. I read this book too soon, before it was published. Content was promising, but code was sloppy and buggy. Most useful to (Python) programmers who want to learn basic statistics.
- Dry Side Up by Martin Ony. I know Martin so I may be biased, but I also don’t like travel literature. This is the book I would get if I wanted to know how it FEELS to raft through the Grand Canyon. Fun to read even if you don’t have such plans.
- Snuff by Terry Pratchett. Felt a bit sentimental and certainly not among his best, but I nevertheless enjoyed it as I’m sure most fans will.
- The Chains of Heaven: An Ethiopian Romance by Philip Marsden. Interesting, insightful, fluid, but still travel literature. I liked it as much as I can like travel literature.
- The Inheritance of Loss by Kiran Desai. A rather uneven book that doesn’t quite live up to the praise it received. Sometimes brilliant, often not, but almost always tragic.
- Copper by Kazu Kibuishi. A melancholic comic about a boy and his dog that fits my temperament perfectly. Most, but not all, of the strips can be found on the web, yet the book is still worth buying.
- The Unfolding of Language by Guy Deutscher. Fascinating and well-written book about how human languages did and might have developed. Can’t recommend highly enough.
- 23 Things They Don’t Tell You about Capitalism by Ha-Joon Chang. A must read for free-market proponents and less so for well-read people. It contains some surprising information for everyone and is easy to read. I wish though there were more links to data in notes.
This year I plan to finish a few tomes I already started but for one reason or another put off. I find I read too much from “Anglo-Saxon” authors, so I want to read even more books from places and cultures that are, at least to me, less known. More directly work-related books would be great too. Any recommendations of books you liked are most welcome.
It's been two days since I returned from the 28th Chaos Communication Congress in Berlin. Enough I guess to recover my sleep cycle and detox from Club Mate and other caffeinated drinks. Those were also the primary reasons why I didn't feel capable of writing a coherent blog post about the happenings inside the Berliner Congress Center during the congress. However, I do have a ton of notes and I'll try to share my thoughts on the congress in a few retrospective posts.
The best way to start would probably be at the talks. Two of those have circled the web and I can't add anything that hasn't already been said about them: How governments have tried to block Tor by Tor project developers Jacob Appelbaum and Roger Dingledine rightfully received a standing ovation while Cory Doctorow's The coming war on general computation had a fan-made transcript within hours. Both are well worth a look, including the Q & A sessions that followed.
GSM and mobile phones remain in the focus of security researchers and reverse engineers. Karsten Nohl released a set of tools for assessing the security of calls made through your local mobile operator and an IMSI catcher detector. Both require only an OsmocomBB-compatible phone and I would love to see how Slovenian operators score on the former. There was also a very interesting talk by Guillaume Delugré on Reverse-engineering a Qualcomm baseband. In a flawless demonstration he showed how he managed to inject a GNU debugger compatible interface into a proprietary real-time OS running on the baseband processor inside a USB 3G dongle. We might soon see an OsmocomBB equivalent for UMTS based on this hardware.
Outside of the limelight there was the usual spectrum of talks on all sorts of topics. Continuing the CCC camp's hackers in space theme there was the unveiling of the new lunar rover by the Part Time Scientists team. The work they are doing on their hardware is impressive to say the least, however the presentation they gave was somewhat poorly prepared. Inviting questions from the audience with cheap give-aways might work on disinterested college undergraduates, but it just looked silly with this crowd.
Anyone dealing with wireless networks will probably be interested in the Packets in packets talk, about how the noisy nature of a radio link can be exploited to attack the security of low-level code even if the attacker only has access to protocols further up the OSI stack. And talking about security, Peter Eckersley of the Electronic Frontier Foundation presented their Sovereign Keys proposal for fixing the current, broken situation regarding SSL certificate authorities.
Old home computers are still a popular topic, as proved by the Atari 2600 and the Commodore 64 demo talks. So is Bitcoin, although I haven't heard anything about this electronic currency I haven't seen before.
Leaving computers aside for a moment, there was also an interesting talk about Eating in the Anthropocene, which had a refreshingly rational approach to the topic of genetically modified organisms. These are usually automatically considered evil, even in the population frequenting this kind of events.
On a similar note, I should also mention something that happened on the final track of lightning talks. One of the speakers rang all the bells of new-age pseudo-scientific nonsense. While the IRC channel immediately exploded with skeptical remarks, the real-life audience patiently waited for the end of the four minute slot. A few people then gave a courtesy applause and the rest of us expressed our disagreement. Nick Farr, moderator for the session and otherwise a very respected member of the congress organization team, scolded us for not respecting the speaker's effort and gave the speaker an extra minute that was otherwise reserved for well-received talks. I think the speaker was shown enough courtesy by being given an equal opportunity to make a convincing case for his negative-ions-atmosphere-fertilization thing. Giving him extra time, in my opinion, showed a lack of respect for all the speakers before him who presented more sensible topics.
Finally, I should also mention there was an unscheduled panel on depression, motivated by the recent suicide of Diaspora developer Ilya Zhitomirskiy. It focused mostly on personal stories, but the IRC discussion it triggered raised some questions I would very much like to see discussed more in depth, such as how strongly depression correlates with hacker culture and what the motivations behind it are.
This more or less covers the talks I attended and found worth sharing. Of course, you can find the whole list of talks, plus official video and audio recordings, on the Congress Wiki.
This year quite a few of the more security-oriented technical talks moved to a smaller, parallel event called BerlinSides at the other end of the city. I would certainly have attended a few of the talks scheduled there, and some people actually chose to hop between the two events. In hindsight, however, I didn't miss them at 28C3. The better selection of talks meant less of the nagging feeling that I was missing interesting presentations on the upper floors and more time for socializing and doing other awesome things in the hackcenter. But more about that in the next part. Stay tuned.
It’s hard to judge the importance of events as they are unfolding or soon afterwards, but 2011 feels in many ways like a historical year and I will be amazed if it doesn’t prove to be one. The Arab Spring and the unresolved eurozone crisis alone practically guarantee that this will be a year to remember, although it’s too soon to say in what light. There are definitely plenty of good reasons to worry, but there always are.
This year was also a year of big personal changes. I left Zemanta in the summer and went on sabbatical. Leaving a company in the middle of an economic crisis doesn’t sound like a prudent idea, but I’ve yet to regret it. I started contributing to open source before I left, but having more time certainly helped me do more of it, and with a bit of luck I may even finish the first version of my web Instapaper client before this year runs out.
I built Supervizor with Primož, and it is the project I am most proud of. We haven’t done everything we set out to do and we are unlikely to this year, but I hope that in early 2012 Supervizor’s data will become easily accessible.
There were lots of experiments on myself. Having a list of possible (and finished) projects on the wall feels liberating. Counting the books I read doesn’t work, but the digital sabbath is great. I left Twitter to make better use of my time, but it’s really too soon to tell. I learned and did a lot, more than I expected, but as always less than I hoped. Somewhat unintentionally we reduced our carbon footprint further even though we traveled a lot.
I also finally found time to really reflect on what I want to do, and my personal research agenda is taking shape. I want to work mostly on using open data to improve civic engagement and on exploring social software for introverts.
So what will 2012 bring?
Nobody really knows, but this year it became clear to me that I want my work to be more socially engaged, and I am thrilled I got an opportunity to join the amazing people at Aptivate for what I hope will be a long time. I’ll continue to work on a so far semi-public open data project which I am sure will become public soon. I plan to read lots of books, but start fewer than two new projects. I am sure I won’t finish everything I set out to do, but that’s alright. We always overestimate what we can do in a year and underestimate what we can do in ten, so I just need to keep going.
And surely I will continue to worry. There are only more reasons to despair over the environment, and the eurozone of today might not be around this time next year. However, we are not piling up canned food yet, so the optimist in me obviously hasn’t died completely.
But first I will enjoy holidays. I hope you will too and I wish you a happy new year.
I got a few questions about what I'm working on at my new job at the Jožef Stefan Institute, so here's a short story about that.
Approximately a month and a half ago I joined the team working in the laboratory for wireless sensor networks, or SensorLab as people around here call it. They have been developing their own software and hardware for a year or so and have come up with an impressive collection of tools for gathering data from a large network of small sensor nodes.
At the core of their efforts is the VESNA platform, a wireless sensor network node with a cute female name from Slavic mythology. It's a small, modular microcontroller system built around an ARM Cortex-M3 chip from STMicroelectronics. At the center is a core board with the CPU, non-volatile storage and power supply. Then there's a connector on the right that connects to one of a collection of radio modules that take care of different communication needs. Finally there's an Arduino-like general purpose shield connector on the top and bottom that can carry application-specific expansion modules, for instance with specialized hardware for sensors that cannot be served by the general purpose IO and instrumentation amplifiers on the core board.
When you are mounting hundreds of such boards in places that were never meant to carry any electronics, getting power to each sensor gets problematic very fast. So one of the main features of VESNA is power provisioning. In addition to requiring very little power to begin with, the core board carries a very versatile power supply that is capable of efficiently harvesting power from a solar cell or a range of external voltages. You can also connect a rechargeable battery to it and it will weather through nights or other power shortages.
Since you can't yet depend on having UTP cables at each and every place, communications are mostly based on radio (hence the wireless in wireless sensor networks). Radio boards are built around several multi-purpose chips that can use a number of regimes in the ISM bands, from proprietary schemes to IEEE 802.15.4, from star topology to mesh networks.
On the software side there's also a lot of interesting work happening. The guys and girls at SensorLab have been developing their own C library for VESNA peripherals, which you can use to compile and run bare-bones programs on the CPU. For simpler applications there's Arduino compatibility in the works, while heavier applications might use a port of the Contiki operating system, which allows you to access sensors through web-friendly REST interfaces on a 6LoWPAN network.
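To give a feel for what those REST interfaces look like from the client side, here is a minimal Python sketch. The node address and the resource name are made up for illustration, and it assumes the node is reachable over plain HTTP, for instance through a border router acting as a gateway; the actual VESNA/Contiki setup may well expose CoAP or different resource names.

import json
import urllib.request

# Hypothetical address of a sensor node behind a 6LoWPAN border router;
# the "sensors/temperature" resource name is made up for this example.
NODE_URL = "http://[2001:db8::1]/sensors/temperature"

def read_temperature(url=NODE_URL, timeout=5):
    # Fetch one reading from the node's REST interface and decode it,
    # assuming the node replies with a small JSON document such as
    # {"value": 21.5, "unit": "C"}.
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    reading = read_temperature()
    print(reading["value"], reading["unit"])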
And the best part of it is that most of the things I described are going to be released under a free, open source license in the following year, since we are hoping to build a lively open hardware community around this project. The internet of things is the new buzzword, you know, and there are plenty of IPv6 addresses to go around. We think VESNA will help you make good use of at least a small part of them.
The easiest way to remember a visitor’s preference is to store it in their browser. Cookies used to be popular before they were deemed evil, but they have other limitations as well. Hence the popular switch to HTML5 in-browser storage technologies like localStorage.
I think there is one important difference between cookies and tools like localStorage that is often overlooked, and it’s not the size of data that can be stored. Cookies are sent with each page request while data stored elsewhere isn’t, so changing a cookie on either side automatically shares that state with the other. I use localStorage in my theme switcher because I think the server doesn’t need to and should not know which theme is used. But for storing shared state, especially state that expires, cookies remain a reasonable if not the best choice.
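To illustrate that difference from the server’s side, here is a minimal Python sketch using only the standard library. The “theme” cookie name is hypothetical (my actual theme switcher is client-side JavaScript); the point is that a preference kept in a cookie travels with every request, while anything kept in localStorage never reaches the server unless a script explicitly sends it.

from http.cookies import SimpleCookie

def preferred_theme(cookie_header, default="plain"):
    # The Cookie header arrives with every request, so the server sees
    # this preference whether it wants to or not.
    cookie = SimpleCookie()
    cookie.load(cookie_header or "")
    morsel = cookie.get("theme")
    return morsel.value if morsel else default

print(preferred_theme("theme=dark; lang=en"))   # prints: dark
# There is no server-side equivalent for localStorage: that data stays
# in the browser.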
None of this is exactly new, but I think it is worth remembering. In other news, I dislike Chrome’s interface limitations more and more (the Developer Tools and the extensions framework being the exceptions).
It's almost that time of the year already. You know, to chat with fellow hackers over a bottle of Club Mate, pile up a fresh amount of sleep deficit and draw a line under this year's happenings at talks and workshops.
Yes, this means that in exactly one week I'll be flying to Berlin to attend the 28th congress together with the same group of guys from Kiberpipa as last year. This year I hope to bring a bunch of portable radio hardware with me to test and experiment with: the 433 MHz sniffer is certainly going into the bag, as well as the Funcube dongle and most likely also a handful of wireless sensor hardware I'm working on at my new job.
Perhaps I'll manage to schedule a lightning talk about some of that stuff. Anyway, if hacking ISM band transmissions sounds like fun to you, give me a call. In addition to all the usual means of communication I'll also be reachable via the 9-8087 extension on the Eventphone GSM network.
Communities developing modern open source and free (as in freedom) software now have more than 30 years of experience behind them. During this time software licenses like the GNU General Public License, BSD license and a small handful of others have seen such wide use that they became de-facto standards when it comes to releasing code to the public. Although there are still some gray spots regarding legal issues, as a document from Free Software Foundation Europe nicely explains, the community has developed some stable views on what is acceptable and what isn't under those licenses.
Logo by Mateo Zlatar
The situation is somewhat more complicated in the field of hardware. Hardware mostly falls outside of the copyright law that makes software licenses possible. There is a fuzzy line between uploading an RTL design to a software-programmable circuit and etching a printed circuit board based on a schematic, but generally imposing restrictions on manufacturing and distribution of physical objects falls under the domain of patent and trademark law. While in most countries the author of a creative work is automatically granted copyright free of charge, patents and trademarks that would allow you to set any specific rules for the use of your design documents are neither automatic nor cheap.
If you look around, practically all popular open hardware projects ignore this issue and simply apply either a software license like the GPL or a Creative Commons license to their design documentation: for instance, RepRap decided on the GPL, while Arduino uses Creative Commons Attribution Share-Alike for their board designs. This doesn't always have the desired effect though. For instance, when someone sells products based on a modified design under the GPL, he doesn't need to provide the modified documentation nor credit the original author, since he is not distributing their copyrighted works.
There are attempts to address this issue. First of all, there is now a draft of the Open Source Hardware Definition that aims to be what the Debian Free Software Guidelines are for software, outlining the basic requirements that need to be satisfied before a design can be called open source. There are also several license texts that attempt to focus specifically on hardware projects. The ones that appear the most complete and widely used are the TAPR Open Hardware License from the Tucson Amateur Packet Radio group and the recently released CERN Open Hardware License from the European organization for nuclear research.
Both of these, however, contain some terms that might be surprising for anyone used to software licenses. For instance, both require you to make some reasonable effort to contact the original author or authors when distributing derived works or manufacturing products based on their designs. While a nice provision for one-man efforts, this puts an upper bound on how large a project using this license might become. Imagine trying to contact all the contributors to the Linux kernel today. TAPR has even more terms that limit its scalability. For instance, it specifically requires attribution on PCB artwork, which can waste costly PCB area. It also requires you to distribute "before" and "after" versions of all modified files, which can again become quite unwieldy when you have to distribute the whole history with every copy.
On the other hand, what I miss in these licenses are stronger terms for practical modifiability. While the GPL clearly defines that source code must be in the preferred form of the work for making changes to it, TAPR merely says that you must also include open format versions (such as Gerber, ASCII, Postscript, or PDF) if your tools can create them. The openness of the format does not, however, make the design easily changeable. In fact, all of the listed formats can be compared to a compiled binary in the software world. The CERN license imposes no such constraints at all, but then, neither do Creative Commons licenses, so in that respect all of them are more like the permissive BSD license than the GPL of the software world. Modifiability of the hardware design files is arguably less important than having source for a binary computer program, since even a PNG image of the schematic will enable you to understand the design, but if you want a lively community around your open hardware project, I think keeping designs easily modifiable is a must.
Given all of the above, I find that the currently existing open hardware licenses still fall short of specifying good terms for redistribution of the documentation. While they do state the topmost requirement for open hardware, that documentation for physical products must be shared, I think it's questionable whether that is actually enforceable in the way specified. From what I've seen, the CERN license shows the most promise of addressing these issues and I will closely follow the happenings around it. In fact, they already have a list of issues to be addressed in the next version that more or less covers all of my concerns, so I'm keeping my fingers crossed. In the meantime, I'll probably stick to the GPL or Creative Commons for my projects involving hardware designs.
I've written before about the Funcube Dongle Pro. Apart from receiving satellite telemetry and amateur narrow-band FM, it can also be useful as a poor man's spectrum analyzer.
I'm not talking about doing a Fourier transform on the baseband data. At around 80 kHz that's not really useful if you want to see the big picture, for instance when measuring spectrum of a DVB-T transmitter with a channel width of around 6 MHz.
What you can do however is to emulate a swept-tuned spectrum analyzer. You sweep the Funcube's tuner through the range of frequencies using steps of several tens of kHz and pipe the baseband stream to a software power detector. Then you can plot the received signal strength against the frequency and get a fancy graph like this:
Of course, there's a catch. Funcube Dongle uses an audio codec that is connected to the user space through the ALSA interface. This means that after you change the tuner frequency the sampled data needs to go through plenty of big buffers before it gets to the detector.
So while the actual hardware might settle very quickly to the new frequency (I don't have data for the E4000 chip used in FCD, but other comparable tuners have channel change times in the order of milliseconds), software buffers limit the minimum sweep time.
For instance, this is what becomes of the picture above when you don't wait long enough for the buffers to flush between (random, in this particular case) frequency hops.
It takes around 5 minutes to draw such a persistence plot with the bandwidth of 20 MHz, 50 kHz hops and 10 passes. So don't expect to get anywhere near the theoretical limit for sweep time versus resolution bandwidth.
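For reference, the core of the idea fits in a few lines of Python. This is only a sketch of the swept power detector, not the actual fcd_scanner code: set_frequency() and read_baseband() are placeholders for whatever tuner and sound card interface you use, and the settle step exists precisely because of the buffering problem described above.

import numpy as np

def sweep(freqs_hz, set_frequency, read_baseband,
          settle_samples=8192, detect_samples=4096):
    # Swept power detector: returns the average power in dB for each hop.
    powers_db = []
    for f in freqs_hz:
        set_frequency(f)
        read_baseband(settle_samples)      # discard until buffers settle
        x = read_baseband(detect_samples)  # complex baseband samples
        powers_db.append(10 * np.log10(np.mean(np.abs(x) ** 2)))
    return np.array(powers_db)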
You can get the Python source from the git repository below. It requires a recent GNU Radio installation, FCD source block and matplotlib.
$ git clone http://www.tablix.org/~avian/git/fcd_scanner.git
$ python fcd_scanner/fcd_scanner.py --help
Usage: fcd_scanner.py: [options]
-h, --help show this help message and exit
-d DEV, --device=DEV Use Alsa DEVICE for FCD access
-c FREQ, --center=FREQ
Center frequency in kHz
-s SPAN, --span=SPAN Scanning bandwidth in kHz
-t ST, --sweep-time=ST
Time for one frequency sweep in s
-g GAIN, --lna-gain=GAIN
LNA gain in dB (default 10)
-p STEP, --step=STEP Frequency step in kHz (default 25)
-n N, --pass=N Number of passes (default 1)
-o FILE, --output=FILE
Write measurements to FILE
-l, --line-plot Draw a real-time line plot
-e, --pers-plot Draw a real-time persistence plot
With a new job comes a new work laptop. Last week a shiny new Hewlett Packard EliteBook 8460p was waiting for me on my desk at the Jožef Stefan Institute.
From the outside it's one of the prettier PC laptops and it's hard to say HP's industrial designers haven't looked at Apple for inspiration. Aluminum case dotted with small white LEDs, chiclet keyboard, black bezel around the screen, a round HP logo on the back of the display. If you've seen a MacBook Pro you can probably see the similarity. However, good looks don't necessarily mean good design, and HP made some weird decisions that I find hard to believe in a machine that costs more than 1600 €.
First of all, I might say quality control seems a bit lacking. Left and right mouse buttons were out of level just enough to look sloppy. The "a" key keeps falling out. There's something wrong with the latches on the docking station since the cracking noise they make when I take the laptop off always makes me cringe. There are also some unfortunate design decisions: each time you pick up the laptop from the table you basically lift it up by the DVD drive door, which I fear is not good for its longevity. The Apple-like LEDs that shine through (laser-drilled?) holes in the aluminum are quite hard to see unless you bend down to the table and look directly at them.
What about the software side? Linux Laptop Wiki has some general information that gives a good overview of Linux support for this laptop's hardware.
Out of the box the disk had all 4 primary partitions already used. Sadly I had to retain the Windows installation, so I only deleted two partitions at the end of the drive that looked like they contained some HP restore and diagnostics software. Debian Squeeze installer then shrunk the remaining NTFS filesystem without further problems and Windows seemed to have survived the operation without any noticeable damage.
Getting the installer to boot, however, did take some experimenting: the laptop has 4 USB ports and will happily boot off a USB key in any of them. However, for some reason all except the one on the right with a lightning bolt icon are turned off after boot. So unless the installation USB key was in that port, the installer couldn't find the installation media. Funny, since after installation all USB ports work without problems.
To get the Radeon HD 6470M running in xorg I had to install a newer kernel from Squeeze backports (linux-image-2.6.39-bpo.2-amd64) and a newer Radeon driver (xserver-xorg-video-ati 1:6.14.2-1). The card also requires non-free firmware, which is shipped separately in the firmware-linux-nonfree package. Without it you will only get military-grade noise on the LCD panel. For 3D acceleration you also need libdrm-radeon1. With this setup everything seems to work fine except setting the display brightness (there's a kernel bug already reported): manual setup through the sysfs is ignored although the interface is there and I don't even know how to approach the ambient light sensor.
By the way, I've heard complaints about the low quality of the LCD. Mine looks great and makes my EeePC 901 look pretty dim in comparison.
Regarding other peripherals, there is not much to say. Wi-Fi works with the Intel drivers and requires the firmware-iwlwifi package. Compared to the Thinkpad monstrosity, the dock is only a bunch of hot-pluggable USB devices, which means no additional headaches there. The web-cam for some reason isn't detected automatically and you need to load the uvcvideo kernel module manually for it to work.
The power consumption hovers around 25 watts at idle, which is 10 watts more than when booted into Windows. With that, the battery lasts a little more than 2 hours. This might have something to do with that kernel PCI Express power consumption bug. Setting power saving profiles on the Radeon doesn't affect the battery drain, which makes me suspect that power management doesn't work there either. Also, the GNOME power manager crashes because of the empty second battery bay. I've written a patch against the version shipped in Debian Squeeze, but I'm not that optimistic it will get applied, since it's now sitting in the bug tracker right next to another patch I made 3 years ago.
I should also mention the weird phenomenon where the computer freezes for several tens of seconds sometime after boot. There are no kernel messages logged about that and it doesn't seem to affect the running processes. I've seen such freezes on my old Thinkpad and on the EeePC, but they were always connected to IO contention. These freezes occur on an otherwise idle machine and the HDD light remains off, which makes me think they have some other cause. Also, a colleague with the exact same hardware and the latest Ubuntu version isn't seeing this, so I'm guessing it has something to do with my setup.
As you can see, after a week with this machine I have very mixed feelings about it. On one hand the aluminum case gives you a solid feeling. I love the dead simple screwdriver-less access to all the components inside, Linux support is pretty good and of course I don't have any complaints about the performance either. But seriously, shipping a laptop in this price range with a broken keyboard, dock and cheap-looking track pad?
Density maps are one of those things I need rarely enough that I forget how I did it last time, but often enough that spending 15 minutes searching on Google and browsing the Octave manual each time seems unnecessary.
So, for my reference and yours, here's the simplest, fastest way I know of producing one on a vanilla GNU/Linux box. A density plot is something Gnuplot doesn't know how to do by itself, so you need to fire up Octave:
octave> [c,x,y] = hist2d(load("foo"), 150, 150); imagesc(x,y,c)
Here, foo is a tab-separated-values text file with two columns: X and Y coordinate of each point. The two numerical arguments to hist2d are the numbers of buckets for each axis. This produces a plot like this by default:
As with all Octave plots, you can write the current picture to a file using:
octave> print -dpng foo.png
On Debian you need the octave-plot package installed for hist2d to work. See the Octave manual for ways to further customize the look.
By the way, you might have noticed that the Y axis is upside-down on the example plot above. That seems to be a bug in Octave when you only have negative values on the axis and I haven't yet figured out how to work around it.
Update: Running set(gca,'ydir','normal'); after the imagesc command restores the usual orientation of the axes for image plots.
My first somewhat serious look into the ARM architecture was at the CCC Camp, where I attempted to extract the secret signing keys via a buffer overflow exploit in the r0ket. Recently I started working on a bit more serious project that also involves copious amounts of C code compiled for ARM, so I thought I might give a quick overview of the small but important details I learned in the last few weeks.
First of all, targeting ARM is tricky business. As the Wikipedia article nicely explains, you have multiple families (versions) of ARM cores. They are slightly incompatible, since instructions and features have been added and deprecated over time. Recent cores are divided between application, real-time and microcontroller profiles. You find application cores (say, the Cortex-A9) in iPads, phones and other gadgets that need a fancy CPU with cache, SMP and so on, while microcontrollers obviously omit most of that. The most popular microcontroller profile core right now seems to be the Cortex-M3, where Cortex is a code name for ARM version 7 and M3 stands for the third microcontroller profile. However, this is still a very broad term, since these cores get licensed to different vendors that put all kinds of their own proprietary periphery (RAM, non-volatile storage, timers, ...) around them. So stating that you target the Cortex-M3 is much less specific than, say, targeting x86, where you have de-facto standards for at least some basic components around the CPU. That's why the current Linux kernel lists 64 ARM machine types compared with practically just one for x86.
Speaking of Intel's x86: people like to say how it is full of weird legacy quirks, but ARM has some of its own. One thing that caught me completely by surprise is the Thumb instruction set. It turns out that the original dream of a clean, simple design with a fixed instruction length didn't play out that well. So ARM today supports two different instruction sets, the old fixed-length one and a newer variable-length "Thumb" set that mostly has a one-to-one relation to the old one. In fact, the CPU is capable of switching between instruction sets on a function call boundary, so you can even mix both in a single binary.
To add to the confusion, there are two function call conventions, meaning you have to be careful when mixing binaries from different compilers. There's also a mess regarding hardware floating point instructions, but thankfully not many microcontrollers include those. Debian Wiki nicely documents some of the finer details concerning ARM binary compatibility.
All of these issues mean there's a similar state of chaos regarding cross-compiler toolchains. After what seems like many discussions, Debian developers haven't yet come to a consensus on what flavor to include in the distribution. That means you can't simply apt-get install arm-gcc like you can for AVR. Emdebian provides some cross-compiler binaries, but those are targeted at compiling binaries for the ARM port of Linux, not bare-bones systems for microcontrollers. You can build stripped-down binaries with such a compiler, but expect trouble, since build systems for microcontroller software don't expect that. From what I've seen, the most popular option by far appears to be CodeSourcery, a binary distribution of the GNU compiler toolchain. The large binary blob installer for Linux made my hair stand up. Luckily there's an alternative called summon-arm-toolchain, which pieces together a rough equivalent after a couple of hours of downloading and compiling. If this particular one doesn't do the trick for you, there's a ton of slightly tweaked GitHub forks out there.
By the way, GCC has a weird naming policy for ARM-related targets. For instance, CodeSourcery contains arm-elf-gcc and arm-none-eabi-gcc (where arm-elf and arm-none-eabi refer to the --target configure option used when compiling GCC). Both will in fact produce ELF object files and I'm not sure what the actual difference is. Some say that it's the ABI (function call convention), and in fact you can't link binaries from one with binaries from the other. However, you can control both the ABI and the instruction set when invoking GCC through its target machine options. Other than that, they seem to be freely interchangeable.
For uploading the software to the microcontroller's flash ROM I'm using the Olimex ARM-USB-OCD with OpenOCD, which also seems to be a popular option. It allows you to program the microcontroller through JTAG and is capable of interfacing with the GNU debugger. This means you can upload code, inspect memory and debug with single-stepping and breakpoints from GDB's command line in much the same way you would a local process. There are some quirks to getting it running though, namely that OpenOCD 0.5.0 doesn't like recent versions of GDB (a version that works for sure is 7.0.1).
I think this more or less wraps it up. As always, having the reference manual of the particular processor you use within arm's reach is a must. When things stubbornly refuse to work correctly it's good to fall back to a disassembled listing of your program (objdump -d) to check for any obvious linking snafus. And, when trust in all other parts of your toolchain fails and the sharks are circling ever closer, a trusty LED is sometimes still an indispensable piece of debugging equipment.
Google Reader was redesigned lately and I’ve been annoyed ever since. I had the dubious privilege of cutting and changing product features people loved in pursuit of <del>higher</del> different goals, so I try to be understanding when others do the same. I have mostly found clumsy workarounds for the removed features, but I do wish I could at least still trust that the list of unread items actually has all of them. On a positive note, I can save some electricity now, because the copious amounts of “helpfully” white whitespace illuminate this room brightly enough that you wouldn’t sit naked in front of your computer even with the lights turned off. That is, if you are the sort of person who likes doing that but stops short of flashing your neighbors.
I still strongly dislike the changes made, but I continue using Google Reader, because crack-heads don’t give up dope just because it was cut too thinly. I cherish my list of reading sources; like a gardener, I have been cultivating it through the years because I believe it makes me better informed than I would be if I relied only on links shared by others. This may be elitist, but it is also true.
We are biased when choosing friends and the communities we belong to. At the very least we enjoy our lives more when surrounded by like-minded people, which is really a lighter shade of groupthink. We share to tell stories as much about what is shared as about who we are. Even when not self-censoring or trying to project an image, we are still horribly bad at evaluating what influences us and how. Sharing everything, as this idiotic article suggests, doesn’t fix this <sup></sup>. It’s still content from the same people, only more of it.
Then there are social news sites, which are in essence news organizations with a bigger editorial board. Their focus might not be the same and their world view less obvious (or not) than that of traditional boards, but the end result really isn’t all that different. I don’t dread waking up in a world without Apple as much as I do in one without fish, yet it is not articles about all things piscatorial that keep popping up on a regular basis.
This doesn’t make socially filtered news useless, just limited and best suited for finding out what is popular at the moment. It should be a side dish, not the whole diet. Getting some of your information diet from social sources may improve it, but relying only on them is just stupid. I wouldn’t fret so much if I didn’t worry about development trends, the latest Reader changes being one example of them.
Reader had two methods of sharing. The obvious one was the Share button, which was adequately replaced by sharing to Google+ circles. The other, which was the one I actually relied on, was creating public feeds for articles marked with certain tags. The most important difference is that in the first case you grouped by intended audience and in the second by actual content <sup></sup>. Instead of following me, you could just follow my selection on a particular topic, which in most cases would probably be closer to what you want.
By itself, stripping a feature like that doesn’t mean much. However, when I also consider other changes such as the aforementioned abundance of whitespace, the removal of “Note to reader” and the new reading-unfriendly theme, it’s easy to come to the conclusion that all roads now lead to Google+. Reader’s role is at best to feed its younger brother with stuff to socialize around.
It would be wrong to attribute these changes simply to competition with Facebook, since they are part of a larger trend toward social curation. I find this trend a normal consequence of a web ecosystem where most product innovation happens in VC-funded startups. How companies were funded was always a part of their DNA, and the economics of today’s VC environment for companies that will probably be acquired at some point (and let’s be honest, who REALLY believes most news experiments won’t be?) almost demand quick and high growth. It’s not impossible to achieve this with a sources-based product, but it’s certainly harder and less obvious than creating another twist on social news.
If my first and main point was a personal appeal to seek insight also in your own, personally picked sources, then my second is to question whether shaping the web, and the world with it, should really be left only to industry and academia. It really doesn’t have to be this way.
- The browser’s history is a great place to see just how much of what we visit is unimportant, unrepresentative and often unsharable. The small friction necessary for a deliberate act of sharing is actually a feature that provides at least a modicum of reflection on the content’s share-worthiness.
- Feeds enable that and are one of the crucial building blocks for what I have started to call social software for introverts. It is software which is better when used by many, but is good even when you are its only user. Instapaper would be a perfect example of such an application and Facebook is a counter-example.
I was going low on my personal business cards, so I ordered a new batch from Moo. Besides adding some new circuit-porn designs for the back side I also added the fingerprint of my GPG key next to my e-mail address on the front. When I flashed a bunch of cards in Kiberpipa the other day, Brodul pointed me to this policy for business cards of KDE developers that says that GPG fingerprints on the cards give an unprofessional impression.
Interesting. After a bit of browsing, opinion on this matter in the open source world seems to be divided. I remember that cards from Red Hat were the first I saw with the fingerprint displayed. However, there's also this interesting complaint that it has only ever been used to mock the owner of the card. Fingerprints also seem to be popular with Debian developers, and people were seen complaining when they were dropped from Ubuntu's design.
I actually thought for a while whether to add it or not before submitting the design. For instance, it makes the little Moo card much more cluttered and I liked the previous simplicity of three lines with an e-mail and web address. On the other hand I like to promote secure communication and have been consistently cryptographically signing my mail for many years now. So you can also take it as a statement I guess.
Part of why I think sharing your public key in this way is a good idea is that, in my opinion, trusting keys based on traditional key-signing parties was a big blunder. There you had 10 random people signing keys based on some government-issued identification. All this does is transfer your trust in a government ID into trust in the key. Let's not even go into the problem of me not knowing what the official IDs look like in most countries, or the fact that I don't necessarily trust your government. You could score a hundred signatures like that and it still wouldn't help me know whether the key owner is the person behind the face I met at such-and-such a place.
So I am now willing to sign a public key after I have spent some time with the person who personally gave me that key. Enough time to convince me that they are who they claim to be. And I encourage other people to do the same when signing my key (a ritual, by the way, that a certain social scientist once described as fascinating).
Yes, this method is vulnerable to a social engineering attack. But so is signing a key based on someone's driver's license, and I am sure social interaction is harder to fake than a foreign-looking personal identification card. By getting a fingerprinted business card you get a hard copy of the fingerprint that is difficult to change undetectably and for which you can be reasonably sure that it belongs to the person who gave it to you.
I never intended to, but I somehow missed the last few elections because they fell on dates when we were travelling. It looks like this time will be different, so we have spent quite some time thinking about whom to give our votes to. Like most people I know, I find most parties in Slovenia distasteful, and while I am not enamoured of any of them, I think I will be able to cast a vote. But I do wish our system was slightly different.
Here’s what I would change if I could:
- Introduce a preferential vote (only first and second choice). It kind of sucks that you have to decide between a party that you would like to vote for and a party that actually has a chance of getting into parliament according to polls. I know I am not alone with this dilemma and I often wonder whom people would pick if they were not afraid of “wasting” their vote.
- Compulsory voting (but only together with point 3). Everyone who can vote really should, unless they can provide a good reason why they can’t. Voting every few years really should be the bare minimum of a citizen’s civic engagement.
- Add an explicit choice “Nobody from this list”. Voting for the least awful can still be unpalatable. Right now we also can’t distinguish between people who didn’t want to vote for anyone on the ballot and those who couldn’t be bothered. It’s easy to see low turnouts as a failure of the democratic process when they have less to do with the process and more with the available choices. “Nobody” votes should be counted when calculating the parliamentary threshold, since they are votes against all parties, and ignored when calculating how many seats the parties in parliament get.
Regretfully, these changes wouldn’t benefit the parties in parliament (whichever they are or will be), so there is almost zero chance that the above changes will happen. Unless of course there is enough popular support to have a referendum.