I'm probably the last person on the internet to run into this, but I love it.
Click the link for the full screen version, even better with nice speakers.
I don't really have a post here. I just wanted an excuse to use Panic's fancy image zoomer. If you're reading this in the feed it won't work, of course.
Randy memetagged me. Basically, you enumerate all the websites that you use: daily, weekly, and monthly.
For some reason, del.icio.us just (intentionally) broke a major piece of their functionality. They no longer file tag subscriptions and user subscriptions into the same bucket; they’ve created a new bin called “your network” (hatehatehate stupid Web 2.0-ism) to hold the user subscriptions. The reason I use del.icio.us in the first place is so that I have fewer places to check for new stuff. That single inbox feed was a feature, and you just broke it.
It’s not often you come across a website that completely changes the way you use the internet. del.icio.us was one of those sites for me. It made linkblogging effortless, and finding new links became just as easy. Though there have certainly been some growing pains as the userbase has grown to half a million(!) users, I want to raise a toast to the service on this occasion.
Anyone know anything about them? Is it just YABSE, or is there some secret sauce? They’re sending a lot of traffic my way, so I’m not complaining.
Hola, Lazyweb. Since adding a web browser to my Sony PSP, I’ve become obsessed with the idea of instant handheld access to reference sources. Google works reasonably well in the PSP browser by default, and Wikipedia’s simple layout works OK on short entries, though longer ones tend to cause out-of-memory errors in the browser. Bloglines Mobile works more or less perfectly.
I could kind of use the Internet Movie Database, though once again I had problems with really slow rendering and out-of-memory errors. I’m happy to say I found a solution, though. There’s an open source Python library that interfaces with the IMDb called, logically enough, IMDbPY. There’s a simple CGI frontend that can talk to this library. After building the library (I did it via DarwinPorts) I installed the gateway on one of my local boxes, and now I can search by title or performer and get lightweight results pages that perform fine in the PSP browser. It’s perfect for sitting on the couch watching TV, being able to look up J. Random Actor and see what else they’ve performed in.
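The idea behind the gateway is simple enough to sketch. Here’s a minimal illustration of the “lightweight results page” half of it; the `render_results` helper is hypothetical, while the `imdb.IMDb()` calls shown in the comment (`search_movie`, etc.) are IMDbPY’s real entry points, left commented out so the sketch runs without the library installed:

```python
# Sketch of a lightweight results page, in the spirit of the IMDbPY CGI
# gateway described above. render_results() is a hypothetical helper.
from html import escape

def render_results(results):
    """Render (title, year) pairs as bare-bones HTML that a
    memory-constrained browser (like the PSP's) can handle."""
    items = "".join(
        f"<li>{escape(title)} ({year})</li>" for title, year in results
    )
    return f"<html><body><ul>{items}</ul></body></html>"

# With IMDbPY installed, the pairs would come from the real library:
#   from imdb import IMDb
#   ia = IMDb()
#   results = [(m["title"], m.get("year", "?"))
#              for m in ia.search_movie("blade runner")]

page = render_results([("Blade Runner", 1982), ("Brazil", 1985)])
```

No tables, no images, no stylesheets: just the minimum markup a couch-bound lookup needs.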
I know the business models of a lot of reference sites make doing this sort of thing a challenge for them, but there are so many of these sorts of reference sites that potentially become even more valuable when you’re out in “the real world” rather than sitting in front of a computer. I know that in the “perfect future” when every site has a well specified API and uses smart stylesheets that gracefully degrade, we won’t have to worry so much about these things, but in 2005 it’s still a challenge.
I mentioned this on the linkblog, but it’s cool enough to merit a separate link in a full entry. Gvisit gives you a visual overview, via the Google Maps API, of the (approximate) locations your visitors are coming from. Here’s mine. So far I’m only seeing hits from North America and Western Europe, though. I wonder which IP-to-geographic-location tool it uses.
Fun to watch, though — pure geoporn.
And no, I don't exactly see the point, either :)
(or I See Something You Certainly Won’t Get)
The bulk of this post doesn’t really concern me; it’s mostly about things I don’t care about. I’m not the kind of guy, at any level, who’s looking for an it’s-not-just-a-floor-wax, it’s-a-dessert-topping-too blogging client/aggregator/server whatever-the-fuh. Unix, man: simple, focused tools that do one thing (well) and aren’t selfish with their data.
I use a semi-fancy text editor as a blogging client. I could use a fancier one, or a simpler one. I serve my pages with an almost pathologically simple publishing system. I think the real reason I favor this way of working is that I’m rarely surprised by my tools. When you serve content, surprise is bad.
The following bullet point, though, caught my attention:
3. WYSIWYG copy and paste.
It totally amazes me that both Moveable Type and TypePad don’t have a way for Windows users to go to a blog post - copy it - and paste it.
The web is not WYSIWYG, not from the top of its addled little head nor to the soles of its duct-taped-together Chuck Taylors. When you (or even worse, your tools) pretend that it is, you set yourself up for certain disappointment.
It would be nice for simple authoring if the web was WYSIWYG, and I’d like some ice cream, please, sir. And a pony. Definitely a pony.
Early on in the web’s history, it was the norm to conflate presentation and content. Of course, this never really worked, which became screamingly obvious once the second and succeeding browsing useragents appeared. Suitably chastened, later formal iterations of HTML and related tech made this distinction more explicit. CSS was developed to give authors (some) control over presentation, but the standards were always painfully plain on some very important points: presentation and content are separate; presentation can and will vary widely across differing implementations and contexts (e.g. desktop vs. mobile, different useragents); and factors such as accessibility, iñtërnâtiônàlizætiøn, and a thousand other details have to be taken into account.
The sad fact is that a huge number of pages (perhaps even the majority) on the real, wild, web sport at least one of the following problems:
Any time you copy and paste from someone else’s broken page, you have a high likelihood of importing whatever brokenness existed in the source into your own. There are a few nuts out there like myself who like to poke the bear with a sharp stick by playing cute with things like curly quotes and uncommon characters, but I do it with eyes wide open, knowing that I need to tread carefully, and that I still stand a good chance of being mauled at any moment.
It’s true that the wide world of humans out there needs tools that make this stuff easier. I will be just as happy as everyone else when those tools arrive.
Dave Hyatt’s been talking about the extensions that the WebCore team at Apple has made to support some of the new tech in next year’s OS X release, Tiger. This caused a bit of a stir, as a few people saw in it the potential for a return to the bad old days (circa 1996-7) of browser vendors adding extensions to HTML willy-nilly for competitive advantage.
The WebCore team is apparently listening to this feedback and is exploring ways of adding these extensions in a non-disruptive fashion. Toward the end of one of his posts, Hyatt makes a statement that fascinates me:
Going forward, I’m curious what the reaction will be as WHAT-WG works to further extend HTML. Assuming that the W3C has really decreed HTML4 to be obsolete, what happens when a proposal is made by multiple browser vendors to extend it? If the W3C rejects it, should the browser vendors be forced to keep their content namespaced forever? I guess we’ll cross that bridge when we come to it.
It’s pretty obvious what’s happened here. The WHAT-WG has basically been forced to fork HTML, as the W3C has moved on to other horizons and isn’t really doing anything with the 97 percent of the web that’s already here. One hopes that the W3C will be open to a merge of the forks down the road, but, if they don’t, I think it’s obvious what happens next.
I don't have a horse in this race (I'm not a coder, nor a bidnethman), I'm just an interested observer. At this point, assuming the Echo folks wind up with an implementable spec (something I entirely expect to happen, given the technical caliber of the people involved), the outcome, to me, is clear. The installed base ensures that all current weblog and aggregator vendors will continue to support RSS (most likely the 0.91/2.0[x] strain) for the foreseeable future. Every developer of note in this field has already pledged support for Echo, and (paranoid conspiracy theories aside, and we know who's spreading that FUD), an open format hashed out via a transparent process is going to appeal to a lot of people. I'm going to provide feeds in both formats once this is all settled, and I imagine a lot of pragmatic folks will do the same thing. No big deal.
Here's a fascinating (for certain, functionally pathetic definitions of fascination common to geeks like me) look inside Google's query-serving architecture. It examines how they exploit the parallelism (at the network, hardware, and processor levels) inherent in serving search-engine queries, and explains their practical choices in terms of hardware costs, power consumption, and the savings possible when cheap hardware is paired with fault-tolerant software. (via Aaron Swartz's Google weblog)
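The core pattern in that architecture — fan a query out across many index shards in parallel, then merge the partial hit lists — is easy to sketch. Everything here (the shard contents, the scores) is invented for illustration; only the shard-and-merge shape mirrors what the paper describes:

```python
# Toy sketch of the shard-and-merge query pattern: each shard holds a
# slice of a (toy) inverted index mapping term -> [(doc_id, score), ...].
from concurrent.futures import ThreadPoolExecutor

SHARDS = [
    {"apple": [(3, 0.9), (7, 0.4)]},
    {"apple": [(12, 0.7)], "pear": [(5, 0.8)]},
    {"pear": [(9, 0.6)]},
]

def search_shard(shard, term):
    """Return this shard's partial hit list for the term."""
    return shard.get(term, [])

def search(term):
    """Query all shards in parallel, then merge and rank the hits."""
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = pool.map(lambda s: search_shard(s, term), SHARDS)
    hits = [h for part in partials for h in part]
    return sorted(hits, key=lambda h: h[1], reverse=True)

top = search("apple")  # doc ids ranked by (toy) score
```

The appeal, as the paper explains, is that each shard can live on a cheap commodity box: a dead box just means one missing partial list, not a failed query.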
I know I shouldn't complain. Sourceforge provides valuable services to the geek public, for free, but their infrastructure is so broken! Their public CVS servers are almost impossible to access -- sometimes it seems like it's a contest to see how many times you can see the "cvs [login aborted]: recv() from server cvs.sourceforge.net: Connection reset by peer" message in a single day. I use a number of very actively updated programs, so I like to stay in sync with the development (CVS) versions of quite a few of them, and, for any Sourceforge-hosted project, this error message (or something very much like it) is a (painfully) regular sight.
If the instability of SF's infrastructure only affected the 0.01% of people who "need" to keep source trees in sync, that would be one thing. I'd still whine about it, but it certainly isn't something that would affect a measurable portion of even the general computing public.

The problem is that, increasingly, even ordinary software projects with a general end-user audience are hosted with Sourceforge, and their file mirroring system is very, very broken. Sourceforge mirrors their content with several organizations, geographically spread around the computing world. This is good, because it distributes the bandwidth load across several large "pipes", rather than requiring Sourceforge to bear the expense of many ungodly huge net connections on its own.

For this reason, they don't provide direct links to binaries for hosted projects. Instead, files are distributed via HTTP redirect links to the various mirror sites. Unfortunately, their mirroring process is unreliable and it will often serve up redirect links to mirror sites that have not received the files yet. Many times you'll find yourself presented with a list of 10 mirrors for a new file release, only to get 404 link errors on the first 6 or 7 or 8 links you click. What is the logical end-user response to this? "These open-source projects are always so unpolished. They can't even provide working download links for their software."
But, I know, it's free, and if I can't provide something better I should just shut up. But damn.
What the hell happened to Versiontracker? The new design is awful. It's clearly designed to force searchers into more pageviews (and hence, more adviews) at the expense of usability. All the more reason to use Ben Moore's MacUpdate Sherlock channel.
The recent Scheißstürm (is that real German slang or am I just fronting? no idea...) illustrates that, for purposes of establishing identity in a long and contentious thread, trackbacks are more useful than comment entries. And don't get me started on those plaintext excerpts again. I suppose trackbacking preëmptively is a good habit to get into, because if you manually send a ping, at least you get to control what gets excerpted.
If you fail to plan, plan to fail.