Monday, December 19, 2005

The State of Digital Cameras

Most of the photos I shoot these days are digital (using a Canon Powershot G5). The immediacy of digital and the ease with which I can get photos into my computer and up on the web are big wins for me—at least for many purposes. I've also run into a number of limitations and annoyances with the G5 that are common to cameras of its class (not really a "point and shoot" given that it's higher-end than that, but I'm not sure what else to call it). But that's a topic for another post. For today, I thought I'd throw out some thoughts on the state of digital SLRs.

Last week, I had the opportunity to briefly play with a Canon EOS 5D at B&H Photo in New York. This is Canon's newest "prosumer" or advanced amateur digital SLR body and it's significant for two reasons: its sensor is full frame and it's (getting close to) reasonably-priced.

For me and many others who have a stable of Canon EOS lenses for their film SLR backs, the full frame sensor is significant. It means that a 20mm wide angle lens is still a 20mm lens. With smaller sensors, the focal length is effectively multiplied. Thus, that 20mm lens now acts like a not-so-wide-angle 32mm lens (20 * 1.6). This isn't such a big deal (in fact it may not be much of a negative at all) if you mostly shoot telephotos, but if you use wide angles a lot it is. Of course, a larger sensor can also mean more pixels (all other things being equal), but with pixel counts on even smaller sensors (such as on the EOS 20D) exceeding 8 megapixels, this is a secondary consideration for me. (The EOS 5D is 12 megapixels.)
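The crop-factor arithmetic above is simple enough to sketch. A minimal illustration (the function name is my own; the 1.6x factor is Canon's for its smaller APS-C sensors, with 1.0 for full frame):

```python
def effective_focal_length(focal_mm, crop_factor=1.6):
    """35mm-equivalent field of view of a lens on a cropped sensor.

    Canon's APS-C bodies (e.g. the EOS 20D) have a ~1.6x crop factor;
    a full-frame sensor like the EOS 5D's has a factor of 1.0.
    """
    return focal_mm * crop_factor

print(effective_focal_length(20))        # a 20mm lens behaves like ~32mm on a 1.6x crop
print(effective_focal_length(20, 1.0))   # full frame: a 20mm lens stays 20mm
```

The same multiplication explains why telephoto shooters may not mind: a 200mm lens "becomes" a 320mm lens for free.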

In other respects too, the EOS 5D is more comparable to film bodies in the same general category of Canon's lineup. This is where touch and feel come in. It's solid and beefy and has a large clear viewfinder. Contrast that with Canon's sub-$1K Digital Rebel XT which has a pretty decent set of specs (except for not being full frame), but is almost too small for me to hold comfortably and the viewfinder looks tiny. Compared to my EOS 3 film back, the 5D's focusing system isn't quite as sophisticated. There are doubtless other differences as well. (Some discussions I've read suggest that the weather seals may not be as complete on the 5D as the 3.) However, for the most part, the two bodies seem quite equivalent—subject to the inevitable differences between film and digital.

What of the price?

This is where the EOS 5D makes the most significant advances—while still falling a bit short. A full-frame sensor was previously available from Canon, but only for about $8K with the EOS-1Ds Mark II. While everything's relative I suppose, this price clearly puts it out of the reach of all but the most determined (or wealthy) amateur. At a bit over $3K, the EOS 5D is in a whole different price category, albeit still a relatively high one. It's still more than twice the price of an EOS 3. (Today, about $3,300 vs. $1,250—although I suspect that we'll start seeing deals and rebates for the 5D once it's not quite so new.)

Overall conclusion? Digital SLRs are approaching function/price parity with film bodies but they're not there quite yet. If we think of it as needing another Moore's Law doubling, that would put it at 18 to 24 months for parity—which is probably about right for another generation of camera body.

See a detailed review of the EOS 5D here.

Wednesday, November 02, 2005

Another Web 2.0 Definition

I don't really care for the "Web 2.0" nomenclature—although I suppose that most alternative neologisms (such as Read-Write Web) seem equally and painfully trendy and, thereby, hugely grating. Tim Bray argues against the very substance of the term. I'm not sure I go that far. Viewing the mainstream Internet explosion through the lens of the Web may be simplistic, but it also seems both accurate enough and useful as a way of distilling enormously complex trends. As I wrote about earlier, Kevin Kelly's article that anchors the Internet as most people think about it to the Netscape IPO seems "true."

But if Web 2.0 is some generally common thing, even if its borders are fuzzy, what is that? Tim O'Reilly's Web2MemeMap is one such attempt. (via Web As Platform from BEYONDVC). Let me suggest something simpler and more lighthearted for now. Try this on for size:

It's that collection of technologies and terms that 90 percent of the population over the age of 20 has never heard of.

Wednesday, October 19, 2005

Wikipedia mea culpa

I've picked at Wikipedia quality issues a few times of late, such as in this post. Andrew Orlowski has a nice piece in The Register on Wikipedia quality in which he notes
Encouraging signs from the Wikipedia project, where co-founder and überpedian Jimmy Wales has acknowledged there are real quality problems with the online work.

Criticism of the project from within the inner sanctum has been very rare so far, although fellow co-founder Larry Sanger, who is no longer associated with the project, pleaded with the management to improve its content by befriending, and not alienating, established sources of expertise. (i.e., people who know what they're talking about.)

Meanwhile, over at Smalltalk Tidbits, Industry Rants, James Robertson argues that it's really hard to identify true expertise, especially where topics are controversial. For example, historians still argue about the origins of World War I, much less recent U.S. presidential elections.

I can't argue with that. Nonetheless, many of Wikipedia's flaws that I've seen aren't about divining hard-to-understand causes and effects but about pretty basic matters of fact--and a lot of really bad writing.

Tuesday, October 18, 2005

Interruptions

"Continuous Partial Attention," the state of constant interruption that characterizes many of our lives get written about a fair bit in one form or another. See here, for example. Good article on the phenomenon in this New York Times article (with data. Perish the thought!)
Yet while interruptions are annoying, Mark's study also revealed their flip side: they are often crucial to office work. Sure, the high-tech workers grumbled and moaned about disruptions, and they all claimed that they preferred to work in long, luxurious stretches. But they grudgingly admitted that many of their daily distractions were essential to their jobs. When someone forwards you an urgent e-mail message, it's often something you really do need to see; if a cellphone call breaks through while you're desperately trying to solve a problem, it might be the call that saves your hide. In the language of computer sociology, our jobs today are "interrupt driven." Distractions are not just a plague on our work - sometimes they are our work. To be cut off from other workers is to be cut off from everything.

For a small cadre of computer engineers and academics, this realization has begun to raise an enticing possibility: perhaps we can find an ideal middle ground. If high-tech work distractions are inevitable, then maybe we can re-engineer them so we receive all of their benefits but few of their downsides. Is there such a thing as a perfect interruption?

Via this Joel on Software post

Good questions about Video iPods

I'm always a bit hesitant to comment on the usage models underpinning a lot of electronic gadgets when it's pretty clear that I'm not the target demographic. I own a couple of different iPod flavors (a Shuffle and a standard pre-photo 40GB one), but don't have a whole lot of interest in showing off photos on the little screen—much less video. However, I don't typically show snapshots around either or take a lot of video with my digital camera, so I've sort of shrugged and figured I'm just not the audience.

That may be true, but this article from MSNBC about the video iPod mirrors a lot of my thoughts. For example:
You can listen to your iPod at work, at home, at the gym, in the car. Basically, that means everywhere. But watching your iPod might not be as convenient. It will be tough to watch at work or while jogging. And in the car? Passengers, yes. Drivers, no.

Good questions.

[UPDATE: For the opposing view, here's a piece from Information Week.]

Thursday, September 29, 2005

The Post-Modern Equivalent of Brass Candlesticks

In this day and age, not many folks still have brass candlesticks to polish. But we apparently have a replacement activity: Apple nano polishing

via: Make.

Amen

The highlight of an interview with Jeff Jones of IBM Analyst Relations:
What are things you would change?

I would rewrite PowerPoint to allow no more than 10 charts in any presentation. I would rewrite Notes' calendar feature to disallow the creation of meeting invitations that lack at least five sentences of explanation as to the purpose of the meeting. I would also remove the recurring meetings feature of Notes' calendar.

I don't mind the recurring feature so much. (Although when I worked for a vendor, I did accumulate a lot of pointless weekly meetings over time so I know where he's coming from.) But "no more than 10 charts." Yes!

via helzerman.com

Knowing Cultural Context

Tokyo-based blogger Sean Kinsell eviscerates this WaPost story purporting to find a new girliness among Japanese men. For a primer on how important local knowledge is to understanding any supposed trend, this post is hard to beat.

Worthwhile reading. I don't have any good links handy but I can't count the number of times that I've read something which drew broad brush conclusions about behaviors or trends in a city or place. (Of course, local reporters do this too.)

via Virginia Postrel

Friday, September 23, 2005

Opting In and Opting Out

We're starting to hear the "Opt Out" defense a lot in some recent copyright cases, first with the Internet Library (which I discussed here, here, and here) and now with Google's tussle with the Author's Guild. As Tim at O'Reilly Radar comments:
Google's opt-out position is exactly the right one. If we were to wait for publishers to opt in, only current, in print works would get into the index.

I think he's almost certainly right. Because of inertia and fear of losing control of their IP, most publishers would probably think it simpler and safer to do nothing. However, as I commented earlier in the Internet Library situation, it's hard to see the legal significance of providing an opt-out mechanism. The sort of archiving that Google is doing may or may not fall under fair use (I don't really have an opinion on that), but letting publishers opt out is just a courtesy. Here's what an expert has to say (The Patry Copyright Blog--a great, if very detailed and technical, source for copyright discussions):
The legal issue remains the same, however: whether copying of an entire work without authorization is an infringement where the ultimate user is able to see only a few sentences of the original. Since fair use is an unconsented to use, the fact that publishers object doesn't matter, regardless of the chutzpadik way Google may have handled the issue (The Second Circuit is divided on whether bad faith is a fair use factor). And whether Google is actually an advertising behemoth that doesn't want its own service to be used to investigate itself, whether it is, therefore, a false great white knight in the culture wars (if there are any) shouldn't matter either.

Tuesday, September 20, 2005

More Wikipedia Weakness

It's often tempting to give Wikipedia a pass. After all, many of its most egregious errors get fixed over time--at least if the topic isn't controversial and the article has had enough time to "settle down" from breaking news or changing facts. But, as I've commented before here, here, and here, when Wikipedia is bad, it can be pretty bad.

Case in point, the other day I happened to run across an article on Data General AViiON servers, a topic with which I have a more than passing acquaintance. I was the product manager for the first AViiONs and handled marketing for many products of successive generations. Now this is the type of article on which one would probably be inclined to trust Wikipedia to get things more or less right. After all, it's an uncontroversial technical topic of the dead (but not too distant) past.

Don't trust those instincts. Apparently the article is also sufficiently off the beaten path that it hasn't had a chance to benefit from the Wikipedia "community" because it's rife with howling factual errors. It has SCO writing DG/UX (Data General's flavor of Unix) for example, contains basic misconceptions about Non-Uniform Memory Access (NUMA) architectures (an important piece of the AViiON line), and its storyline about DG's historical competition with DEC is pretty far off-base. But my intent here isn't to belabor the details--many will probably be fixed eventually, perhaps by me. And, I've seen trade press articles almost as bad. But this serves as yet another cautionary tale. Even where Wikipedia "should" be good, it often is not. Exercise appropriate caution.

Monday, September 19, 2005

DVD Wars

I'm not an expert on this stuff, but this writeup from Engadget seems to do a nice job of cutting to the basics of the Blu-ray vs. HD DVD fight (with a nice precis of DVD history into the bargain). Bottom line?
Still with us? No? Blu-ray discs are more expensive, but hold more data; there, that's all.

So now that you know why Blu-ray discs cost more and why Sony/Philips and Toshiba are all harshing on one another so much, we can get to the really important stuff: the numbers, and who’s supporting who.

Friday, September 16, 2005

Disk To Flash

Yes, it's been a while since I've posted. My bad. Lots of travel in various permutations of business and pleasure and the organizational wreckage that comes from such. (Intel Developer Forum in San Francisco, hiking around Mt. Shasta and Point Reyes, Sun's "Galaxy" launch in NY, etc.) But I'm more or less dug out now.

To me, one of the more interesting aspects of the iPod nano announcement was that it's replacing the mini in Apple's product line--and thereby, in a single stroke, shifting a huge chunk of portable MP3 player volumes from disk to flash. (The iPod mini uses a hard disk while the nano uses flash memory.) Does this presage even more shifts from spinning steel to silicon?

The short answer is probably yes. The longer one is a bit more complicated.

Certainly it's hard to see the shift being a complete one for many, many years to come. This post by Illuminata colleague Tom Deane gives some of the reasons. Price per bit is one big issue. Limited read/write cycles are another. Just as tape continues to exist alongside disks, disks will continue to exist alongside flash.

However, flash is well on its way to eclipsing disk for most really portable applications. Advantages like size, ruggedness, and lower power trump somewhat higher per-bit costs for the most part. The nano's a classic case study of the tradeoffs. Apple's taken the somewhat daring move of replacing its wildly popular mini with a device that's more expensive per unit of capacity today. It's betting that the svelteness and longer battery life of the nano make up for giving up storage. One can reasonably argue with the timing; I would probably disagree, but it's a plausible argument that Apple could have waited until some price crossover point was reached. But there can be little dispute that flash continues to take over disk territory in these handheld (and smaller) devices.

That's because Moore's Law is outpacing usage models. Human hearing isn't getting better. We're not better able to watch movies on a small LCD screen. Useful digital photo resolution hasn't plateaued yet but it's probably getting close. In other words, we're rapidly moving towards points where additional capacity in many types of devices becomes less and less important relative to other characteristics such as lightness of weight. As flash memories rapidly head into the multi-GB range, the size and number of MP3 files aren't increasing apace.

The replacement of disk by flash is one implication of these intersecting curves. Another is that silicon technology will increasingly not be a factor in how many functions can be crammed into a single device. Which isn't to say that everything will be. There are certainly user interface issues (which Apple can probably solve as well as anyone) as well as more subtle and less rationalist issues of style and brand.

Thursday, August 18, 2005

Scheduling Problems

Fellow analyst Stephen O'Grady asks "Why is Scheduling Still So Damn Hard?"


Think about how you schedule meetings with folks outside your own calendar system:
* Step 1: Determine your own availability
* Step 2: Communicate that availability to an external party; typically means cut and pasting or manually writing some openings into an email
* Step 3: If you're lucky, some of these work, and you receive a reply which requires you to create a new calendar entry
* Step 4: If you weren't lucky in Step 3, the available slots didn't work, and the external party has proposed some alternatives so you're back to Step 1. Rinse, lather, repeat.

Certainly one root of the problem is the lack of appropriate protocols and mechanisms to selectively open up our calendars beyond the firewall. It's yet another example of the horrible state of collaboration, Microsoft Office 2003 dinosaur ads notwithstanding. Yes, it would be nice to do the same sort of group scheduling we can do with Outlook/Exchange with folks at other companies--or indeed could do with proprietary office automation products like Data General's CEO, fifteen years ago. Yes, that would be a good start. But we should also set our sights higher because the Outlook way of doing things isn't all that scalable either. The ultimate goal should be a system able to make intelligent decisions for us rather than one that just presents us with a bunch of out-of-context data.

The real problem here is that computers are so bloody literal. They can schedule a block of time around already scheduled blocks of time, but that's about it. But scheduling is more complex than that. This isn't new. Consider the following from The Digital Deli, a marvelous look at PC culture circa 1984.
Next [in the list of computer applications to avoid] we come to the computerized electronic calendar. It doesn't let you make dates; you have to make "events." Can you imagine saying, "We fell in love on our first event"?

Worse, it suffers from that picky literal-mindedness of machine-think. For me, as for most folks, time unravels in a drinks-with-Chris-late-next-week sort of way. Electronic calendars are not that loose. To them, "late next week" means nothing. "Noonish" means nothing. They don't know about "happy hours," and they've never even met Chris. But "07-08-83, 6:00 P" they understand. No wonder we don't get along.

The electronic calendar also wants me to tell it just how long the "event" will last. The program divides the day into fifteen-minute blocks and has to know how many of those will be filled by my event with Chris. Now, I usually know roughly (within a half-hour, say) how things will go, but one has to be flexible on this sort of thing. Maybe an old mutual friend stops by the table, or we suddenly decide to go see a movie. Or there's a full moon out and it's a warm night ... You get the idea. Well, in electronic date-books, it's not enough to write "Chris" at 5:00. They want to know where you'll be at 5:15, 5:30, 5:45, 6:00. Sometimes I get the feeling this program was designed by somebody's mother.

I want a way to describe to the computer that I'd prefer not to have three meetings in a row, that I prefer not to have a call at 5pm but I can if there's no other choice, that I've got three hours blocked off for writing but I can take a call during that slot if it's important enough, and so forth. I'm not so naive as to believe that we can easily get a computer to actually grok all our preferences and options, but we should at least have such considerations in mind as we tackle the nearer term mechanical problems.
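These sorts of soft preferences can at least be sketched in code. Here's a hypothetical scoring approach (all names, hours, and weights are invented for illustration) in which candidate slots accumulate penalties against stated preferences rather than getting a binary free/busy verdict:

```python
# Hypothetical sketch: score candidate meeting slots against soft preferences
# instead of treating the calendar as binary free/busy. All names, hours,
# and penalty weights here are invented for illustration.

def score_slot(hour, busy_hours, importance):
    """Return a preference score for a slot, or None if it's unusable."""
    score = 0
    if hour in busy_hours:            # hard conflict: already booked
        return None
    # soft preference: avoid three meetings in a row
    if all(h in busy_hours for h in (hour - 1, hour - 2)):
        score -= 5
    # soft preference: no calls at 5pm, but allowed if nothing else works
    if hour == 17:
        score -= 3
    # writing block 9am-noon: interruptible only for important calls
    if 9 <= hour < 12:
        if importance < 8:
            return None
        score -= 2
    return score

def best_slot(candidate_hours, busy_hours, importance):
    """Pick the least-objectionable candidate slot, if any survives."""
    scored = [(score_slot(h, busy_hours, importance), h)
              for h in candidate_hours]
    scored = [(s, h) for s, h in scored if s is not None]
    return max(scored)[1] if scored else None

# 10am falls in the writing block (and this request isn't important enough),
# and 5pm carries a penalty, so 2pm wins.
print(best_slot([10, 14, 17], busy_hours={13}, importance=5))  # → 14
```

This is obviously a toy, but it captures the key idea in the paragraph above: preferences are weights to trade off, not walls.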

Losing Money on Volume

A few weeks ago I suggested that
at least relative to the Harry Potter books of the world, there's a bit less discounting competition on the Long Tail which should at least partially offset higher costs associated with lower volumes. With purely digital goods, the Long Tail comes even closer to pure gravy.

It turns out that for the blockbuster hits, discounting can hit profitability hard. That's because a "big box" reseller like Best Buy uses hot new DVD releases as loss leaders to bring customers into the store. It may, in fact, sell new DVDs below cost for a time. This practice presumably works out for Best Buy (otherwise one supposes they wouldn't do it) because enough people who come into Best Buy to buy the latest Star Wars DVD also buy another non-sale DVD, a CD, or an ink cartridge. However, in the process, Best Buy pretty much destroys (or at least greatly reduces) the profitability of new blockbuster hits for everyone else. Stores, both online and bricks-and-mortar, that drag along less other business in the wake of the bestseller can end up making relatively little money on their highest volume titles as a result. By contrast, smaller titles may have somewhat higher stocking and inventory costs, but tend not to be exposed to loss leader discount pressures from the big retailers.
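The loss-leader logic is simple arithmetic. A back-of-the-envelope sketch (every number here is invented for illustration, not actual Best Buy economics):

```python
# Back-of-the-envelope loss-leader arithmetic. All numbers are invented
# for illustration; none reflect any retailer's actual margins.

def basket_profit(loss_on_hit, attach_rate, attach_margin):
    """Expected profit per customer drawn in by a below-cost hit DVD.

    loss_on_hit   -- dollars lost selling the blockbuster below cost
    attach_rate   -- fraction of those customers who buy something else
    attach_margin -- average profit on that extra purchase
    """
    return -loss_on_hit + attach_rate * attach_margin

# Lose $3 on the hit, but 40% of buyers pick up a $12-margin extra:
print(f"${basket_profit(3.00, 0.40, 12.00):.2f} per customer")
```

A store without the attach sales (a small online DVD shop, say) eats the same competitive price pressure with an attach term near zero, which is exactly the squeeze described above.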

Wednesday, August 17, 2005

Code From Books

Dan Bricklin's latest Software Licensing podcast is with Tim O'Reilly, founder and CEO of O'Reilly Media. A chunk of it deals with a topic that was once very relevant to me. That's the question of what's allowable use for the code in a programming book.
Tim O'Reilly's short answer on his Web site is this:
You can use and redistribute example code from our books for any non-commercial purpose (and most commercial purposes) as long as you acknowledge their source and authorship. The source of the code should be noted in any documentation as well as in the program code itself (as a comment).

He then goes into a bit more detail. The bottom line is that using the code as part of a larger project is generally OK so long as proper credit is given. Competitive uses (for example, publishing a CD with the code snippets or including them in another book) are not. It's got some elements of the original BSD license (with advertising clause) with the important caveat that O'Reilly will frown upon uses that directly undercut the market value of its own (or the author's) products or services.

O'Reilly's position seems quite commonsensical. It also somewhat codifies what's long been common practice. My personal interest is that I once developed and sold a DOS file manager, Directory Freedom, which (to quote the docs):
originally grew out of a variety of programs which owe their "look and feel" to Michael Mefford's DR and CO utilities in PC Magazine Volume 6, #17 and #21. DF was most directly adapted from Peter Esherick's DC (Directory Control) version 1.05B. Peter helped get DF started by making the source code for DC available to me and has also shared some fixes which he has made in subsequent revisions of his program.

This type of reuse and adaptation was fairly common on a small scale in the days before today's Open Source movement hit in a big way. (I'm talking roughly the mid-eighties to mid-nineties here.) Code in magazines like PC Magazine, Dr. Dobbs, and PC Techniques was certainly copyrighted, but it basically existed to sell the magazines, as opposed to being commercial software in its own right. In practice, it was generally assumed by just about everyone that most uses of the code were proper, even if good manners suggested giving credit where credit is due.

One interesting aspect is that this code, while not licensed as Open Source, can in practice be used more flexibly than true Open Source licensed under a "viral" license like the GPL, which requires that derivative works be likewise licensed under the GPL. In fact, my Directory Freedom could not have been a shareware product had the PC Magazine utilities been explicitly under the GPL rather than the implicit loosey-goosey de facto BSDish terms under which they were actually published and used.

Tuesday, August 16, 2005

NPR On Demand?

Podcasting and related technologies have been called the "End of Radio." I haven't been buying it, at least for the most part. Given that podcasts have to be consumed in real time, a person can consume far less audio content than is the case with inherently skimmable written blogs. As a result, I've previously argued that:
as professional broadcasters like the BBC start putting content on the air, (e.g. "In Our Time") many--probably most--people will largely devote their limited audio-listen minutes to professionally-produced broadcasts. Call this podcasting if you like, but it's really just on demand radio as you can record more crudely today with software like Replay Radio.

Conspicuously absent from such on demand radio has been NPR which, like the BBC, would seem to have many programs tailor-made for the purpose--both highly topical and less so. (I'd argue that, given the current state of the technology, in which several manual steps are needed to sync a program, programs that can be listened to a week or a month after broadcast are preferable for most people.)

Well, it looks as if NPR may not be totally clueless after all. It's apparently decided not to renew its contract with Audible, the maker of lame, proprietary audiobooks and the like. (In all fairness, Audible was long about the only game in town.)
As we formulate a more comprehensive strategy, we chose not to renew our agreement with Audible when it recently expired. We are now developing a new strategy for making NPR content downloadable and portable. Once the plan is finalized, we will announce it publicly.

via O'Reilly Radar

Dan Bricklin on Software patents

Dan Bricklin weighs in on software patents in this post.

Dan has some history here. As the creator of VisiCalc, he's previously written about why he didn't patent that software. His latest rejoinder is in reaction to:
Russ Krojec's blog entry "What if VisiCalc was Patented?" Russ argues that, since VisiCalc wasn't patented, competitors found it "...safer to copy the currently winning formula and avoid having to innovate. In this case, the lack of patents brought innovation to a standstill and we are all running spreadsheet programs that still operate like 25 year old software."

Dan correctly points out that Russ' position just doesn't square with historical facts--and demonstrates thereby the value of having an understanding of history. Back in the DOS days, I clearly remember any number of programs that were perhaps inspired by the spreadsheet metaphor but deliberately took different paths. There's some discussion of different approaches and programs here. A number of the alternatives were more explicitly multi-dimensional or iterative or capable of solving more flexibly-designed equations than conventional spreadsheets, but none were particular successes. Products that were very direct VisiCalc successors (basically Lotus 1-2-3 and then Excel) ruled instead, but it wasn't because there weren't alternatives. How come?

One reason that I'm pretty confident in giving is that, as the PC became more widely used outside of hobbyists and specialists, it forced a certain regularization of applications, for lack of a better term. Suddenly, the computer unsavvy needed to use these apps. A whole ecosystem of specialized training classes, books, and support systems sprung up. This tended to marginalize mainstream software that broke with established models. It was hard enough to teach people new command codes (which were highly irregular in the days before Windows), much less radically different mental models for how software worked.

Another reason is a bit more philosophical and I'm correspondingly less sure about it. But perhaps the spreadsheet was just a metaphor and model that really worked and connected with people--and the alternatives were just more complex variations on a theme that generally detracted rather than improved. In fact, most of the "innovation" around spreadsheet replacements has since been replicated in mainstream spreadsheets (e.g. for multi-dimensional, think pivot tables). And a tiny percentage of spreadsheeters use any of these capabilities--at least on a day-in, day-out basis. Furthermore, like word processors, the spreadsheet had a familiar physical analog--the accountant's ruled sheet.

I'm not convinced that software patents are bad in toto, however flawed the current system may be. But to say that more patents would have spurred greater invention in desktop productivity software just doesn't have a historical basis. And, indeed, at least with the reality of today's overly broad patents, it seems likely that just the opposite would have been the case with every remotely-related product litigated.

Thursday, August 11, 2005

The Web 2.0 Debate

Overheated blog conversations often get more wrapped up in the terminology than the "thing" itself. I've written about past transgressions like folksonomies. Tim Bray tackles the debate around Web 2.0. Spot on. There's meat here but lose the freekin' buzzwords.

The Four Seasons and Generic Luxury

I like reading View from the Wing, even if (fortunately from my perspective) I'm not a serious enough road warrior to appreciate or take advantage of a lot of his advice. Sometimes I have to laugh out loud though:
Clearly this is a Four Seasons, but which one? In their zeal to determine what Thomas Jefferson’s bedroom might have looked like if he had had electricity and modern plumbing, Four Seasons has stumbled into the sort of routinized design philosophy embraced by mid-market chains without the wherewithal to spend tens of millions building a hotel. I can only assume that the Four Seasons interior design team was let go in a corporate downsizing and their last cruel act was to commit the company to a 20 year supply of fake chesterfield TV cabinets fitted with mini-bars.

To the list of really good Four Seasons properties, I'd add The Olympic in Seattle. (Especially nice after a week of climbing or backpacking.) The last time I was there, I drove up with this dust-covered rental car. I'm sure it took all the doorman's poise to not back away and avoid messing up his nice uniform.

Risks of Blogging

There's a good interview with Sun's Tim Bray, Director of Web Technologies, and Simon Phipps, Chief Technology Evangelist (Love that title!). Tim says in response to a question about the risks of blogging:
Yes, there are potential risks. But at the moment, a year into this, I would say that we are seeing almost all reward and no downside, so whatever potential risks there are, none of them have come forth yet. As one of our smart legal staff pointed out when we were starting to work on the policies, if we had come to a lawyer 15 years ago and asked him what he thought about e-mails he would have been horrified at the idea!

Many in the computer industry have been using email for longer than that; when I joined Data General almost 20 years ago, CEO (Comprehensive Electronic Office--a minicomputer-based integrated email/word processing/calendaring package) was already an ingrained part of the culture. However, in my previous job in the oil drilling business, there was no email and every memo--which is to say pretty much all written communications--had to be signed off by the appropriate level of authority. So, I certainly agree with the sense of Tim's comments. There's been a lot of loosening up just about everywhere. The convenience of email is too great and it basically doesn't work if it has to always go through channels.

That said, a lot of these discussions take place in the context of the technology industry and the Coasts. This SF Chronicle article, for example, is generally very upbeat--but most of the cases that it discusses are in the Valley where companies are often looser than elsewhere. I'm not about to argue a "blogs are risky" meme, but counsel that the standards of the tech elite don't necessarily apply elsewhere.

Thursday, August 04, 2005

The Long Tail For Quants

As regular readers know, I studiously avoid a lot of what I consider to be "blogosphere" hype--including the term blogosphere itself and such favorites as folksonomies and tags. However, the "Long Tail" remains a particularly powerful concept for me, in part because it ties directly into the business models of the likes of Netflix and Amazon rather than just a generic "linkiness is good." Yes, the partially ego-boo-driven reviews on IMDB, Amazon, and Netflix are part of what make the Long Tail possible--as Irving Wladawsky-Berger describes in the comments to this post--but it's the sales themselves that are the Long Tail.

That makes data like this, which quantifies the Long Tail, particularly valuable. Individual data points that are arrived at in indirect ways are always a bit suspect, but there's an increasing body of research indicating a Long Tail at online retailers like Amazon--in this case meaning titles ranked below the top 100K (roughly comparable to a large brick-and-mortar inventory)--accounting for in the vicinity of a quarter to a third of total sales. The latest data corrects earlier figures which suggested a possibly (much) higher number, but even a quarter of Amazon's sales is still a substantial number if those sales can be delivered at modest incremental per-unit cost. (I also suspect that, at least relative to the Harry Potter books of the world, there's a bit less discounting competition on the Long Tail, which should at least partially offset the higher costs associated with lower volumes. With purely digital goods, the Long Tail comes even closer to pure gravy.)
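For readers who like to play with the numbers, here's a back-of-the-envelope sketch. If sales by rank follow a Zipf-style power law--an assumption for illustration, not Amazon's actual distribution, and the catalog size and exponent below are likewise made up--you can estimate what fraction of sales falls below a given rank:

```python
# Toy Zipf model of sales by rank. The exponent (1.0) and the catalog
# size are assumptions for illustration, not real Amazon figures.
def zipf_tail_share(n_titles, head_rank, s=1.0):
    """Fraction of total sales from titles ranked below head_rank."""
    weights = [1.0 / (rank ** s) for rank in range(1, n_titles + 1)]
    return sum(weights[head_rank:]) / sum(weights)

# A hypothetical 2-million-title catalog with a top-100K "head":
share = zipf_tail_share(2_000_000, 100_000)
```

With these made-up parameters the tail works out to roughly a fifth of sales--in the same ballpark as the research figures, though the point is the shape of the curve, not the exact number.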

Wednesday, August 03, 2005

10 Years That Changed the World

It's a bit ironic that, these days, Wired remains one of the few magazines that I still receive in dead-tree form. I like its mix of longer stories and shorter items, and its sense of style helps keep me from just going online. This month's cover story, 10 Years That Changed the World, is written by Kevin Kelly, who helped found Wired and was its first executive editor. It's one of the best single pieces I've read about the years since Netscape's IPO, in part because Kelly crystallizes certain underpinning concepts and dynamics of the Internet and the Web with considerable clarity.

For example, on the way that the Web turned the creation of content on its head:
Problem was, content was expensive to produce, and 5,000 channels of it would be 5,000 times as costly. No company was rich enough, no industry large enough, to carry off such an enterprise. The great telecom companies, which were supposed to wire up the digital revolution, were paralyzed by the uncertainties of funding the Net...Netscape's public offering took off, and in a blink a world of DIY possibilities was born. Suddenly it became clear that ordinary people could create material anyone with a connection could view. The burgeoning online audience no longer needed ABC for content. Netscape's stock peaked at $75 on its first day of trading, and the world gasped in awe. Was this insanity, or the start of something new?

And he has some marvelous turns of phrase:
But if we have learned anything in the past decade, it is the plausibility of the impossible.

Even in an industry so accustomed to hype that almost every announcement is pitched as a new paradigm and a world-changing event, the mainstream Internet and the Web caught many of us unawares. So much seemed impossible. For lack of a better hook, the Netscape IPO was a "Day the Universe Changed," to use the title of James Burke's old BBC series.

Tuesday, July 26, 2005

Wikipedia Data and Anecdotes

I like both data (that is, generalized data from lots of people) and anecdotes (specific data from a few). Here's a bit of both about a topic that I follow with interest--Wikipedia. Why the interest? Well, for one thing, I find it a personally useful resource. For another, I consider it a bit of a bellwether of interactive collaboration.

Here's the home page for the stats. Diving down a bit, it looks like there might be at least some suggestion that the rate of new article generation is slowing (at least in English), although I'd hesitate to draw any real conclusions before seeing more months of data.

On the anecdotal side, Tim Bray points out some of the usual problems:
Dave Winer's right, the Wikipedia's article on RSS is a crock. Dave's gripe is that it's "highly political", mine is that it's just wrong: for example, the introductory bit suggests that full-content feeds are impossible. Also, it's badly-organized. Dave's problem is going to be harder to address because RSS itself is highly political; but at least the political narrative should be coherent. Anyhow, it would be nice if someone level-headed were to take responsibility for it. I currently ride herd on two or three other articles and that's all my Wikipedia cycles. It's not as hard as you might think, and here's why: the kinds of people who want to put stupid, irrelevant, badly-written junk in the Wikipedia in my experience are easily discouraged. Just hang in, keep on fixing things they break and explaining why in a calm tone of voice on the Discussion page, and pretty soon they go away.

I'm not sure I fully share Tim's "it will work out" faith. That said, I think it reinforces the view of Wikipedia as a valuable resource--but not a totally dependable one.

Wednesday, July 20, 2005

Some More on Digital Archives

Although not directly on point to the current Internet Archive case, this piece, written by Adam Mathes of the Graduate School of Library and Information Science, University of Illinois Urbana-Champaign, has some interesting discussion about archiving software programs for preservation purposes--as well as current exemptions to the DMCA that aid in that effort.
In addition to processing issues, this brings up some of the legal issues involved in the collection. The mere act of copying these digital works, especially for the eventual purpose of enabling access on a different hardware platform, should arguably be considered a fair use. However, if these disks have "copy protection" schemes, even outdated ones that can be bypassed, care must be used to make sure the collection does not run afoul of the Digital Millennium Copyright Act (DMCA). Although recently Archive.org was given an exemption for particularly this reason it presents a considerable barrier and must be dealt with. (1) It may be helpful to amass multiple copies of the works in many formats to further bolster the legal backing to shift and archive the materials.

See also this reference:"Internet Archive Gets DMCA Exemption To Help Archive Vintage Software." Internet Archive. 2003. February 23, 2004.

Friday, July 15, 2005

The Internet Archive continued

In response to my last post, fellow analyst James Governor speculates that libraries like the Library of Congress might be one possible analogy.

If, for purposes of argument, we consider the Internet Archive a library, that could well grant them some exemptions to the rights conferred to content creators by copyright law. Consider this from Section 108 of the U.S. Copyright Act:
§ 108. Limitations on exclusive rights: Reproduction by libraries and archives
(a) Except as otherwise provided in this title and notwithstanding the provisions of section 106, it is not an infringement of copyright for a library or archives, or any of its employees acting within the scope of their employment, to reproduce no more than one copy or phonorecord of a work, except as provided in subsections (b) and (c), or to distribute such copy or phonorecord, under the conditions specified by this section, if—
(1) the reproduction or distribution is made without any purpose of direct or indirect commercial advantage;
(2) the collections of the library or archives are
(i) open to the public, or
(ii) available not only to researchers affiliated with the library or archives or with the institution of which it is a part, but also to other persons doing research in a specialized field; and
(3) the reproduction or distribution of the work includes a notice of copyright that appears on the copy or phonorecord that is reproduced under the provisions of this section, or includes a legend stating that the work may be protected by copyright if no such notice can be found on the copy or phonorecord that is reproduced under the provisions of this section.

See also here and here for various pointers about copyright law as it applies to libraries. (By the way, there's nothing in there about the content creator having to give permission or being able to withdraw permission--count that as another strike against robots.txt having any significance in this case.)

However, as with many things digital, I'm still a bit suspicious of physical world analogs. Not just because of the different nature of the media, but also the nature of the institution. We can all agree that the Library of Congress is a Library and that Widener at Harvard is a library and even little Thayer Library in my town of Lancaster is a library. And it's perhaps not too much of a stretch to see the Internet Archive as a form of library. But what if I were to declare my own little web site a library and compile Dilbert cartoons there? (I picked this as an example of content that's posted publicly but only for a limited time.) My guess is that Scott Adams might not approve. Yet, what makes the Internet Archive different in any fundamental way?

Thursday, July 14, 2005

Thoughts on The Wayback Machine Kerfuffle

The Internet Archive, a.k.a. the Wayback Machine, is being sued by a firm called Healthcare Advocates for storing copies of old web pages. (See Good Morning Silicon Valley, for example.) These archived pages are causing the company heartburn in a separate trademark dispute, so it's unhappy. Further, for some reason, the pages were allegedly stored in spite of a "robots.txt" file flagging them not to be archived, cached, spidered, etc.

The case has generated the predictable throwing up of hands in disgust throughout the online world. As Good Morning Silicon Valley's John Paczkowski succinctly puts it: "Uh, you published that information to a public medium ..." Now, I'm certainly sympathetic with the Internet Archive here. At some level, the archiving and caching of publicly-displayed web pages seems almost part of the fabric of the Web and the way it works. However, I'm less convinced than some others that this is Much Ado About Nothing. I preface the following comments and observations with a standard "I Am Not a Lawyer"--and would welcome any on-point case law that might be relevant here.

I think we can all stipulate that web pages and such are copyrighted material and freely displaying them to the public doesn't reduce or eliminate that copyright in any way.

I do agree with John that the robots.txt angle seems a bit wacky.
Why? The robots.txt protocol is purely advisory. It has no legal bearing whatsoever. "Robots.txt is a voluntary mechanism," said Martijn Koster, a Dutch software engineer and the author of a comprehensive tutorial on the robots.txt convention (robotstxt.org). "It is designed to let Web site owners communicate their wishes to cooperating robots. Robots can ignore robots.txt."

Ignoring robots.txt may be bad manners, but it's hard to see the legal significance. (There are perhaps analogs in physical trespass laws--posting your property and the like--but my understanding is that the details of such are typically governed by explicit state and local laws.)
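The advisory nature is easy to see in code. Python's standard library, for instance, ships a robots.txt parser--but nothing about it compels a crawler to honor the answer. (The rules and URL below are just illustrative; "ia_archiver" is the crawler name the Internet Archive has used.)

```python
# robots.txt is a purely client-side convention: a crawler must choose
# to consult it. The rules and URL here are illustrative, not real.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: ia_archiver",  # the Internet Archive's crawler name
    "Disallow: /",
])

# A cooperating crawler asks first...
allowed = rp.can_fetch("ia_archiver", "http://example.com/page.html")
# ...and skips the page if the answer is False. An uncooperative
# crawler simply never makes the call; nothing stops it.
```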

However--and here I perhaps stray into less charted territory--what exactly gives anyone permission to copy and archive web sites in the first place? Certainly, there's no explicit permission, like a negative robots.txt file, that affirmatively grants the right to replicate, store, transmit, archive, etc. web pages. I suppose the theory is that there is some sort of implicit permission based on custom and social contract. Which seems a rather loosey-goosey state of affairs.

I can't think of any really good analogs here. Yes, I can record TV and radio--but only for my personal use. It's quite well established that I can't put those recordings on a server for all to access. Usenet postings might be the most analogous situation; they're now archived as Google Groups and in more fragmentary form elsewhere. However, as far as I know, the legal status of Usenet and other types of online postings doesn't have much case law underpinning it. Furthermore, I think one could easily argue that such postings have a more explicit element of transmission of content out into the world--with the full knowledge that said content will be forwarded and stored for at least some interval--than Web pages, which reside on a controlled site.

Nor can I see the exemplary historical service that the Internet Archive is providing with its activities having any bearing. "Preservation of the past" may be a social good, but it's got little to do with copyright law. After all, Abandonware has the same legal status as any other warez in the absence of the copyright owner's explicit permission to release it into the wild.

From where I sit, robots.txt certainly seems like a red herring in this case--given the lack of laws compelling its observance. But there's a much larger issue of caching and archiving that seems to rest on very sandy foundations.

Monday, July 11, 2005

Podcasting Redux

Podcasting continues to be a beloved trend of the plugged-in elite. I've commented rather dismissively about it before. Since then I've spent more time checking out the various podcasting options--both software and content. Have I revised my opinion? Not really, I still think that there are some fundamental reasons why podcasting won't have the impact of text-based RSS. Which is not to say that podcasting doesn't have merits within a limited scope.

Chris Anderson at The Long Tail gives three reasons why podcasts aren't a big deal (yet).
  1. They don't have internal permalinks to section and subjects, so they don't get much link-love.

  2. They aren't searchable. How hard would it be for some service to run podcasts through a quick-n-dirty voice recognition program to autogenerate transcripts? They don't need to be exactly right; 80% accurate search is better than the 0% we've got now.

  3. They're meant to be consumed linearly, and pretty much at the (agonizingly slow and amateurish) pace they were created. Who, aside from trapped commuters, has time for that?

These comport with my impressions, but it's the linear consumption that's the real killer. Dave Winer, who wrote the RSS 2.0 specification, described the web as a "skimming" medium on this Steve Gillmor podcast, and you can't really skim audio feeds effectively. As a result, you end up selecting a few favorite programs that you might listen to during audio-friendly periods--which is to say, typically driving in the car. And, guess what: as professional broadcasters like the BBC start putting content online (e.g. "In Our Time"), many--probably most--people will largely devote their limited audio-listening minutes to professionally-produced broadcasts. Call this podcasting if you like, but it's really just on-demand radio, which you can record more crudely today with software like Replay Radio. (By the way, NPR, get with the program!)
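Anderson's second point--that even rough transcripts would make podcasts searchable--doesn't require anything fancy once you have the text. Here's a minimal sketch; the episode names and transcript strings are invented for illustration, and real transcripts would come from a speech-to-text pass and be only partly accurate:

```python
# A trivial inverted index over (hypothetical) podcast transcripts.
# Even an 80%-accurate transcript beats the 0% searchability we have.
from collections import defaultdict

transcripts = {
    "episode-01.mp3": "today we discuss the long tail and online retail",
    "episode-02.mp3": "an interview about rss syndication and blogging",
}

index = defaultdict(set)
for episode, text in transcripts.items():
    for word in text.split():
        index[word].add(episode)

def search(term):
    """Episodes whose (imperfect) transcript mentions the term."""
    return sorted(index.get(term.lower(), set()))
```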

So when are podcasts good? I can think of a few things, both based on my personal experiences and things I've read about.

Business uses--for example, a weekly "broadcast" to a sales force. This is sort of a special case of "content to listen to while commuting/driving."
Interviews with the sort of specialists who don't make it onto mainstream broadcasts in any depth are certainly interesting. Steve Gillmor's and Dan Bricklin's podcasts are good examples of this. Talks at conferences are another good example--as at IT Conversations.

Thus, I certainly don't argue that podcasting is "bad" or useless. I like on demand listening and RSS syndication provides a handy mechanism to more easily (if hardly automagically) get updated audio content from favored sources to my car. But I'll continue to argue that it remains a largely peripheral trend rather than the "end of radio."

Tuesday, June 21, 2005

A New Age For Movies

Irving Wladawsky-Berger at IBM has recently posted a couple of eloquent blogs about how the Internet has made movie-watching just that much more enjoyable. First of all, there are all of the reviews and the commentary--such as IMDB. Like Irving, I find Roger Ebert's reviews on target (almost all the time). In fact, though it would probably horrify many in the "Ivory Tower," I think Roger Ebert may well be one of the best essayists writing today. There's something nice about not only getting input on which movies to see, but also, after watching a film that really connected, having a virtual "discussion" on various aspects of it. What did such and such mean? How did other people react? It doesn't even require active participation.

Then there's Irving's discussion about Netflix. Walmart is out of the game, as I wrote about earlier. The latest buzz is that, although Amazon has a movie rental business in the UK, they won't be following up with a US version. In any case, Netflix provides a great way to invite almost every available DVD into my house, without fuss and without muss. It's partly about eliminating "stuff" from my things-to-do list. (I'll be posting on Getting Things Done one of these days. Don't worry: I'm not evangelical about it.)

But it's also about being able to easily depart from mass-market tastes and indulge an exploration or an oddball interest. It's The Long Tail, if you want to be jargony about it. But it's ultimately "On Demand"--when you want, what you want. (Which, perhaps, is why Irving is so fascinated by it.)

Thursday, June 16, 2005

Another Wikipedia Data Point

Although I've picked and probed at Wikipedia here and here, I find it a useful tool. But I think it important to understand its limitations and boundaries. An interesting post from Nathaniel Ward at Dartmouth in that vein:
While doing research for the upcoming issue of The Review, I proposed to Executive Editor Scott Glabe that Wikipedia, a free communal encyclopedia, is a terrible resource. Since anyone can change any entry, a user cannot know whether the current version is accurate. Scott retorted that the encyclopedia's communal nature meant that all errors would be corrected in minutes.

It turns out I was right: it took almost 24 hours for users to notice and correct a deliberate error that made out Benjamin Franklin as the inventor of the printing press.
Although I have somewhat mixed feelings about Wikipedia "vandalism," even for valid research purposes, and calling it a "terrible" resource seems an overstatement, the data point still seems valid enough. (The "vandalism" in question inserted a common incorrect answer from The Dartmouth Review's recent poll of Dartmouth students about various Western cultural questions.) The highly democratic Wikipedia community implies both profound strengths and significant weaknesses.

Wednesday, June 15, 2005

Dartmouth Review 25th

This will be about as far as I get from IT here. But, given that it's the 25th anniversary of The Dartmouth Review, a newspaper I helped found, and that Dartmouth has just had a very significant trustee election, I guess I should say something.

TDR wasn't the first of the alternative right-of-center or libertarian college newspapers of the early eighties, but it became one of the more influential, if not one of the better mannered. I got involved with the founding through a friendship with Greg Fossedal, who was then an editor of The Daily Dartmouth, the campus daily. I wrote about it long ago here. The precipitating event was the election of an "unofficial" trustee candidate. It's thus perhaps more than a bit fitting that we've just seen Todd Zywicki and Peter Robinson elected as the latest slate of trustee candidates running in opposition to the official ones. (T.J. Rodgers, the CEO of Cypress Semiconductor, was elected in opposition to the administration in 2004. It has not been a good couple of years for the official slate.) Their positions haven't been political per se. Rather, they've spoken to directions that Dartmouth should take and principles that Dartmouth should have.

I won't go further into the arcana of Dartmouth governance and policies here. Trustee disputes at Dartmouth go back to the Dartmouth College case, argued by Daniel Webster and a seminal US Supreme Court case in contract law. Suffice it to say that it's somewhat satisfying to see an institution like TDR still involving students 25 years later. It's never been about shadowy outside funding sources--TDR has always been supported primarily by Dartmouth's own alumni--but, rather, about students thinking and writing and questioning conventional wisdom. Which seems for the good. Wah-Hoo-Wah.

Tuesday, June 14, 2005

New PVR Blog

As some of my readers know, I've been fooling around with building up personal video recorders for quite a while now. In general, I still prefer my TiVo, which just takes less mental energy to use on a day-in, day-out basis than PVR software running on a general purpose computer. However, that said, BeyondTV from Snapstream has matured considerably and is now quite stable and functional. I still don't find it to be the "just works" experience that my TiVo provides but it's a good, if not great, PC-based alternative and is, in any case, the best PC PVR software I've run across.

Snapstream now has a company blog, which is worthwhile viewing if you have an interest in PVRs and, certainly, if you're a Snapstream customer. PVRblog is must reading for more general coverage of the topic.

Now That's An IT Failure!

Not traditional IT to be sure, but an automated system nonetheless--the infamous Denver baggage-handling system. I still remember the first time I traveled through the Denver airport with its new system. You had to find a special line to stand in if you had odd-shaped luggage. Odd-shaped luggage like... skis. In Denver. Hmm.

Well, it's no more.

United Airlines has decided to stop using its controversial automated baggage-handling system at Denver International Airport, reverting to a conventional manual system by the end of 2005. The automated system (which began operation in 1995) never lived up to original expectations. It had enormous difficulties in its early days, including construction delays, cost overruns, lost bags, damaged luggage, derailed cars, traffic jams, upgrade problems, political battles, and so on. (For example, see RISKS-17.61 and 18.66). United is apparently obligated to pay $60 million a year for another 25 years under its lease contract with the city of Denver (which owns the airport). However, United expects to save $1 million a month in operating costs by NOT using the automated system. The airport cost $250 million to build (BAE Automated Systems of Dallas, no longer in existence), and the city reportedly put up another $100 million for construction and $341 million to get it to work. [Source: AP item, 7 Jun 2005; PGN-ed] http://msnbc.msn.com/id/8135924/

via RISKS Digest

Friday, June 03, 2005

Recovery through genocide?

Good freakin' Lord. Why not just lay everyone off while they're at it? History may or may not show Sun's acquisition of StorageTek to ultimately have been a good move. I think it's potentially a significant win but won't be easy to pull off. (See my comments at IBD and ZDnet. My Illuminata colleagues have also posted in a similar vein.) But, whatever the doubts and concerns about the StorageTek approach, getting rid of all Sun's employees hardly seems a viable recovery strategy either.
"We do question the rationale of a transaction which reduces Sun's cash hoard by 40 percent, and does nothing to re-ignite revenue growth or profitability," Steven Fortuna, an analyst at Prudential Equity Group, said in a research note. "We would rather have seen the company buy back a billion shares and fire 10,000 people."

Tuesday, May 31, 2005

Taking Notes

Not to, in any way, equate myself with Fischer Black (of Black-Scholes option pricing fame), but I was struck by a recent posting in Marginal Revolution about how he took notes:
He did almost all of his work in an outlining program called ThinkTank, which he used as a kind of external associative memory to supplement his own. Everything he read, every conversation he had, every thought that occurred, everything got summarized and added to the data base that swelled eventually to 20 million bytes organized in 2000 alphabetical files...Reading, discussion and thinking that Fischer did outside the office was recorded on slips of paper to be entered into the database later. Reading, discussion, and thinking that took place inside the office was recorded directly. While he was on the phone, he was typing. While he was talking to you in person, he was typing.
I'm not quite back at the ThinkTank stage--even if I remember the program (a "terminate and stay resident" (TSR) outliner that essentially simulated multi-tasking in a single-tasking world)--although my sometime preference for text editors over mind maps may seem in a similar vein. However, such assiduous notetaking does reflect how many of us work as we transition to a world in which our role is increasingly to synthesize vast amounts of data. It's really handy to have this sort of external memory in accessible and searchable form.

When I started as an analyst, I took old-fashioned longhand notes. I'm an enthusiastic convert to the electronic sort. It's just so handy during a briefing to pull up the notes from a prior meeting and ask: "So, when we last met, you said that XYZ was going to be the next big thing. Whatever became of that?" :-)

Convergence

The term "convergence" may be passe these days, but the question of which devices will fold into other devices is still of immense interest to a lot of people. Especially because it's all more than a little bit mysterious. I suspect that's in no small part because the (in many cases) engineers trying to figure all this stuff out are being entirely too rational about it. They tend to ignore style and fashion, which aren't exactly rational after all.

Let's look at a concrete example. Why haven't MP3 players folded into cell phones in a bigger way? After all, cell phones are ubiquitous, and you don't need more than a bit more memory and another chip or two to make them do double duty as an MP3 player. Sure, there are some business issues--the carriers subsidize cell phones and aren't going to be exactly thrilled about subsidizing other types of gear that don't bring them services revenue. And some practical ones--does everyone want to drain their phone's batteries playing songs?

But we're heading down the rational path. I was informed by a college freshman of some much more fundamental reasons this past weekend: iPods are cool and small cell phones are cool. The mathematical corollary, I suppose, is that big cell phones playing MP3s are decidedly not.

Q.E.D.

Folksonomies?

John Dvorak's been writing about computers since the early days of PCs. Does he like to be controversial? Sure. That's his thing. And is he sometimes WAY off base? That too. But that more or less comes with the territory when you've been writing for a long time about a wide span of topics--at least some of which you aren't exactly an expert in.

Dvorak's been heating up the Linux zealots of late with his reportage of the PJ-O'Gara tiff. Seems pretty accurate, if a bit over the top. However, it's his latest critique of "tags" that really caught my eye. With all due respect to the fans of high-tech "blogosphere" linkages, John nails the hype.
The "folksonomy" notion is the bloggers' last hope of invention, although it's a rewrite of the prebubble "semantic Web" technology at best. And it too is doomed to failure. The utopianism and idealism that exist in the online societies ignore the real problem with tags, metatags, übertags, folksonomies, and the like. This is because they honestly think that most people are goodhearted. The online world, because of its anonymity, encourages bad behavior. "You suck!" is a common post, and it would be the number-one tag if tagging ever became popular. Then would come the tags about "Online Casino!" One site promoting folksonomies is the darling of the bloggers: Flickr.com—an excellent photo-sharing site where being in perpetual beta is a marketing tool.

Interestingly, the ever-insightful Clay Shirky seems to at least sort of endorse the idea of tags in a recent post although he has skewered the "semantic web" in the past. I suspect that there's a bit of a definitional issue going on here. But that's the problem isn't it? When tags become both everything and essentially nothing (i.e. keywords), they lose much of their significance.

My current feeling is that extensive linking, and the fact that digital documents have no need for a single physical place, means that keywords are preferable to rigid hierarchies. But to extrapolate from there to deep meaning for those keywords across individuals and communities seems a bit much. Keywords (or tags if you prefer) yes. Folksonomies, no.
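For what it's worth, the keywords-over-hierarchies point is easy to sketch in code: a flat tag index lets one document live under several keywords at once, something a rigid folder hierarchy can't do. (The documents and tags below are invented for illustration.)

```python
# Toy illustration of keywords vs. hierarchies: each document can
# carry several tags at once, where a folder tree would force it
# into exactly one place. Documents and tags are made up.
docs = {
    "trip-report.html": {"travel", "photography"},
    "camera-review.html": {"photography", "gear"},
}

def tagged(tag):
    """All documents carrying a given tag--no single 'right' folder."""
    return sorted(doc for doc, tags in docs.items() if tag in tags)

photo_docs = tagged("photography")  # both documents qualify
```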

Thursday, May 26, 2005

BBS Documentary

For the young 'uns that's Bulletin Board Systems.

Before the Internet was democratized, there were BBS's. At their largest, they were big multi-line operations. My service of choice was Channel 1 Communications in Cambridge, MA. Communications software like Telix and off-line news readers like Qmail let people snarf the contents of various discussion forums onto their PCs. After all, telephone rates were higher back then. BBS's were also where Freeware, Shareware, and less savory files and programs were exchanged. (Mostly legal in the case of the big operations, less so in the case of smaller and shadier ones.)

Compared to today's Internet, BBS's could be much more intimate and could even be the hub of a sort of local community given that the reality of phone costs tended to keep a lot of the membership relatively local. (The larger BBS's also participated in various networks of discussion boards but they also had non-networked boards strictly for the "locals"--i.e. those who dialed in directly.)

There's now a documentary out about those days. I haven't seen it and haven't heard any reports, but if it's any good it could be quite the nostalgia trip for some of us. I've thought of doing a book on some of the social communications history of the computer age, but I haven't made any real progress on doing so.

Monday, May 23, 2005

So Apple's Jumped On the Podcasting Bandwagon

Actually it was more like a ginger step, but the Steve says that iTunes 4.9 will get podcasting support. I suppose it's the least they could do given that they have this buzzed-about phenomenon further building up their iPod brand even though podcasting has basically nothing to do with either Apple or the iPod.

Make that overhyped buzzed-about phenomenon. I'm going to be a party pooper here and suggest that podcasting is not going to get anywhere near as big as ordinary blogging. Which, by the way, is overhyped too, but does seem to be a legitimate phenomenon, albeit one that's not as widespread or influential as some of its practitioners think it is. But back to podcasting. Why the yawn? Let me posit a few reasons:
  • It's harder to do well than written content. There are lots of technical issues as well as strictly content-related ones.
  • Power Laws are going to make it real difficult for thousands of broadcasters to find an audience. It's just much harder to "skim" through audiocasts in search of gems than it is through regular blogs. I have to be pickier.
  • It's not exactly hard to get a podcast onto your portable flash memory music thingamajig. But it does take several steps more than just turning on the freakin' radio. When we get to the point that your home computer handles this all automagically between itself and its counterpart in your car, OK. (But then will we just want on-demand versions of more commercial fare for the most part?)
I have to admit that I don't like even professionally-done talk radio for the most part. OK, maybe that's a big reason that I'm pretty indifferent about podcasts. But I think there are other reasons to think they won't be the "next big thing" too, Wired covers notwithstanding.

Friday, May 20, 2005

Illuminata Perspectives is Live

In addition to here on Connections, I will now also be posting about IT topics to Illuminata Perspectives. Posts that veer close to mainstream, datacenter IT will tend to migrate over there, but most of what I've been writing about--social connections, photography, etc.--will remain here. Collaboration will likely bridge between the two somewhat.

Thursday, May 19, 2005

Time To Revisit Micropayments?

"Micropayments" were part of the Boom's lexicon. But they never really went much of anywhere. Instead, everything was "free"--either in the frequently forlorn hope of charging later or in the often deluded hope that it would be paid for by advertising or some other deus ex machina.

We've ended up in this sort of binary state as a result: the free (perhaps with annoying registration) and the true subscription. That's a bit of an oversimplification of course. A site like Nerve has both free and premium content. A site like Salon offers various forms of free access in exchange for watching commercials. But, close enough for government work; in general, it holds. We've got sites that are largely free and sites that charge subscriptions--often in the neighborhood of $25 to $50 per year--that exceed what I pay for most of my magazines.

And $1 an article? Forget about it unless your audience is mostly well-financed paid researchers. That's not a micropayment. That's a midi-payment--just like buying a song is. Perhaps not a major purchase, but something that you'll think about, or as the ever-interesting Clay Shirky calls it, a transaction cost. (I'm not sure that I agree with Clay that free is the only way to go, but I certainly agree that payments large enough to make you think about them are a real inhibitor.)

Perhaps we need to look again seriously at micropayments. In the cents per transaction range--and, perhaps, implemented as subscriptions that span sites rather than per-transaction decisions. Hard? Sure. But the alternative may well be a dichotomy of free sites and sites that hardly anyone has access to or reads.


Score One for the Littler (But Not So Little) Guy

So Walmart is getting out of the online video rental business in favor of a partnership with Netflix. Blockbuster then promptly took the opportunity to start backing off its recent round of price cuts--which certainly undercut Netflix but were also doubtless aimed at keeping its rates close to Walmart's, which were lowest of all.

I don't shed any tears at Walmart exiting this business or at a certain softness in its overall fortunes. There's a lot not to like about the company as a whole (however much I appreciate the low prices when I shop there myself) and it's hard to see that their online video rental business would ever have been more than a very mass market, lowest common denominator, compete-on-price offering that made life difficult for other companies with more interesting and broad-based services. It's nice to see that Netflix is apparently able to stand up to challengers that could have potentially steamrolled it. (Which is not to say that Netflix is exactly raking in huge profits.) I'm a big fan of their service.

This does seem to be a case where the Internet boom mantra of "Spend to capture mindshare" has more or less played out. Netflix now has the brand (along with a competitive service) and it's apparently going to be hard to displace. I think there are a few reasons for that in this case:

  • Scale matters. Without multiple distribution centers, it takes too long to send and receive movies. For an East Coaster like myself, this was a frustration with the early Netflix which had only a single DC in Los Gatos, CA.
  • Scale also matters to selection. Perhaps there's an opportunity for a company that deals only in high volume, mainstream fare. But there's a lot of aggregate volume in the less popular titles too--the "Long Tail" popularized by Chris Anderson of Wired.
  • And, if you're going to have scale, and a broad-based selection, how much more differentiation is possible? Perhaps someone will figure out an alternative way of doing things. (Distribution by broadband will presumably be such an alternative someday; Movielink and its ilk are not meaningful competitors today.) Perhaps pay-per-movie alternatives to subscription. But the current pricing schemes seem popular and if any such alternative did click with consumers, it would be easy to quickly replicate.


All of which leads me to think that this business isn't favorable to having a lot of niche companies playing in it. (Porn being, of course, the exception given that it's big business, its boundaries are fairly well defined and mainstream companies--especially public ones--want nothing to do with it.)

Wednesday, May 18, 2005

The Expanding Linkosphere

It seems like every day we're seeing various information that we've stored away for our own benefit being opened up and linked to others in new ways. Now, I'm not referring here to clearly personal data that's being disclosed and compiled against our wishes--inadvertently or otherwise. That's a different topic. Rather, I'm referring here to how we're steadily opening up our wishlists, our bookmarks, and our movie rental lists for all the world to see.

A whole crop of tools has sprouted up to syndicate Amazon wishlists. "I want this, what do you want?" ("Or, please, buy me this!") What started out as more of a personal organization tool--the list of things that I might want to buy someday--is increasingly something to show off to others. del.icio.us and its brothers are even more explicitly communal. They provide a place on the web to store and organize your bookmarks--an increasingly useful concept in these days when people commonly use multiple computers and terminals from a panoply of locations. But, in exchange, your bookmarks are exposed, as are the way you categorize them using tags--essentially keywords, but explicitly communal.

Now Netflix is the latest to provide an option to make the individual organization part of a collective pool of preferences and predilections. You can now invite friends to share their movie queues with you and you with them. Know someone whose taste in film you like, or that at least intrigues you? Share a list with them. Cool idea.

At the same time, it's impossible not to think that we're rushing pell-mell into throwing a huge amount of at least moderately private information out there. Perhaps, as Sun CEO Scott McNealy once famously said--much to the consternation of privacy advocates everywhere--"You have no privacy, get over it."

To be sure, that line was a typically McNealy-ian one line zinger; elsewhere he's spoken on the subject in a more nuanced way. But his zinger has a clear kernel of truth as well. The plugged-in are, for the most part, far less private than they've ever been before.

Tuesday, May 10, 2005

The Sorry State of Collaboration

Stephen O'Grady writes here:
One of the reasons I think applications like Trumba are important is that I think calendar applications are an area with a lot of potential in the near term.

a.) There's been essentially zero innovation in them in recent years,
b.) We've all got complicated schedules and
c.) Most of us would like some mobile integration (cell phone at a minimum)
That seems about right. I'd go further though and say that there's been essentially no innovation in any of the core "collaboration" apps. I use the term "collaboration" advisedly because it strikes me as a rather overblown and pretentious use of the word given the sorry state of affairs in software that's supposed to help us work together--especially at remote locations.

What advances we have made have come neither from traditional productivity apps stretching themselves (unsuccessfully for the most part) into a more multi-person context nor from the bloated, monolithic software that passes for serious collaboration tools. The latter may be useful, or even necessary, in some environments for regulatory and other reasons--but it's hard to see them in the mainstream.

Indeed, the most genuine collaborative innovations have come from the outside. They're lightweight and modular. They're IM and blogs and all the other pieces of software that real people use to communicate and commiserate.

A Hilarious Spoof...

of high-tech marketing creative.

Wednesday, May 04, 2005

Life is Random...

or at least multi-dimensional.

Tablets are an idea that refuses to either truly live or truly die. It's an idea that certainly has its enthusiasts. Most recently, James Governor of RedMonk weighed in on how Tablets could essentially be a Microsoft killer app in Microsoft: Putting Coolaid in Tablet Form.
It seems like the Tablet PC is one of the few things Microsoft can do (put in people's hands) that just stops people dead and cuts through prejudice. Tablet PC is disarming. It's funny to watch begrudging envy; is there a word for the opposite of schadenfreude?
The big issue here is that the Tablet concept fills a very basic need. Us modern folks who have been using keyboards for a long time can type much faster than we can write. Indeed, my handwriting--in addition to bewildering even myself much of the time--cramps my hand and is incredibly slow by comparison to my (non-touch-typed) typing. But that typing is essentially linear--one-dimensional.

By contrast, when we take notes or sketch out ideas, we're all over the page--essentially two-dimensional.

See the problem? Great modern thought organizational techniques like mind maps are essentially 2-D while our fast input mechanism is 1-D.

That's why I consider tablet PCs a great unfulfilled promise. Someone's going to ultimately crack the keyboard-tablet hybrid code and then it's going to be "You mean, people didn't always build PCs that way?"

In closing, I have to disagree with my buddy James on OneNote. Maybe it's a good app for Tablets but I don't see it as an underappreciated app for normal PCs--for reasons that I covered here. Microsoft: even if you have to look ugly, lose the proprietary format or at least provide a convenient export option.

Wednesday, April 27, 2005

The Nikon RAW Format Rumpus

There's been considerable noise from many quarters over the Adobe vs. Nikon tiff about Nikon encrypting the white balance data in its model D70 RAW files. Adobe yelled that it wouldn't support Nikon's RAW format in Photoshop. Nikon came back with a not-entirely-satisfactory offer to make the SDK available to some developers.

My intent here isn't to recapitulate the debate--which now appears to be at least somewhat overblown anyway. (For the edification of non-photographers, RAW is a camera (or at least sensor)-specific format for the digital data captured by the sensor. Because it hasn't been further manipulated or compressed in a lossy way (a la JPEG), it's the highest quality way to store images on cameras that support it.)

I do find one aspect of this mess particularly troublesome, however. And it's not Nikon's lack of openness--which is a boneheaded PR move that I can't see benefiting them. (Nor am I convinced that Adobe's motives in taking this public were necessarily pure.) Rather, it's the fact that Adobe could credibly invoke the DMCA (Digital Millennium Copyright Act) as the reason it couldn't decrypt Nikon's format.

I'm not a lawyer, but I'm far from convinced that the DMCA--and specifically its decryption provisions--would apply here. After all, it's the metadata for the photographer's own data (picture) that's being decrypted. But, the fact that Adobe can claim concern about violating the DMCA--and people widely accepted that concern as valid--should be concerning.

We don't really know what a judge or a jury could decide are the limits of the DMCA. Certainly "DMCA" gets thrown around by the anti-IP crowd as a bogeyman on the order of RIAA--but that doesn't mean it's right either.

Monday, April 25, 2005

Considering (ID3) tags

I've had occasion to be thinking about certain types of metadata recently. Actually, that's a somewhat pretentious and jargon-y way of stating it. To be both more precise and more colloquial--always a good combination--I've been working on tagging my digital music collection. Some of my findings and experience are doubtless specific to digital music or to my personal requirements and priorities, but I think much of it probably applies more broadly.

For example, why tags in the first place? Well, because there's a bloody lot of data--songs and other sound clips in this case. As a result, while you may want to make hand-crafted playlists for some situations, for your day-to-day background music you might well want the computer to do some of the work. And that means tagging the songs with relevant characteristics that can be manipulated with some relatively simple rules to create playlists. The analogies aren't perfect with other sorts of media, but, in both cases, neither totally manual selection nor completely unaided computer search are completely effective.

I'll go into the specifics of what I've done and what I'm doing with my music collection in a future post. However, let's first consider some of the more general characteristics of such a scheme. I wish I could pretend that this was a top-down analysis. In reality, it's more like trial and error--and is still ongoing. Be that as it may:
  • Eschew unnecessary complexity. With multiple thousands of songs et al. in my database, each additional field could mean a lot of work entering data. If that field isn't going to be used effectively to create playlists, consider omitting it.
  • Automation is our friend. To the degree that a utility or your jukebox program can auto-fill a field, that's a big win. For example, J. River Media Center, which I use, can populate an "Intensity" field and a "Beat" field. Even though I've found that these computer-generated fields correspond only modestly to my personal perceptions of these attributes, they're essentially "free."
  • Build off the "standard" ID3 tagging infrastructure as much as possible. Unfortunately, once you get beyond the standard artist, album, etc. fields (that is, the truly standardized ID3 tags), programs start having a lot of trouble interchanging the information. Even the seemingly standardized "Rating" tag isn't. My J. River Media Center can interchange rating information with my iPod, but a lot of tag editors don't seem to see the rating tags that it generates. Thus, for example, if you want to create a "subgenre" tag, you may want to consider keeping the standard "genre" tag and using something like an existing "keywords" field to hold the subgenre data.
  • Use fields that you can fill consistently, meaningfully, and without too much mental effort for each choice. For example, I've been toying with a "Mood" or "Situation" field but have had trouble filling in entries in a consistent way that I could then meaningfully use to build a playlist.
  • For anything like genre, subgenre, mood, etc., draw out a taxonomy or set of choices that you intend to use. Modify as required but at least you have a starting point.
Anyway, that was my retroactively arrived at starting point. More specifics coming.
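The rule-based playlist idea above can be sketched in a few lines of code. This is just an illustration of the concept--the field names and values (genre, intensity, rating) are hypothetical examples rather than any particular jukebox program's schema, and a real setup would pull the tags from the files with an ID3 library rather than hard-coding them:

```python
# A minimal sketch of building a playlist by applying simple rules to
# tagged songs. The songs and their tag values are made-up examples.

songs = [
    {"title": "Song A", "genre": "Rock",      "intensity": 4, "rating": 5},
    {"title": "Song B", "genre": "Classical", "intensity": 2, "rating": 4},
    {"title": "Song C", "genre": "Rock",      "intensity": 9, "rating": 2},
    {"title": "Song D", "genre": "Jazz",      "intensity": 5, "rating": 5},
]

def build_playlist(songs, rules):
    """Return the titles of songs for which every rule predicate holds."""
    return [s["title"] for s in songs if all(rule(s) for rule in rules)]

# "Mellow favorites": lower intensity, highly rated.
mellow = build_playlist(songs, [
    lambda s: s["intensity"] <= 5,
    lambda s: s["rating"] >= 4,
])
print(mellow)  # ['Song A', 'Song B', 'Song D']
```

The point of the exercise is that each extra field only earns its keep if it can appear in a rule like the ones above--which is exactly the "eschew unnecessary complexity" test.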

Friday, April 22, 2005

Adobe-Macromedia Humor

This Translation From PR-Speak to English of Selected Portions of Adobe’s ‘FAQ’ Regarding Their Acquisition of Macromedia is both pricelessly funny and spot-on. I just want to know why, in this day and age, an "industry leading" program like Adobe's GoLive (which does admittedly do many things rather well) can get away with still being so crash prone after all these years.

Wednesday, April 20, 2005

Some Additional Good License Commentary

from Tim Bray as well.

I likewise have wondered all along about the dual-licensing option (i.e. license OpenSolaris under both the GPL and CDDL). It's a complicated issue though. How would the patent indemnification work under those circumstances? Would Solaris just be strip-mined for the benefit of Linux? (And should Sun ultimately care?) Would it be compatible with third-party closed-source linked to OpenSolaris?

In any case, it's clear that Sun and CDDL have prodded the GPL-is-the-only-true-Open-Source-license forces into action.

The Open Source license debate continues

Simon Phipps over at Sun has, what seems to me, a particularly nice overview of the three main Open Source license "families."
  • BSD-style licenses (of which I regard Apache v2 as the state-of-the-art) place no restriction on whether derived creations are returned to the commons. Thus the creative works of the community surrounding a commons created by a BSD-style license may be returned to that commons, may be applied to a different source-commons or may be incorporated into a closed-source work.
  • GPL-style licenses require that derived creations, both resulting from the original commons and created newly around the commons, be licensed under the same license. The commons is thus enriched as it is used, but innovations created outside the commons can very easily be found to be licenseable only under the GPL and thus need to be compulsorily added to the commons - the artisan will often find that there is no freedom of choice in this regard.
  • MPL-style licenses (of which I regard CDDL to be the state-of-the-art at present) require that creations derived from files in the commons be licensed under the same license as the original file, but allow newly-created files to be licensed under whatever license the creator of the file chooses. This is a win-win; the commons is continually enriched, and the artisan retains the freedom to license innovations in any way that's appropriate.
This seems to capture things pretty well. I've actually changed my own thinking a bit. I've tended to look on the MPL (and CDDL) as part of the BSD "family" of licenses contra the GPL. I think the main criterion I was considering was friendliness to intermingling with proprietary code--which you can do with both BSD (essentially without restriction) and the MPL (so long as you keep the source code files separate).

I think that view's still valid, but a view that explicitly recognizes that the MPL and CDDL do require giving back to the commons at some level is more complete in important ways. Thus, I like this three bucket taxonomy.

Wednesday, April 13, 2005

If We Buy It, Will Use Come?

I'm sure that we've all had the experience of buying something that we just had to have and then, after the excited opening of the UPS box and the equally excited showing off to your many (bored) friends, the long sunset of said device on the dusty shelf. I know I have.

I've come to the conclusion that personally applying the sort of "use cases" that companies like Intel try to apply to their designs can be helpful. Basically, this boils down to "what are you trying to do?" and "how would you use device X to accomplish this?" Some of this same sort of thinking is described in Alan Cooper's The Inmates Are Running the Asylum.

That's all very theoretical. So let's look at a specific, personal example. Last time I had a new cell phone to buy (the old one was slipping into a coma), I got to thinking about a smartphone. It would be a bit pricey as would the services, but it would be cool. Time to think about usage models.

OK. I travel a fair bit and like to stay in touch. A smartphone sounds like the ticket, but what would it incrementally do that was important to me? Hmm. Email of course! OK, but I already have broadband at just about any hotel I'm staying at and at most conference sites too. I could even have it at the airport and at Starbucks if I were willing to pay the fees (which would doubtless be less than the smartphone data services would be). I have a nice, compact laptop (a Fujitsu P5000) that I always travel with. A laptop which, by the way, is far more friendly to navigating web sites and dealing with attachments than any smartphone would be.

I could go on with the personal details but, although they're relevant to me, they may not be for you. Cost tradeoffs, for example. This would have been coming out of my pocket. The bottom line is the value of trying to work through how you would actually use a given device--and whether those uses are already met in other ways.

Of course, it can sometimes be hard to predict or grok radically new usage models in advance. Especially when it comes to communication, so much depends on what others in your circle do. If text messaging is the norm, you text message. If it isn't, you don't.

That said, usage cases are at least a framework. If you can't think of a practical (or fun!) reason why you'd use a new gadget or piece of software to do things differently--you probably (though not certainly) won't.

Monday, April 11, 2005

The Centralized Ideal

So many of our network-inspired ideals imply--directly or indirectly--a hankering for a neat-and-tidy centralized sense of authority and control (or at least a singular guarantor of great QoS).

Historical sources and antecedents made such centralized command-and-control explicit for philosophical reasons. Edward Bellamy's citywide pneumatic tube systems--contra the relative anarchy of the real-life bicycle messengers of a later era--were at least the product of an explicitly socialist utopian fantasy.

However, such an almost Jungian yearning for authoritative centralization also comes from less explicitly political sources. Whether Isaac Asimov's Encyclopedia Galactica from his Foundation Series or Jerry Pournelle's descriptions of brain implants that could ask any question of some (presumably) "know-it-all" computer (Oath of Fealty), the implicit assumption has always been that there's a neat and correct source of all information.

We (hopefully) realize today that the reality is messier. There's no Encyclopedia Galactica; there's Google. Google (together with its sister search engines) gives us the keys to the Web's treasures. Of course, it also gives us access to the "Net of a million lies." Nothing neat there.