Wednesday, November 30, 2011

Posting HTML from Pinboard

I finally jumped off social bookmarking site Delicious during the whole kerfuffle over its problem-laden relaunch and moved to Pinboard on the recommendation of @sogrady and others. I had previously set up an account when Delicious's then parent, Yahoo, began to send signals that it was shutting the service down. I have now switched full-time.

The problem was that some JavaScript code I was using to easily generate HTML from my bookmarks and notes for pasting into Blogger stopped working. (See, for example, this post.) This was actually the reason I didn't switch to Pinboard full-time when I first set up the account.

I'm far from a JavaScript wizard, and I had received the code indirectly through a friend, so the issue wasn't immediately obvious. It should have kept working; the APIs were supposedly compatible except for some documented format changes. But it didn't.

However, with my full-time switch I had to do something, so I dove in and started hacking at the code. The problem appears to have been that, with the old Delicious (the new Delicious broke the code too), the JSON call that retrieves recent bookmarks instantiated an object with global scope. Copying the JSON object into a global variable of my own seemed to fix the problem. This doubtless should have been obvious.

In any case, if you use Pinboard and ever have occasion to create HTML from your bookmarks for posting somewhere else, you may find this code useful. You just need to substitute your username into the JSON URL where the comment tells you to. You can also customize it in other ways, as shown on the Pinboard HowTo page.
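Since the template itself isn't reproduced here, here's a minimal sketch of the idea. The callback name, the helper functions, and the bookmark field names (u for URL, d for title, n for notes, following the old Delicious feed format) are my assumptions, and the feed URL in the comment is illustrative--check Pinboard's own feed documentation for the exact format.

```javascript
// Global variable that the rest of the template reads from. The old
// Delicious feed populated a global object on its own; the fix described
// above is to copy the parsed JSON into a global we control.
var recentPosts = [];

// JSONP-style callback invoked with the array of bookmark objects.
function handlePinboardFeed(posts) {
  recentPosts = posts; // copy into global scope so later code can see it
}

// Turn the bookmark objects into an HTML fragment for pasting into Blogger.
function bookmarksToHtml(posts) {
  return posts
    .map(function (p) {
      var note = p.n ? " - " + p.n : "";
      return '<li><a href="' + p.u + '">' + p.d + "</a>" + note + "</li>";
    })
    .join("\n");
}

// Simulated feed response, standing in for a script tag pointing at
// something like https://feeds.pinboard.in/json/u:USERNAME?cb=handlePinboardFeed
// (substitute your own username in the real URL).
handlePinboardFeed([
  { u: "http://example.com/", d: "Example bookmark", n: "A note" }
]);

console.log("<ul>\n" + bookmarksToHtml(recentPosts) + "\n</ul>");
```

In a real template, the callback would be triggered by the feed request itself; the simulated call here just stands in for that so the HTML-generation step can be seen end to end.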

pinboardlinks_template.html

 

Links for 11-30-2011

Big Data: Asking the right questions

In this interview by Mac Slocum over at O'Reilly Radar, Alistair Croll, Strata online program chair, weighs in on the differences between Big Data and more traditional Business Intelligence.
Big data is a successor to traditional BI, and in that respect, there's bound to be some bloodshed. But both BI and big data are trying to do the same thing: answer questions. If big data gets businesses asking better questions, it's good for everyone.
Big data is different from BI in three main ways:
  1. It's about more data than BI, and this is certainly a traditional definition of big data.
  2. It's about faster data than BI, which means exploration and interactivity, and in some cases delivering results in less time than it takes to load a web page.
  3. It's about unstructured data, which we only decide how to use after we've collected it and need algorithms and interactivity in order to find the patterns it contains.
When traditional BI bumps up against the edges of big, fast, or unstructured, that's when big data takes over. So, it's likely that in a few years we'll ask a business question, and the tools themselves will decide if they can use traditional relational databases and data warehouses or if they should send the task to a different architecture based on its processing requirements.
What's obvious to anyone on either side of the BI/big data fence is that the importance of asking the right questions — and the business value of doing so — has gone way, way up.
Croll's last point--that asking the right questions is critical--bears highlighting. There are many reasons that traditional data warehousing and business intelligence have been, in the main, a disappointment. However, I'd argue that one big reason is that most companies never figured out what sort of answers would lead to actionable, valuable business results.
After all, while there is a kernel of truth to the oft-repeated data warehousing fable about diapers and beer sales, that data never led to any shelves being rearranged.

Tuesday, November 29, 2011

Podcast: Red Hat's Kim Palko talks messaging and AMQP

Messaging used to be the province of expensive, proprietary software but--as in many other areas of the software universe--that's changed with open source. And the AMQP protocol provides a particularly good example of the open source development model in which innovation comes from customers as well as vendors. I sat down with Kim Palko, Red Hat's Senior Product Manager for messaging, to talk about the latest happenings with AMQP.
Topics we cover in this podcast include:
  • How customers use messaging products
  • What AMQP is and why it's generating so much buzz
  • Red Hat's MRG-Messaging implementation of AMQP
  • How Red Hat's OpenShift PaaS uses MRG-M
  • The new JCA resource adapter
Listen to MP3 (13:49)
Listen to OGG (13:49)

Red Hat's Doug O'Flaherty discusses Supercomputing (SC11): Big data and more

As far as I'm concerned, the Supercomputing show may just be the most interesting computer industry trade show. I didn't make it this year--the content is a bit less central to my day-to-day interests than in the past--but I did have the opportunity to record an interview with fellow Red Hat marketeer Doug O'Flaherty who did make it out to Seattle.

Topics we cover in this podcast include:

  • The big announcements and themes
  • What Red Hat was up to at the show
  • The cool hardware
  • Big data
  • How the TOP500 list has evolved

Listen to MP3 (6:57)

Listen to OGG (6:57)

Links for 11-29-2011

Tuesday, November 22, 2011

Links for 11-22-2011

Monday, November 21, 2011

Links for 11-21-2011

Friday, November 18, 2011

The Best Couple of Paragraphs Written About Tablets

The context of these paragraphs by Marco Arment, the creator of Instapaper, is a negative review of the Kindle Fire. But they also capture why so many people either (1) think anyone who buys an iPad must be some sort of uncritical Apple fanboi or (2) think anyone who isn't wowed by the iPad must be some sort of uncritical Apple hater.

A tablet is a tough sell. It’s too big for your pocket, so you won’t always have it available like a phone. It’s too small to have rich and precise input methods like keyboards and mice, and its power and size constraints prevent it from using advanced PC-class hardware, so it’s probably not going to replace your laptop. It’s just one more gadget to charge, encase, carry (sometimes), care for, and update. And it’s one more expenditure that can easily be cut and done without, especially in an economic depression.

“Tablets” weren’t a category that anyone needed to give a damn about until the iPad. It was a massive hit not because it managed to remove any of the problems inherent to tablets, but because it was so delightful, fun, and pleasant to use that anyone who tried their friend’s iPad for a few minutes needed to have one of their own.

 

Tuesday, November 15, 2011

Links for 11-15-2011

Monday, November 14, 2011

Links for 11-14-2011

Friday, November 11, 2011

Links for 11-11-2011

From Building a Cloud to Operating It

First you build your cloud. A week back, I shared my thoughts on how best to do so based on my keynote from the Red Hat Cloud Tour. But that's just the first step. Once your cloud is in place, you need to operate it.

What makes this less than straightforward in a typical enterprise IT environment is that there's a balancing act in play.

 

Users want the simplicity they get from public cloud providers. They want self-service. They want to be in control. They don't want to think about underlying infrastructure. They want things to just work. In short, they have expectations set by the consumer Web and by the plethora of "magical" iDevices that they increasingly bring to their day jobs.

Historical enterprise IT sat largely in opposition to these user desires. Applications focused on business processes rather than user interaction. Minimizing risk and cost was equated with minimizing user choice. And a myriad of unavoidable regulatory, compliance, security, and audit needs meant that laissez-faire attitudes toward where applications ran and data was stored were a non-starter.

Balancing these two sets of desires and requirements requires four capabilities, all of which Red Hat provides:

  • Self-service with rich policy
  • Application lifecycle management designed for the cloud
  • Application portability across clouds
  • Proven stack and ecosystem delivering enterprise-class SLAs in the cloud

 

Self-service is a sine qua non of cloud computing. It's fundamental to eliminating friction between users requesting a service and the IT infrastructure providing that service. Business processes and workflows also need to support rapid service provisioning, of course; manual approval requirements that add days or weeks blunt any positive technology impact. However, even reasonably automated provisioning processes that require admin intervention can add significant latency and limit scalability.

The key in an enterprise context is pairing this self-service with a rich set of policies. Policies specify which standard operating environments (SOEs) a user or group of users has access to. They specify where those SOEs may physically run, perhaps based on whether they're being deployed for dev/test or whether they're being put into production. Thus, for example, policies could allow a service to be deployed to a public cloud while it's being developed--using test data--but require that production applications working with customer data be run on-premise.

 

Traditional enterprise management was "heavyweight." It focused on relatively static environments that had, at their core, large, proprietary legacy servers ministered to by a cadre of specialized sys admins. A cloud environment, on the other hand, is highly dynamic. Workloads are more typically scale-out. They are mobile, often running at different locales at different points in their lifecycle. Application lifecycle management for the cloud needs to take these differences into account. The System Engine component within Red Hat's CloudForms Infrastructure-as-a-Service cloud management software was designed with such cloud requirements in mind.

 

One of the ways that portability breaks down is that public clouds encourage ad hoc development that doesn't necessarily comply with an organization's standards for applications run on-premise. This may be fine for prototyping or other work that is throwaway by design. However, it's far too easy for prototypes to evolve into something more—as often happened in the case of early visual programming languages—and the result is applications that either have to be rewritten or that may have support, reliability, or scalability issues down the road.

One approach to addressing this problem is to provide consistent runtimes across public and private clouds. Red Hat does this through its Certified Cloud Provider program that provides access to certified Red Hat Enterprise Linux (RHEL) on public clouds. (Pay-as-you-go RHEL is initially available on Amazon. Cloud Access provides a way to transfer on-premise RHEL subscriptions to Premier Certified Cloud Providers.) By running the same runtime across physical servers, multiple virtualization platforms, and public clouds, application certifications and testing need happen only once.

 

Finally, just because the "cloud" word is getting thrown around doesn't change the needs of either the IT department or the users when it comes to quality-of-service (QoS), security, or reliability. Users may fixate on the simplicity of consuming external computing resources but they expect the high level of availability that they're (hopefully) accustomed to IT providing. And, as an organization starts building a cloud, the goal needs to be to meet or exceed traditional IT operational benchmarks.

This requires no-compromise infrastructure. Cloud management may abstract this infrastructure and it may span multiple underlying technology stacks. But the capabilities of that underlying infrastructure still matter--more than ever. Dynamism and multi-tenancy (whereby disparate users share physical resources) are fundamental to clouds and they amplify any underlying infrastructure weakness. Red Hat has a long history of providing platform software for the most demanding IT environments. The cloud is simply the latest such.

Cloud computing operations requires blending the new and the old. Our expectations as consumers come to the fore. The "Consumerization of IT" phrase is often taken as synonymous with bringing your own iDevice to work. But it equally applies to user expectations of IT as shaped by Google and Facebook. Yet cloud computing doesn't suddenly void all legal, security, customer data, and uptime requirements. Those organizations that hit the right balance will be the most successful ones.

Portable computing creates scalable private clouds that can be federated to a public cloud provider under a unified management framework. Portable applications mean that developers can write once and deploy anywhere, thereby preserving their strategic flexibility and keeping their options open, while lowering maintenance and support costs. Portable services simplify development and operations by eliminating the need to re-implement frequently needed functions in private clouds and enable the movement of data and application features across clouds. Portable programming models let existing applications be brought over to cloud environments or evolved incrementally.

Friday, November 04, 2011

Three Approaches to Building a Cloud

In the course of preparing for the Red Hat Cloud Tour, I (along with many others on the Red Hat cloud team) gave a lot of thought as to how best to articulate the value delivered by cloud computing, what's needed for a private or hybrid cloud environment, why you'd want to build a cloud in the first place, and which approach delivers the greatest value.

Let's talk about that last point with the aid of a few slides from my keynote presentation from the Cloud Tour.

The typical IT operation can be thought of as consisting of a set of silos. Some of these silos may be the result of a deliberate plan--perhaps to meet some regulatory need to keep internal businesses completely separate. However, more commonly, they come about through the accretion of new technologies, products, and organizations. All these silos create complexity. One goal of implementing a cloud should be to reduce this complexity.

How best to proceed?

The first approach essentially attempts to translate the "greenfield" methodology used by service providers into an enterprise environment. The thinking is that throwing out existing infrastructure and replacing it with a ground-up, homogeneous, standardized computing foundation is a dramatic simplification relative to the typical enterprise IT infrastructure as it exists today.

In fact, this approach does dramatically simplify. It's also naively simple. For the vast majority of organizations, IT assets tarred with the pejorative "legacy" are also critical and core to the business. More broadly, IT infrastructures advance in an evolutionary way rather than through wholesale replacement. Doing so keeps both risk and cost down. Cloud computing is no different. While infrastructure standardization, modernization, and simplification are frequently good practices, they can usually only be taken so far.

Suppose, instead, we tackle just part of the problem. There are a couple of different ways to go about this.

We could, for example, decide to add some self-service and automation to a specific virtualization platform. This is VMware's approach to cloud. vCloud Director essentially just extends the vSphere virtualization platform and therefore requires that the underlying platform, whether in a private or public environment, be running a VMware technology stack. Alternatively, we could roll in a dedicated cloud appliance for some single purpose, such as a database.

Whichever of these two paths we take, the result is the same. Our IT infrastructure now has another silo. Hardly a reduction in complexity!

This is not to say that we can't start our journey to a cloud on a subset of infrastructure. In most cases, a pilot project or proof-of-concept using a subset of applications will indeed be the prudent path. The difference is that a proof-of-concept is a first step; a new silo is a dead end.

The final approach, and the one that Red Hat advocates, is to enable bringing the broadest set of IT assets under a cloud management framework. Certain existing--often static--workloads may be kept separate for a variety of reasons. But such decisions should come about because they make the most sense from an IT operational perspective, not because of restrictions imposed by a technology stack.

Supporting these capabilities requires a cloud management product that can span multiple virtualization platforms, a variety of public cloud providers based on a variety of underlying technologies, and even physical servers. While most clouds will have a virtualized foundation of some sort, we have spoken with a number of customers who require blending physical and virtual environments for different types of workloads or use cases.

The Red Hat product that makes this approach possible is CloudForms, which provides Infrastructure-as-a-Service management for private and hybrid clouds. It works across virtualization platforms such as vSphere and the KVM-based Red Hat Enterprise Virtualization and a variety of public clouds starting with Amazon. Its key interoperability component is the Deltacloud API, an incubator project under the governance of the Apache Software Foundation.

I've covered just a small part of our Cloud Tour content. However, it's an important part because fundamental differences in approach to building clouds lead to fundamental differences in the business value that can be extracted from them.

Links for 11-04-2011

  • Apple's Supply-Chain Secret? Hoard Lasers - Businessweek - "“Operations expertise is as big an asset for Apple as product innovation or marketing,” says Mike Fawkes, the former supply-chain chief at Hewlett-Packard (HPQ) and now a venture capitalist with VantagePoint Capital Partners. “They’ve taken operational excellence to a level never seen before.”"
  • Can a Commercial Be Too Sexy For Its Own Good? Ask Axe - Martin Lindstrom - Business - The Atlantic - "However, the brand's early success soon began to backfire. The problem was, the ads had worked too well in persuading the Insecure Novices and Enthusiastic Novices to buy the product. Geeks and dorks everywhere were now buying Axe by the caseload, and it was hurting the brand's image. Eventually (in the United States, at least), to most high-school and college-age males, Axe had essentially become the brand for pathetic losers and, not surprisingly, sales took a huge hit. "
  • Tarmac delays traced to lack of buses to ferry fliers - USATODAY.com - Rt @flight_status10: Tarmac delays traced to lack of buses to ferry fliers << airports too dependent on jetbridges
  • Daring Fireball: The Type of Companies That Publish Future Concept Videos - "I’m not arguing that making concept videos directly leads to a lack of traction in the current market. I’m arguing that making concept videos is a sign of a company that has a lack of institutional focus on the present and near-present. Can you imagine a sports team in the midst of a present-day losing season that makes a video imagining a future championship 10 years out?" << Mostly agree.
  • How Groupon Was Founded - Nice, detailed rundown on groupon history.

How to Browse the Amazon Kindle Lending Library from a PC

One of my complaints about Amazon streaming for Prime Members is that they don't make it easy to search and browse only within the stuff you can get for free. This seems to be the case with their new Kindle Lending Library as well. Here's a recipe for browsing the list (currently at 5,379 results) from this Amazon discussion thread:

Follow these steps to browse books that are in the new Kindle lending Library on your PC. 
1) When on the front page of Amazon take a look at the search function near the top of the page. 
2) Don't put anything in the search box. Select "Books" in the department drop down box. Click on the "Go" button. 
3) You will now see pretty much all of Amazon's books. All 34+ Million of them. Select "Kindle Edition" in the Formats you see up at the top of the results list. 
4) Now you have all 1+ million Kindle books. On the left side of the screen are further filters. Near the bottom of the long list of filters is a check box for "Prime Eligible". Click on it. 
5) There ya go. All 5,377 Kindle books that are in the new Prime Lending Library. This is all of them as that is the same number of books that I got when I was looking at them on my Kindle.

Apparently this only works if you're already a Prime member. It also appears as if you then need to use your Kindle device (the lending library only works with Kindle hardware--not the apps for devices like the iPad) to actually download the title.