Thursday, May 29, 2014

Podcast: Security, privacy, and home security with Gordon and Ellen


Red Hat's Gordon Haff and Ellen Newlands talk security and privacy from the MIT Sloan CIO Symposium, the implications of privacy for IoT, whether Google could get into the home security business, and the mess that is security standards in cloud and elsewhere.

Technology and Culture at the MIT Sloan CIO Symposium 2014
Google and Nest may move into home security by buying out Dropcam

Listen to MP3 (0:30:11)
Listen to OGG (0:30:11)

Links for 05-29-2014

Wednesday, May 28, 2014

Yes, automation needs to be autonomous


From John Markoff at The New York Times:

For the past four years, Google has been working on self-driving cars with a mechanism to return control of the steering wheel to the driver in case of emergency. But Google’s brightest minds now say they can’t make that handoff work anytime soon.

Their answer? Take the driver completely out of the driving.


I really want to give Google the benefit of the doubt here and assume that their engineers are smart enough not to have thought it was realistic for this sort of automated system to have a realtime manual backup. As I discussed a couple weeks back, "the handoff between manual (even if assisted) and autonomous needs to be clearly defined. Once you hand off control, you had better trust the autonomous system to do the right thing (within whatever margin of error you deem acceptable). You can’t wrest back control on the fly; it’s probably too late."

Tuesday, May 27, 2014

Links for 05-27-2014

Thursday, May 22, 2014

Technology and culture at the MIT Sloan CIO Symposium 2014

Sandy Pentland, MIT Media Lab

I learned a new buzzword at yesterday’s MIT Sloan CIO Symposium: "The Fog”—sort of Cloud + Internet of Things. Mercifully, that notwithstanding, the event was per usual an in-depth snapshot of not only up-and-coming technology trends (as one would expect at MIT) but also many of the related cultural and organizational issues. You can think of the event as being about the technological possibilities—but also about the constraints on those possibilities imposed by culture and other factors. 

The MIT Academic Panel is a good jumping off point. Moderated by Erik Brynjolfsson (co-author with Andrew McAfee of The Second Machine Age), it examined the idea that we are “now beginning to have technologies that augment the control system” (i.e. the human brain) in addition to the "physical power system" (i.e. human muscles). Brynjolfsson went on to state that “We are at the cusp on a 10 year period where we go from machines not really understanding us to being able to."

One example discussed by the panel was self-driving cars. John Leonard from MIT CSAIL and the Department of Mechanical Engineering said that he was “amazed by the progress of what’s happening out there,” likening autonomous driving systems to search for the physical world. At the same time—and here’s where the constraints come in—he also said that he had the “sense that we’re not quite there yet,” for example, in determining what might happen in a tricky driving situation. What’s “not quite there”? No real predictions. Leonard did say, however, that he only saw a 1 in 10 chance of a "really big [employment] transformation,” which I took to mean a 1 in 10 chance of what I like to call a robo-Uber (i.e. truly autonomous cars) in any near-term time horizon. Sloan professor Thomas Malone added that he would “be surprised to see general intelligence computers relative to people” in 30 to 40 years.

In other words, strong AI—as opposed to things like IBM Watson that just appear intelligent—remains elusive. And it’s also unclear what limits that constraint puts in place.

The MIT Media Lab’s Sandy Pentland—decked out in vintage wearables—offered some other potential limits when he noted that the “rate of innovation in technology is much greater than the rate of change in government is much greater than the rate of change in culture. The NSA was a pretty well-governed organization—for the technology of the 1960s.” But, now, he went on to say “Everything is becoming data-fied.” And, while there’s always been a lot of slop in laws and how they’re enforced, that becomes more difficult when there’s potential telemetry and data everywhere. Automatic traffic tickets anyone?

As for passwords? They’re “useless” says Patrick Gilmore of the Markley Group. “If you’re not already using 2-factor authentication, you’re behind.” Nor was he a fan of password managers. Mind you, this is a somewhat enterprise-centric view of security. Tim Bray has argued for federated identity in a broader context, which requires trusting someone, and people generally aren’t very trusting these days. But it’s probably better than the password status quo in a lot of situations.

Risk management and security—and their intersection with ever-increasing quantities of data—were also big topics throughout the day. Forrester Research’s Peter Burris, moderating a Leading the Digital Enterprise panel, opined that instead of assuming we can protect everything we have, we need to think about what we can do after an attack, in addition to continuing to try to stop attacks. Equinix’s Brian Lillie agreed, saying “You’re not going to stop everything; it’s a cornerstone of risk management.” And Raytheon’s Rebecca Rhoads spoke about the need for sophisticated compartmentalization of information, driven by regulations and other factors.
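As an aside, the second factor Gilmore advocates is conceptually simple: a time-based one-time password is just an HMAC over a shared secret and the current 30-second window. Here's a minimal sketch following RFC 6238 (illustrative only, not any product's implementation):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HOTP (RFC 4226): HMAC-SHA1 of an 8-byte counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30):
    """TOTP (RFC 6238): HOTP keyed on the current 30-second time window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step))
```

With the RFC test secret, `hotp(b"12345678901234567890", 1)` yields "287082", matching the RFC 4226 test vector. The server and the token (hardware fob or phone app) compute the same value independently; no password crosses the wire.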

Gilmore also suggested that people coming to his company—Markley’s a colocation provider—“mostly aren’t asking the right questions.” When dealing with cloud and other infrastructure providers, he argued that you should be looking in more depth than most people do. How long do you keep backups? How many versions? What type of physical security do you have? Do you degauss your hard drives when you retire them?

Mark Morrison of State Street also noted that you can’t outsource all of your security and have to think about how all of your security fits together—including all your point security products, your operational processes, and your external providers—and constantly evaluate. He also noted that there’s a “conundrum between privacy and information security—the level of monitoring and sophistication that lets you institute countermeasures."

Patrick Gilmore, Markley Group

Security and privacy aren’t the only things that play into data though. There’s also the pesky matter of physics. Lillie discussed hybrid cloud models in this context because “if you have enormous data sets, data gravity is happening. You need to find ways to connect clouds to private enterprises."

If I had to sum up my main takeaways from the day, they’d be something like the following. There’s the potential for many big changes related to computing power, to data, to computing ubiquity. We’re already starting to see some of the results. But some technological distances that seem small aren’t. (Think reliable speech recognition.) And, even more importantly, culture, laws, ethics, and economics all matter. Which is one reason that CIOs increasingly have to work closely with business owners to deliver on technology promises rather than focusing on the technology alone.

Wednesday, May 14, 2014

Links for 05-14-2014

Friday, May 09, 2014

Links for 05-08-2014

Wednesday, May 07, 2014

Links for 05-07-2014

Tuesday, May 06, 2014

Links for 05-06-2014

Smart crowds, irrational individuals?

This is from a presentation/discussion at Boston ProductCamp in May 2014. Here's the abstract: We've all made rational decisions and forecasts based on individually analyzing the best available data. But there are many other aspects of decision making. This session will examine some of them. When can groups of non-expert individuals beat some of the best experts? What are some of the common biases that cause ordinary people to make decisions differently from those they "should" make? Can you take advantage of the ways others make decisions, or is that unwarranted manipulation?

Monday, May 05, 2014

Automation and autonomy


I’ve been thinking and reading about autonomous systems of late—both autonomous IT systems and autonomous systems of other types such as vehicles. I also read a lot of misconceptions about automation—whether it’s in the arguments against or in misunderstanding what automation really means. I’ll be writing further on the topic but here are five points to get started. Comments welcome.

Computers are good at things that can be automated

Back in my earlier life at Data General, we were selling some of the earlier symmetric multiprocessor (SMP) servers to large enterprises, including Wall Street. SMP introduced a new wrinkle: where to place individual processes so that the system as a whole, with its multiple processors, ran most efficiently. One approach was to place them manually—which is precisely what a number of our big customers wanted to do; we even wrote and sold them software to help them do so. But you know what? The operating system scheduler could actually do this job pretty well in the aggregate, as all these customers eventually recognized.
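To make the scheduler's job concrete, here's a toy version of the placement problem: a greedy least-loaded balancer in Python. It's purely illustrative (the process names and loads are invented), and no real SMP scheduler is this simple, but it captures the aggregate behavior that beat manual placement:

```python
import heapq

def place_processes(loads, num_cpus):
    """Greedy least-loaded placement: assign each process (heaviest first)
    to whichever CPU currently has the smallest total load. This is the
    aggregate balancing act a scheduler performs continuously at runtime."""
    heap = [(0.0, cpu) for cpu in range(num_cpus)]  # (total_load, cpu_id)
    heapq.heapify(heap)
    assignment = {}
    for pid, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        total, cpu = heapq.heappop(heap)
        assignment[pid] = cpu
        heapq.heappush(heap, (total + load, cpu))
    return assignment
```

For example, four processes with loads 5, 4, 3, and 2 split evenly (7 and 7) across two CPUs. The point isn't the algorithm; it's that the machine can re-run this calculation thousands of times a second as loads change, which a human placing processes by hand cannot.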

There are legitimate questions about which tasks can be readily handled by computers and which can’t. With respect to self-driving cars specifically, computer AI interacts with the physical world much differently from a human. It’s fair to say that computers will be able to do many things much better than even a good driver can, while other situations will prove very difficult to solve. With datacenter computing though, it’s clear that many tasks eventually have to be automated and that exceptions should be relatively rare.

Assistance can precede automation

Yet, even when complete automation isn’t (yet) achievable, automation can still significantly reduce the number of activities people need to perform themselves. We’re already seeing this in automobiles with technologies like adaptive cruise control, which can adjust a car’s speed to maintain a safe distance from any vehicle ahead. Such systems are mostly in luxury cars today but I expect they’ll become both more widespread and more sophisticated. And judiciously applied assistive systems can be rolled out far more incrementally than anything taking over full control.
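For the curious, the core of such an assistive system is just a feedback loop. Here's a deliberately simplified proportional-derivative controller for following distance; the gains, limits, and starting conditions are made-up numbers for illustration, not anything from a real vehicle:

```python
def simulate_acc(follow_v=30.0, lead_v=25.0, gap=20.0,
                 target_gap=30.0, kp=0.2, kd=0.8,
                 dt=0.1, seconds=60.0):
    """Toy adaptive cruise control: accelerate in proportion to the gap
    error and the closing speed, clamped to plausible accel limits.
    Returns the final (gap, follower_speed) after the simulated run."""
    for _ in range(int(seconds / dt)):
        accel = kp * (gap - target_gap) + kd * (lead_v - follow_v)
        accel = max(-3.0, min(2.0, accel))  # braking/throttle limits (m/s^2)
        follow_v += accel * dt
        gap += (lead_v - follow_v) * dt
    return gap, follow_v
```

Run over a minute of simulated time, the follower settles at the target gap and matches the lead car's speed. The hard part of real adaptive cruise control isn't this loop; it's reliably sensing the gap in the first place.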

The same is true with cloud computing. One example that I like to use is around the idea of cloudbursting—typically used to mean the dynamic movement of workloads from private to public clouds in response to an increase in demand. As I’ve written previously, this strong form of cloudbursting—much less the idea of workload movement in response to changes in public cloud spot pricing—gets into a lot of complications. However, hybrid cloud management software and operating systems that can run in different environments make it possible to move applications around as needed (e.g. to switch cloud vendors) even if the process isn’t necessarily completely autonomous and hands-off. 
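Even the weak form of that policy reduces to something like the sketch below: fill private capacity first and overflow ("burst") the remainder to a public cloud. The workload names and the single capacity number are invented for illustration; real placement decisions also involve data gravity, compliance, and pricing:

```python
def schedule(workloads, private_capacity):
    """Naive cloudbursting policy: place workloads on the private cloud
    in order until capacity is exhausted, then send the rest to public.
    workloads is a list of (name, size) pairs."""
    private, public, used = [], [], 0
    for name, size in workloads:
        if used + size <= private_capacity:
            private.append(name)
            used += size
        else:
            public.append(name)
    return private, public
```

The complications I alluded to are everything this ignores: moving the data alongside the workload, differing APIs between clouds, and deciding when to move things back.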

Automation isn’t all or nothing

Even when hands-off automation works well and is appropriate for some tasks, it may not be used—or may be used under a more rigorous set of controls—elsewhere. With respect to self-driving cars, I can easily imagine an interim stage where they can drive autonomously on designated sections of limited access highways—and not elsewhere. For anyone who commutes on the highway or does long Interstate drives, this should be an obvious win even if it’s not the nirvana of a robo-Uber.

Similarly, while “automate more” should be IT’s mantra, most companies aren’t starting from scratch. It won’t always make as much sense to aggressively automate stable legacy systems as it will to automate through a new OpenStack infrastructure that’s running primarily new cloud-enabled workloads. Standardizing and automating are effective at cutting costs and reducing errors just about everywhere—but the bang for the buck will be bigger in some places than others.  

But autonomy requires a defined control handoff

The above said, the handoff between manual (even if assisted) and autonomous needs to be clearly defined. Once you hand off control, you had better trust the autonomous system to do the right thing (within whatever margin of error you deem acceptable). You can’t wrest back control on the fly; it’s probably too late.

In so many autonomous car discussions, I hear statements to the effect of: “If there’s an emergency, the driver can just take over.” Well, actually he can’t. He’s playing a game on his iPad and he probably needs a good 30 seconds to evaluate the situation and take any corrective action. OK for some situations, not for others. If the car’s in control, it has to deal with things itself—at least anything urgent.

With the complex distributed IT systems that increasingly characterize cloud environments, it’s certainly important to understand what’s going on. But events happen and cascade at incredibly short time scales by human standards. Check out this presentation by Adrian Cockcroft of Battery Ventures in which he talks about some of the challenges associated with monitoring large-scale architectures.

Autonomy can require new approaches/workflows

Finally, the best way to automate is likely not to just automate the old thing, certainly not if the old thing is a mess. To be sure, a clean-sheet approach may be constrained by the need to coexist with what’s already in place. The infrastructure that we’d build for 100% self-driving cars is much different than what we would build (and have built) for a 100% human one. However, even given a mixed environment, I suspect that over time we’ll add some infrastructure to help autonomous cars do things that they’d have trouble doing otherwise.

In the case of IT, we’re seeing new classes of tools oriented to large-scale cloud workloads and DevOps processes. One big difference between these tools and those of the past is that they’re mostly open source. Donnie Berkholz of RedMonk discusses some of them in OpenDevOps: Transparency and open source in the modern era. These include configuration management like Puppet and Chef as well as monitoring and analysis tools like Nagios and Splunk. DevOps itself, whatever your precise definition, is very much tied into the idea that much of the manual, routine ops work of the traditional system admin is increasingly automated. This is the only thing enabling a developer to take over so many ops tasks.

Automation done right is a huge positive. But we need to understand what it is, how to use it, and how to interact with it. 

[Photo credit: BMW. BMW Spartanburg SC assembly plant.]


Links for 05-05-2014

Thursday, May 01, 2014

Podcast: Autonomous vehicles, passwords, and IoT with Gordon and Ellen


In the first episode of a new Cloudy Chat feature, I sit down for a free-wheeling discussion with one of my Red Hat colleagues. Today, my co-host is Ellen Newlands, the product manager for identity management at Red Hat. We start with self-driving cars and other autonomous vehicles and move on to the Internet of Things, against a background of the security and privacy implications in all of this.

A few links to go with the podcast:

Google self-driving cars
FreeOTP
Federated Identity, Tim Bray
McKinsey article on the Internet of Things

Listen to MP3 (0:31:38)
Listen to OGG (0:31:38)