Thursday, May 28, 2009

After the Cloud: Epilogue


After the Cloud:
  1. Prelude
  2. So Far
  3. The New Big
  4. To Atomic Computation and Beyond
  5. Open Heaps
  6. Heaps of Cash
  7. Epilogue

Though wildly exciting to imagine a future of computing where ubiquitous devices are more deeply integrated into the infrastructure we use to power our applications, the real purpose of these posts has been to explore possibilities.

Let's have crazy thoughts. Let's build upon them, imagining ways in which they could become a reality. Let's not only munch on the regular diet of the technical "now"; let's plant seeds in many experiments for the future.

This post is about thinking of small, mobile devices and cloud computing. But it's also a rough template. Let's do the same thing for the desktop: how might it evolve? Where will users be spending their time? Let's do it for the OS and the kernel: what radical changes can we envision there? The technology behind health care. Education. Our new patterns of behaviour in a constantly changing world. All of these and more deserve our attention.

The more we discuss such topics in a public forum, the more thought will be given to them. Such increased awareness and attention might spark the light of innovation years ahead of time, and do so in the context of an open exchange of ideas. Let's have Moore's law for the improved quality of life with regard to technology; let's take it out of the chip and into our lives.



Friday, May 22, 2009

Canonical's Vision


Canonical's most recent AllHands meeting finished last night (this morning, really... I can't believe I got up at 7:30am), and I'm somewhat at a loss for words. In a good way.

But I'll try anyway :-)

As someone who was highly skeptical of the validity of Canonical's business model prior to working here, I can say that not only do I not doubt our ability to be a hugely successful company, but I am deeply committed to that success. Before AllHands, Canonical had earned my respect and loyalty through the consistent support and care of its employees. After AllHands, I have a much greater practical, hands-on understanding of Canonical's strategies and the various projects involved in creating a reality of success.

What's more, though, is the completeness of my belief in the people and the vision. This is thanks to the massive exposure we've had during AllHands to the collective vision; the team projects; all the individuals with amazing histories, skills, unbelievable talent and ability to deliver; and most of all, the dedication that each employee of Canonical has to truly making the world a better place for anyone who depends upon technology.

Ubuntu is free, and that's great. But Canonical needs to be a huge commercial success if its free OS distribution is going to have the power to transform the market and thus people's lives. This AllHands has given me a complete picture of how that will happen: we're all working on a different part of this puzzle, and we're all making it happen.

Success in the marketplace is crucial. Not because of greed or the lust for power, but because we live in a world where value is exchanged. As part of that ecosystem, we want to bring the greatest value to the people. This is not "selling out"; it's selling. This does not give away a user's freedom; it helps guarantee its continued safety in a competitive, capitalist society.

If we want anyone to embrace Ubuntu instead of a non-free OS -- without asking our users to sacrifice anything -- we're going to need to make very serious changes in design, usability, integration, and stability. To do this in a clean, unified manner really only comes with a significant investment of time, direction, and capital. Due to the seriousness of Canonical's altruistic vision, as we generate this capital, we're making the dream come true for the world.

At AllHands, I've seen designs that will seriously challenge Apple. I've seen a usability team's plans for true computing goodness. I've seen revenue models that have made my jaw drop. I've seen glimpses of the bright future.

And baby, it's exciting as hell.


Monday, May 18, 2009

After the Cloud: Heaps of Cash


After the Cloud:
  1. Prelude
  2. So Far
  3. The New Big
  4. To Atomic Computation and Beyond
  5. Open Heaps
  6. Heaps of Cash
  7. Epilogue

A One-Two Punch

Let me give you the punchline first, this time. Imagine a service that:
  1. Combines the cloud management features of EC2 for any system (in this case, mobile devices), the monitoring and update management of Canonical's Landscape, the buying/selling power of an e-commerce application, the auctioning capabilities of eBay, and the data gathered by marketing campaigns.
  2. Seamlessly integrates all this into a cloud (Open Heap!) provisioning/acquisition system or as part of your mobile provider's billing and information web pages.
So where does the cash come in?


Power to the People

As usual, revenue would depend upon market adoption. That would depend upon appeal (addressed with marketing), usefulness (addressed by software engineering and usability), and viability. That last one's particularly interesting, as it's where people and the cash intersect.

A product suite and service, all built around open heaps, could have a long and fruitful life if implemented with the end user in mind. Users would have the opportunity to become partners in an extraordinary way: they would be consuming a service, while at the same time, being given the opportunity to resell a portion of that service for use in the cloud-like architectures of open heaps.

The first company that does this really well would have a continuously growing following of users. This company would be helping consumers earn immediate cash back on their property. This is something I believe deeply in; it's a positive manifestation of the continuing evolution of the consumer's role in the market. I'm convinced that the more symbiotic a relationship between consumer and producer, the healthier an economy will be.


Providers

In an open heap scenario, there are two providers: the mobile phone provider and the heap provider. Phone companies get to make money from the deal passively: through a partnership that provides them with a certain percentage of the revenue or indirectly through heap-related network use.

The heap provider (e.g., someone like Amazon or RightScale) would stand to make the most money of everyone. Even though they wouldn't own the devices themselves (in contrast to current cloud providers), they would be able to assess fees on various transactions and for related services.

Imagine application developers "renting" potential CPU, memory, storage and bandwidth from a heap that included 100s of 1000s of mobile users. The heap provider would be the trusted third party between the device owner and the application developer. In this way, the provider acts like an escrow service and can assess fees accordingly.

Imagine a dynamic sub-market that arises out of this sort of provisioning: with millions of devices to choose from, a user is going to want to make theirs more appealing than 1000s of others. Enter auctions. Look at how much money eBay makes. Look at the fractional fees that they assess... fees which have earned them billions of dollars.
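
To make the fractional-fee idea concrete, here's a toy sketch (the 5% rate and one-cent floor are invented for illustration, not any real provider's pricing):

```python
def transaction_fee(amount_cents, rate_percent=5, minimum_cents=1):
    """Hypothetical escrow fee: a small fraction of each heap rental,
    with a floor so that even micro-transactions earn the provider something."""
    return max(minimum_cents, amount_cents * rate_percent // 100)

# A $4.00 rental earns the provider 20 cents;
# a 10-cent rental still earns the one-cent minimum.
fee = transaction_fee(400)
```

Tiny fractions, but multiplied across millions of devices and transactions, they add up the same way eBay's do.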

Throw in value-adds like monitoring and specialized management features, and you've got additional sources of revenue. There's a lot of potential in something like this...


Review
Obviously, all of this is little more than creative musing. The technology isn't quite there yet for a lot of what is required to make this a reality. Regardless, given the sheer numbers of small, networkable devices in our society, we need to explore how best to exploit the untapped resources in mobile computing: providing additional, cheaper environments for small applications, decreasing our dependency upon large data centers, and hopefully reducing the draw on power grids. We need decentralized, secure storage and processing. We need smarter, fairer, consumer-as-beneficiary economies.

Next, we develop a new segment of the market, where any user or company with one or more networked devices would be able to log in to an open heap provider's software and offer their machine as another member in that cloud. There's a lot of work involved in making that happen, much of it focused on the design and implementation of really good software.

If we can accomplish all that, we will have reinvented the cloud as something far greater and more flexible than it is today.


Thursday, May 14, 2009

After the Cloud: Open Heaps


After the Cloud:
  1. Prelude
  2. So Far
  3. The New Big
  4. To Atomic Computation and Beyond
  5. Open Heaps
  6. Heaps of Cash
  7. Epilogue

Refresher

Up to now, we've considered technical explorations and possible related future directions for the technology surrounding the support of distributed applications and infrastructure. This post takes a break and returns to thoughts of provisioning resources on small devices such as mobile phones. As stated in To Atomic Computation and Beyond:
This could be just the platform for running small processes in a distributed environment. And making it a reality could prove to be quite lucrative. A forthcoming blog post will explore more about the possibilities involved with phone clouds...
But first, I'm so tired of the term "cloud," so I did some free-association... from cloud to clouds to "tons of little clouds" to "close to the ground" to cumulus to heap ("cumulus" is Latin for "heap"). Heap! It's irresistible :-)

"Open" is such a terribly abused word these days (more so than cloud), but using it as an adjective for a wild collection of ad-hoc, virtualized process spaces satisfies some subtle sense of humor. Open Heaps it is.


Starting Points

Let's think about the medium in our example: cellular telephony. Is there a potential market here? Here are some raw numbers from Wikipedia:
By November 2007, the total number of mobile phone subscriptions in the world had reached 3.3 billion, or half of the human population (although some users have multiple subscriptions, or inactive subscriptions), which also makes the mobile phone the most widely spread technology and the most common electronic device in the world.
I think we can count that as a tentative "yes."

Can we do this the easy way and just use TCP/IP? In other words, what about using WiFi phones or dual-mode mobile phones as the communication medium for devices in our open heaps? Well, that would certainly make many things much easier, since everything would stay in the TCP/IP universe. However, the market penetration of standard mobile phones is so much greater in comparison.

That being said, how many currently operating phones are capable of serving content on the internet, running background processes, etc.? Probably only a small fraction -- perhaps few enough to justify supporting only devices such as handhelds, smartphones, MIDs, UMPCs, and netbooks.

Two possibilities for ventures here might be:
  1. A startup that developed an Open Heap offering for any Internet-connected device.
  2. A company that formed a partnership with one or more mobile carriers, acting as a bridge between the carrier-controlled network/device-management capabilities and the Internet.

The Business Problem

So, let's say we've got the technology ready to go that will allow users to upload a process hypervisor to their phones, and that this technology provides the ability for users to allot process resources (e.g., RAM, CPU, storage). There are still a couple of basic questions to answer to justify a business in this area:
  1. How will people be better off with this than without it?
  2. How will this technology generate revenue?
In general, I believe that consumers are always better off with more choices. I also believe that balanced systems run better than those that are rigged to benefit just one group. As such, I have an idealist's interest in things like Open Heaps, as they will empower interested consumers to earn revenue (however small) on their own property (mobile phones and other devices with marketable resources). What's more, if there are billions of devices available as nodes in Open Heaps, and there is a computing demand for those resources, then there will inevitably be competitors aiming to capitalize on them. Generally speaking, I also believe that increased competition provides a better chance for improved quality of service.
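
As a rough sketch of what a user's allotment might look like as data (every field name here is invented for illustration; a real open heap would need much more, starting with authentication and metering):

```python
from dataclasses import dataclass

@dataclass
class HeapOffer:
    """Hypothetical allotment a device owner publishes to an open heap."""
    device_id: str
    cpu_percent: int          # share of the phone's CPU the owner rents out
    ram_mb: int
    storage_mb: int
    price_cents_per_day: int  # what the owner asks in return

offer = HeapOffer("phone-8f2a", cpu_percent=20, ram_mb=64,
                  storage_mb=512, price_cents_per_day=3)
```

The point is that the owner, not the carrier, sets the limits and the price.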

Conversely, imagine that Open Heaps don't happen, that the idle resources of mobile devices (or any other eligible equipment) either remain untapped of their potential or, worse, are put to use by corporations that only desire the end consumer to have limited power over their own property and how it's used. Dire scenarios aren't difficult to imagine, thanks to various examples of anti-consumer behaviour we've seen from large corporations and special interest organizations in the recent past.

So, yes -- I think we can make a case for this being of benefit to consumers (and thus a marketer's dream!). The more prevalent mobile devices become, the more they will integrate into our daily lives... and the more important it will be that these devices are managed as the rightful property of consumers, people who have the right to rent, lease, and profit from their property as they see fit.

But, how could this generate revenue?

Next up: Gimme da cash!



Thursday, April 23, 2009

Generators and Coroutines


This came up in the blog comments yesterday, but it really deserves a post of its own. A few days back, I was googling for code and articles that have been written regarding the use of Python 2.5 generators to build coroutines. All I got were many dead ends. Nothing really seemed to have any substance, nor did the materials dive into the depths I was hoping to find. I was at a loss... until I came across a blog post by Jeremy Hylton.

This was heaven-sent. Until then, I'd never looked at David Beazley's instructional materials, but I was immediately struck by the beautiful directness, clarity, and simplicity of his style. He is lucid on these topics while conveying great enthusiasm for them. When it comes to Python 2.5 generator-based coroutines, few have managed to explain them a fraction as well as David has. I cannot recommend the content of these two presentations highly enough:
Even though I've never been to one of his classes, after reading his fascinating background, I'd love the chance to pick his brain... for about a year or two (class or no!).

I've done a little bit of prototyping using greenlets before, and will soon need to do much more than that with Python 2.5 generators. My constant companion in this work will be David's Curious Course. Also, don't give the other slides a pass, simply because you already understand generators. David's not just about conveying information: he's gifted at sharing and shifting perspective.
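
For readers who haven't seen the pattern, here's a minimal generator-based coroutine in the style those materials cover (this toy `grep` is my own, not lifted from David's slides):

```python
def grep(pattern, results):
    """A coroutine: callers push lines in with send(); matches accumulate."""
    while True:
        line = (yield)          # execution pauses here until send()
        if pattern in line:
            results.append(line)

matches = []
g = grep("python", matches)
g.send(None)                    # "prime" the coroutine to the first yield
g.send("no match here")
g.send("python is fun")
# matches is now ["python is fun"]
```

Note the inversion: instead of pulling values out of a generator, you push values in, which is exactly what makes these useful as processing stages.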



Wednesday, April 22, 2009

Functional Programming in Python


Over the past couple years or so I've toyed with functional programming, dabbling in Lisp, Scheme, Erlang, and most recently, Haskell. I've really enjoyed the little bit I've done and have broadened my experience and understanding in the process.

Curious as to what folks have done with Python and functional programming, I recently did a google search I should have run years ago and discovered some community classics. I'm posting them here, in the event that I might spare others the same oversight :-)
I've always enjoyed David's writing style, though I've never read his FP articles until now. They were quite enjoyable and have aged well, despite referencing older versions of Python. Andrew's HOWTO provides a wonderful, modern summary.

I make fairly regular use of itertools but have never used the operator module -- though I now look forward to some FP idiomatic Python playtime with it :-) I've never used functools, either.
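
As a small taste of those three modules working together (just a sketch of the idiom):

```python
from functools import reduce
from itertools import count, islice
from operator import mul

# Product of the first five odd numbers, with no explicit loop state:
odds = (n for n in count(1) if n % 2)
product = reduce(mul, islice(odds, 5))   # 1 * 3 * 5 * 7 * 9 = 945
```

`operator.mul` stands in for a lambda, `islice` takes a finite slice of an infinite generator, and `reduce` folds it all down.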

Enjoy!



Tuesday, April 21, 2009

After the Cloud: To Atomic Computation and Beyond


After the Cloud:
  1. Prelude
  2. So Far
  3. The New Big
  4. To Atomic Computation and Beyond
  5. Open Heaps
  6. Heaps of Cash
  7. Epilogue

To restate the problem: we've got cloud for systems and we've got cloud for a large number of applications. We don't have cloud for processes (e.g., custom, light-weight applications/long-running daemons).

Personally, I don't want a whole virtual machine to myself, I just need a tiny process space for my daemon. When my daemon starts getting slammed, I want new instances of it started in a cloud (and then killed when they're not needed).

What's more, over time, I want to be writing my daemon better and better... using less of everything (memory, CPU, disk) in subsequent iterations. I want this process cloud to be able to handle potentially significant changes in my software.
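
The scale-out behaviour I'm wishing for amounts to something like this toy policy (the per-instance capacity number is made up):

```python
def desired_instances(requests_per_sec, per_instance_capacity=50, minimum=1):
    """How many copies of a tiny daemon the process cloud should keep running."""
    needed = -(-requests_per_sec // per_instance_capacity)  # ceiling division
    return max(minimum, needed)

# An idle daemon keeps a single instance; a slammed one fans out,
# then shrinks back down when the load passes.
idle = desired_instances(0)        # 1
slammed = desired_instances(180)   # 4
```

The interesting part isn't the arithmetic, of course; it's having an infrastructure where "start another copy of this tiny process" is cheap enough to do on a whim.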

Dream Cloud

So, after all that stumbling around, thinking about servers in the data center as the horsepower behind distributed services, and then user PCs/laptops as a more power-friendly alternative, the obvious hit me: phones. They are almost ubiquitous. People leave them on, plugged in, and only use them for a fraction of that time. What if we were able to construct a cloud from cell phones? Hell, let's throw in Laptops and netbooks, too. And Xboxes, Wii, and TiVos. Theoretically, anything that could support (or be hacked to support) a virtual process space could become part of this cloud.

This could be just the platform for running small processes in a distributed environment. And making it a reality could prove to be quite lucrative. A forthcoming blog post will explore more about the possibilities involved with phone clouds... but for now, let's push things even further.

When I mentioned this idea to Chris Armstrong at the Ubuntu Developer Conference last December, he immediately asked me if I'd read Charles Stross' book Halting State. I had started it, but hadn't gotten to the part about the phones. A portion of Stross' future vision in that book dealt with the ability of users to legally run programs on others' phones. I really enjoyed the tale, but afterwards I was ready to explore other possibilities.

Horse-buggy Virtualization


So I sat down and pondered other possibilities over the course of several weeks. I kept trying to think like a business visionary given a new resource to exploit. But finally I stopped that and tried just imagining the possibilities based on examples from computing and business history.

What's the natural thing for businesses to do when someone invents something or improves something? Put new improvements to old uses, potentially reinventing old markets in the process. That's just the sort of thing that could happen with the cloudification of mobile devices.

For examples, imagine this:
  • Phone cloud becomes a reality.
  • Someone in a garage in Silicon Valley buys a bunch of cheap phones, gumstix, or other small ARM components, rips off the cases, and sells them in rack-mountable enclosures.
  • Data centers start supplementing their old hardware offering with this new one that lets them use phone cloud tech (originally built for remote, hand-held devices) to sell tiny fractions of resources to users (on new, consolidated hardware... like having hundreds of phone users in a single room with full bars, 24/7).
  • With the changing hardware and continuing improvements in virtualization software, more abstraction takes place.
  • Virtualization slowly goes from tool to prima materia, allowing designers not to focus on old-style, horse-drawn "machines" like your grandpa used to rack, but rather abstract process spaces that provide just what is needed, for example, to enable a daemon to run.
Once you've gotten that far, you're just inches from producing a meta operating system: process spaces (and other abstracted bits) can be built up to form a traditional user space. Or they can be used to build something entirely different and new. The computing universe suddenly gets a lot more flexible and dynamic.

Democritus Meets Modern Software

So, let's say that my dream comes true: I can now push all my tiny apps into a cloud service and turn off the big machines I've got colocated throughout the US. But once this is in place, how can we improve our applications to take even better advantage of such a system, one so capable of massively distributing our running code?

This leads us to an almost metaphysical software engineering question: how small can you divide an application until you reach the limits of functionality, where any further division would be senseless bytes and syntax errors? In terms of running processes, what is your code atom?

Prior to a few years ago, the most common answer would likely have been "my script" or "my application". Unless, of course, you asked a Scheme programmer. Programming languages like Scheme, Haskell, and Erlang are finding rapidly increasing acceptance as solutions for distributed programming problems because functional programming languages lend themselves easily to the problem of concurrency and parallelism.

If we had a massive computing cloud (atmosphere, more likely!) where we could run code in virtual process spaces, we could theoretically go even further than running a daemon: we could split our daemon up into async functions. These distributed functions could be available as continuously running microthreads/greenlets/whatever. They could accept an input and produce an output. Composing distributed functions could result in a program. Programs could change, failover, improve, etc., just by adding or removing distributed functions or by changing their order.
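
Locally, you can already play with this composition idea using plain generators as stand-ins for the distributed functions (in the imagined atmosphere, each stage would live in its own remote process space):

```python
def uppercase(stream):
    """One 'distributed function': consumes items, produces items."""
    for item in stream:
        yield item.upper()

def numbered(stream):
    """Another stage, composable with any other."""
    for i, item in enumerate(stream, 1):
        yield "%d: %s" % (i, item)

# "Reprogramming" the service is just recomposing the stages.
pipeline = numbered(uppercase(iter(["ping", "pong"])))
# list(pipeline) == ["1: PING", "2: PONG"]
```

Swap a stage out, add a new one, reorder them: the "program" changes without any stage knowing or caring where the others run.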

From Atoms to Dynamic Programs

Once we've broken down our programs into distributed functions and have broken our concept of an "Operating System" down into virtual process spaces, we can start building a whole new world of software:
  • Software becomes very dynamic, very distributed.
  • The particulars of hardware become irrelevant (it just needs to be present, somewhere).
  • We see an even more marked correlation between power consumption and code, where functions themselves could be measured in joules consumed per second.
  • Just for fun, let's throw in dynamic selection of functions or even genetic algorithms, and we have ourselves one of the core branches of the predicted Ultra-large Scale Systems :-)
I mention this not for cheap thrills, but rather because of the importance of having a vision. Even if we don't get to where we think we're going, by looking ahead and forward, we have the opportunity to influence our journey such that we increase the chances of getting to a place equal to or better than where we'd originally intended.

From a more practical perspective: today, I'm concerned about running daemons in the cloud. Tomorrow I could very well be concerned about finer granularity than that. Why not explore the potential results of such technology? Yes, it may prove infeasible now; but even still, it could render insights... and maybe more.

A Parting Message

Before I wind this blog post down, I'd like to paste a couple really excellent quotes. They are good not so much for their immediate content, but for the pregnant potentials they contain; for the directions they can point our musings... and engineerings. These are two similar thoughts about messaging from two radically different contexts. I leave you with these moments of Zen:

On the Erlang mail list, four years ago, Erlang creator Joe Armstrong posted this:
In Concurrency Oriented (CO) programming you concentrate on the concurrency and the messages between the processes. There is no sharing of data.

[A program] should be thought of as thousands of little black boxes all doing things in parallel - these black boxes can send and receive messages. Black boxes can detect errors in other black boxes - that's all.
...
Erlang uses a simple functional language inside the [black boxes] - this is not particularly interesting - *any* language that does the job would do - the important bit is the concurrency.
On the Squeak mail list in 1998, Alan Kay had this to say:
...Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea.

The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about... The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas.

If you focus on just messaging -- and realize that a good metasystem can late bind the various 2nd level architectures used in objects -- then much of the language-, UI-, and OS based discussions on this thread are really quite moot.
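
Armstrong's black boxes translate directly into code: workers that share no data and communicate only via messages. A minimal Python rendition, with threads and queues standing in for Erlang processes and mailboxes (my own sketch, nothing more):

```python
import queue
import threading

def doubler(inbox, outbox):
    """A 'black box': reads messages, sends results, shares no state."""
    while True:
        msg = inbox.get()
        if msg is None:            # conventional shutdown message
            break
        outbox.put(msg * 2)

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=doubler, args=(inbox, outbox))
worker.start()
inbox.put(21)
inbox.put(None)
worker.join()
# outbox.get() == 42
```

What's inside the box really is "not particularly interesting"; the shape of the messages between boxes is the whole design.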

Next up: The Business of Computing Atmospheres



Monday, April 20, 2009

After the Cloud: The New Big


After the Cloud:
  1. Prelude
  2. So Far
  3. The New Big
  4. To Atomic Computation and Beyond
  5. Open Heaps
  6. Heaps of Cash
  7. Epilogue

Intermission

I've made a few hints so far about what cloud service I'd like to see come into being, and at the end of this post, we'll get closer to discussing that. Hang in there: the post after this one will describe that in more detail. Then, after that, there will be at least one post which will take a peek at some of the many business opportunities that could come from this.

A Passing Comment

At PyCon 2006 in Dallas, TX, an after-hours event was held in a local bookstore. At one point during that evening, Itamar, Moshe and I got into a discussion about miniaturization and Moshe went off on a hilarious rant that Itamar and I just sat back and enjoyed. His whole tirade was based on the beauty and perfection of gumstix. This was the first I'd heard of them; I had no idea a product like that was on the market, and it hit me like a ton of bricks.

For the next day or so all I could think about was buying a boxload of gumstix computers and doing something with them -- anything! And not just because they were the coolest toys ever, but because there was something about them that I could just feel was a part of the future of computing (see my 2004 post on Dinosaurs and Mammals). It seemed that these miniature devices could help prototype what was destined to be one of the most exciting fields in the coming years for both systems and application engineers.

Sadly, I never did get that box :-) But neither did I stop thinking about them. Confronted with the problem of small distributed services sitting on big, barely-used iron, gumstix haunted my musings.

Tiny Apps in the Cloud?

While at Divmod, one of the strategies that Glyph and I were working on concerned Twisted adoption in web hosting and cloud environments. The differences between CGI and Twisted applications are magnified when one considers a cloud environment like Mosso and one that would suitably support Twisted design principles. I spent a lot of time pondering the ramifications of that one, let me tell you. A potential merger permanently postponed those business possibilities, but a nice side benefit was the forking of Python Director into a pure-Twisted conversion, txLoadBalancer (with the beginnings of native, in-app load-balancing support).

Thoughts of adjusting tiny apps to be able to run on big cloud hardware still grated, though. It felt dangerously close to pounding round pegs into square holes. What I really wanted was something closer to the future hinted at by Ultra Large-Scale Systems research: massively distributed, fault-tolerant services running on everything :-) Until then, though, I would have been satisfied with tiny apps on tiny hardware, consuming only the resources they need in order to provide the service they were designed for.

This brought up ideas of distributed storage, memory, and processing, as well as the need for redundancy and failover. But tiny. All I could see was tiny hardware, tiny apps, tiny protocols, tiny power consumption. For me, tiny was big. The easiest "tiny" problem to address with small devices was storage. And I already knew the guys who were working on the problem.

Distributed Storage Done Right

There's an odd, rather abstract parallel between EC2 and Tahoe (a secure, decentralized, fault-tolerant filesystem). EC2 arose in part from a corporation acting out of its best interests: turn a liability into an asset. For Tahoe, the "body" in question isn't a corporation, but rather a community. And the commodity is not bottom lines, but rather data owned and treasured by members of a data-consuming community.

Here's a quick description of Tahoe from a 2008 paper:
Tahoe is a storage grid designed to provide secure, long-term storage, such as for backup applications. It consists of userspace processes running on commodity PC hardware and communicating with [other Tahoe nodes] over TCP/IP.
Tahoe is written in Python using Twisted and a capabilities system inspired by those defined by E. But what does this mean to a user? It means that anyone can set up and run a storage grid on their personal computers. All data is encrypted and redundant, so you don't need to trust the members of the community (your data grid); you just need to set aside some disk space on your machines for them.
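
Tahoe itself gets that property from encryption plus k-of-n erasure coding (via zfec). As a much cruder illustration of the "no single node can read your data" idea, here's a toy all-or-nothing XOR split (unlike Tahoe's scheme, losing any one share here loses everything):

```python
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(data, n=3):
    """n-1 random shares plus one share that XORs back to the data."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def combine(shares):
    return reduce(xor_bytes, shares)

shares = split(b"family photos", 3)
# combine(shares) == b"family photos", yet each share alone is random noise
```

Hand each share to a different untrusted machine and none of them learns a thing about your data; that, in miniature, is the trust model.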

In a message to the Tahoe mail list, I responded to an associate who was exploring Tahoe for in-memory use by Python mapreduce applications. I wanted in-memory distributed storage for a different use case (tiny apps on tiny devices!) but our interests were similar. It turned out one of the primary Tahoe developers was working on related code; something that could be used as the basis for future support for distributed, solid-state devices.

Here's some nice dessert: Twisted coder David Reid was reported to have gotten Tahoe running on his iPhone. Now we're talking ;-) (Update: David has informed me that Allmydata has a Tahoe client that runs on his iPhone).

Processing in the Right Direction

But what about the CPU? Running daemons? Can we do something similar with processing power? If a whole virtual machine is too much for users, can we get a virtual processing space? I want to be able to run my process (e.g., a Twisted daemon) on someone else's machine, but in such a way that they feel perfectly safe running it. I want Tahoe for processes :-)

As part of some recent experiments in setting up a virtual lab of running gumstix ARM images, I needed to be able to connect multiple gumstix instances in a virtual network for testing purposes. In a search for such a solution, I discovered VDE. Then, unexpectedly, I ran across a couple of fascinating wiki pages on the site of the related super-project Virtual Square Networking. Their domain is currently not resolving for me, so I can't pull the exact text, but here's a blurb from a sister project on SourceForge:
View OS is a user configurable, modular process virtual machine, or system call hypervisor. For each process the user is able to define a "view of the world" in terms of file system, networking, devices, permissions, users, time and so on.
Man, that's so close, I can almost taste it!

Where is all this techno-rambling going? Well, I'm sure some of you have long since guessed by now :-) Regardless, I will save that for the next post.

Oh, and yes: tiny is the new big.

Next Up:
A Passing Message



Sunday, April 19, 2009

After the Cloud: So Far


After the Cloud:
  1. Prelude
  2. So Far
  3. The New Big
  4. To Atomic Computation and Beyond
  5. Open Heaps
  6. Heaps of Cash
  7. Epilogue

Systems Engineering in a Box

The recent redefinition of "the cloud" as a service and commodity is a brilliant bit of frugal resource management (making use of idle resources in an expensive data center) coupled with flawless marketing. Yes, from a business perspective, that's an amazing coup. But it's the 30,000 foot technical perspective that really impresses me:

In the same way that software frameworks, their libraries, and best practices have, through the trials of the last 40 years, productized application engineering, the cloud is now experiencing something similar. What everyone calls "the cloud" is really the productization of systems engineering.

Systems engineering (and the management of related resources) has proven to be an expensive, time-consuming endeavor best left to the experts. Sadly, those who need it are often in the unenviable position of having to identify the experts without the background to do so effectively. When the planning, building, and management of large systems goes well, it's a labor of sweat and blood. When it doesn't, it's the same labor with a nightmarish tinge and an odd time-dilation effect.

It seems that in applicable circumstances, some businesses are spared that nightmare by using a cloud service or product.

Bionic CGI

As someone with a long history of and interest in application development, I was particularly keen on Google App Engine when it came out. This was a different take on the cloud, one that Mosso also seems to be embracing: upload an application capable of having its data access and views distributed/load-balanced across multiple systems (virtual or otherwise).

This is essentially CGI's grandchild. You have an application that needs to be started up by any number of machines in response to demand. A CGI app in Mosso will probably need very few (if any) adjustments to run "in the cloud." Google is a special case, since developers are using custom, black-box infrastructure built by Google (for insights into this, check out these papers), but I'd be willing to bet someone lunch that there is room for a CGI analogy at some level of Google App Engine. I guess with Google, we kind of have both application and systems engineering in a box, in so far as the systems support your application.

At any rate, it's CGI better than it was before. Better, stronger, faster.
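To make that lineage concrete, here's a minimal sketch of the per-request model (the handler and its response are hypothetical, not any particular cloud's API): a fresh process reads its environment, emits headers and a body, and exits. That statelessness is exactly what makes fanning the app out across machines so natural.

```python
# Minimal CGI-style handler: one process per request, no shared state,
# so any number of machines can run it behind a load balancer.
import os

def handle(environ):
    # A hypothetical app: echo the requested path back as plain text.
    path = environ.get("PATH_INFO", "/")
    return ("200 OK",
            [("Content-Type", "text/plain")],
            "Hello from %s\n" % path)

if __name__ == "__main__":
    # In classic CGI, the web server sets the environment and reads stdout.
    status, headers, body = handle(os.environ)
    print("Status: %s" % status)
    for name, value in headers:
        print("%s: %s" % (name, value))
    print()  # blank line separates headers from the body
    print(body)
```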

The Rub

However fascinating these cloud offerings may be, I find myself not getting what I need. As a developer of Twisted applications, I'm interested in small apps. Hell, I don't even like running databases and full-blown web servers. A while ago, I spent a couple years working on some Twisted-based application components that could be run as independent services (thus load-balanceable) and completely replace the standard web server + database + lots of code routine for application development.

So what about developers out there like me, who want to run tiny apps? We don't need "classic" web hosting, nor CGI in the cloud, nor cloud-virtualized versions of large machines.
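For flavor, here's about how small "tiny" is — sketched with Python's stdlib http.server standing in for the kind of Twisted daemon I have in mind, so it runs anywhere: one small service, no database, no full web server stack.

```python
# A tiny self-contained service: the whole "app" is one handler class.
from http.server import BaseHTTPRequestHandler, HTTPServer

class TinyApp(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"tiny app, no database, no web stack\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def make_server(port=0):
    # port=0 asks the OS for any free port -- convenient for testing.
    return HTTPServer(("127.0.0.1", port), TinyApp)

# To actually serve: make_server(8080).serve_forever()
```

A real deployment would be a Twisted daemon, but the footprint is the point: something this small shouldn't need a full-blown virtual machine to host it.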

As a segment of the population, business consideration for developers such as myself might seem like a waste of time. But before dismissing us, consider this:
  1. Exploring small niches like this one often leads to interesting revelations.
  2. Market segments that have proven quite vibrant may be able to expand into even greater territories (e.g., the iPhone apps phenomenon).
Next up: Tiny > *



Saturday, April 18, 2009

After the Cloud: Prelude


After the Cloud:
  1. Prelude
  2. So Far
  3. The New Big
  4. To Atomic Computation and Beyond
  5. Open Heaps
  6. Heaps of Cash
  7. Epilogue

These days, it seems that no matter where we go, we hear something about "the cloud." It's not really buzz anymore... it has become far too accepted and widely discussed to be that. For some organizations it's actually part of their current, every-day infrastructure. For others, it soon will be. As far as I'm concerned, now's the perfect time to start discussing what's next :-)

If you've spent any time reading some of the blog content I've managed to post over the past several years, you've probably noted that I like to explore the long view (if rather informally). Well, that's what I've got in store for you now: a series of blog posts that explore the long view of a post-cloud industry. Hopefully, with some new twists and turns along the way.

First off, I want to cover some basic ground, so the first couple of posts might be a little less interesting than those that follow. Fortunately, I've been pondering these particular ideas since my month-long sabbatical last August -- this means I've already got most of the material written and ready to go!

These posts are going to take a peek at practical, hands-on ideas: the ways in which one might use current nascent tech to build prototypes for tomorrow's infrastructure, what that infrastructure might be, business ideas about what to do with that tech, and even future possibilities for information-based markets.

Hope you enjoy it as much as I've enjoyed thinking about it :-)