Saturday, March 24, 2012

A Conversation with Guido about Callbacks

In a previous post, I promised to share some of my PyCon conversations from this year -- this is the first in that series :-)

As I'm sure many folks noticed, during Guido van Rossum's keynote address at PyCon 2012, he mentioned that he likes the way that gevent presents asynchronous usage to developers taking advantage of that framework.

What's more, though, is that he said he's not a fan of anything that requires him to write a callback (at which point, I shed a tear). He continued with: "Whenever I see a callback, I know that I'm going to get it wrong. So I like other approaches."

As a great lover of the callback approach, I didn't quite know how to take this, even after pondering it for a while. But it really intrigued me that he didn't have confidence in being able to get it right. This is Guido we're talking about, so there was definitely more to this than met the eye.

As such, when I saw Guido in the hall at the sprints, I took that opportunity to ask him about this. He was quite generous with his time and experiences, and was very patient as I scribbled some notes. His perspective is a valuable one, and gave me lots of food for thought throughout the sprints and well into this week. I've spent that intervening time reflecting on callbacks, why I like them, how I use them, as well as the in-line style of eventlet and gevent [1].


The Conversation

I only asked a few initial questions, and Guido was off to the races. I wanted to listen more than write, so what I'm sharing is a condensed (and hopefully correct!) version of what he said.

The essence is this: Guido developed an aesthetic for reading a series of if statements that represented async operations, as this helped him see -- at a glance -- what the overall logical flow was for that block of code. When he used the callback style, logic was distributed across a series of callback functions -- not something that one can see at a glance.

However, more than the ability to perceive the intent of what was written with a glance is something even more pragmatic: the ability to avoid bugs, and when they arise, debug them clearly. A common place for bugs is in the edge cases, and for Guido those are harder to detect in callbacks than a series of if statements. His logic is pretty sound, and probably generally true for most programmers out there.

He then proceeded to give more details, using a memcache-like database as an example. With such a database, there are some basic operations possible:

  • check the cache for a value
  • get the value if present
  • add a value if not present
  • delete a value
At first glance, this is pretty straightforward in both styles, with in-line yielding code being more concise. However, what about the following conditions? What will the code look like in these circumstances?
  • an attempt to connect to the database failed, and we have to implement reconnecting logic
  • an attempt to get a lock, but a key is already locked
  • in the case of a failed lock, do retries/backoff, eventually raising an exception
  • storing to multiple database servers, but one or more might not contain updated data
  • this leaves the system in an inconsistent state and requires all sorts of checking, etc.
I couldn't remember all of Guido's excellent points, so I made some up in that last set of bullets, but the intent should be clear: each of those cases requires code branching (if statements or callbacks). In the case of callbacks, you end up with quite a jungle [2]... a veritable net of interlacing callbacks, and the logic can be hard to follow.
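To make the contrast concrete, here's a sketch of my own (not Guido's actual example, and deliberately framework-agnostic) of a cache lookup with retry logic in both styles. The async machinery is faked with plain function calls so that only the shape of the control flow differs:

```python
def get_with_retries_inline(cache, key, attempts=3):
    """In-line style: the whole flow is one visible series of for/if blocks."""
    for attempt in range(attempts):
        try:
            return cache[key]          # stands in for an async get
        except KeyError:
            if attempt == attempts - 1:
                raise                  # out of retries: propagate the failure
            # backoff would go here (e.g. sleep, or yield to the reactor)


def get_with_retries_callbacks(cache, key, on_success, on_failure, attempts=3):
    """Callback style: the same logic, distributed across functions."""
    def attempt(n):
        try:
            value = cache[key]         # stands in for an async get
        except KeyError:
            if n + 1 >= attempts:
                on_failure(KeyError(key))
            else:
                attempt(n + 1)         # re-schedule; backoff would go here
        else:
            on_success(value)
    attempt(0)
```

With only the success path, the two read similarly; it's the retry/backoff/failure branches that scatter the logic across `attempt`, `on_success`, and `on_failure` in the callback version, which is exactly the at-a-glance readability cost Guido was describing.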

One final point that Guido made was that batching/pooling is much simpler with the in-line style, a point I conceded readily.

A Tangent: Thinking Styles

As mentioned already, this caused me to evaluate closely my use of and preference for callbacks. Should I use them? Do I really like them that much? Okay, it looks like I really do -- but why?

Meditating on that question revealed some interesting insights, yet it might be difficult to convey -- please leave comments if I fail to describe this effectively!

There are many ways to describe how one thinks, stores information in memory, retrieves data and thoughts from memory, and applies these to the solutions of problems. I'm a visual thinker with a keen spatial sense, so my metaphors tend to follow those lines, and when reflecting on this in the context of using and creating callbacks, I saw why I liked them:

The code that I read is just a placeholder for me. It happens to be the same thing that the Python interpreter reads, but that's a happy accident [3]; it references the real code... the constructs that live in my brain. The chains of callbacks that conditionally execute portions of the total-possible-callbacks net are like the interconnected deer paths through a forest, like the reticulating sherpa trails tracing a high mountain side, like the twisty mazes of an underground adventure (though not all alike...). 

As I read the code, my eyes scan the green curves and lines on a black background and these trigger a highly associative memory, which then assembles a landscape before me, and it's there where I walk through the possibilities, explore new pathways, plan new architectures, and attempt to debug unexpected culs-de-sac. 

Even stranger is this: when I attempt to write "clean" in-line async code, I get stuck. My mental processes don't fire correctly. My creative juices don't flow. The "inner eye" that looks into problem spaces can't focus, or can't get binocular vision. 

The first thing I do in such a situation? Figure out how I can turn silly in-line control structures into callback functions :-) (see footnote [1]).

Now What?

Is Guido's astute assessment the death of callbacks? Well, of course not. Does it indicate the future of the predominant style for writing async Python code? Most likely, yes.

However, there are lots of frameworks that use callbacks and there are lots of people that still prefer that approach (including myself!). What's more, I'd bet that the callbacks vs. in-line async style comes down to a matter of 1) what one is used to, and possibly, 2) the manner in which one thinks about code and uses that code to solve problems in a concurrent, event-driven world.

But what, as Guido asked, am I going to do with this information?

Share it! And then chat with fellow members of the Twisted community. How can we better educate newcomers to Twisted? What best practices can we establish for creating APIs that use callbacks? What patterns result in the most readable code? What patterns are easiest to debug? What is the best way to debug code comprised of layers of callbacks?

What's more, we're pushing the frontiers of Twisted code right now, exploring reactors implemented on software transactional memory, digging through both early and recent research on concurrency and actor models, exploring coroutines, etc. (but don't use inlineCallbacks! Sorry, radix...). In other words, there's so much more to Twisted than what's been created; there's much more that lies ahead of us.

Regardless, Guido's perspective has highlighted the following needs within the Twisted community around the callback approach to writing asynchronous code: 
  • education
  • establishing clear best practices
  • recording and publicizing definitive design patterns
  • continued research
These provide exciting opportunities for big-picture thinkers for both those new to Twisted, as well as the more jaded old-timers. Twisted has always pushed the edge of the envelope (in more ways than one...), and I see no signs of that stopping anytime soon :-)


Footnotes

[1] In a rather comical twist of fate, I actually have a drafted blog post on how to write gevent code using its support for callbacks :-) The intent of that post will be to give folks who have been soaked in the callback style of Twisted a way of accepting gevent into their lives, in the event that they have such a need (we've started experimenting with gevent at DreamHost, so that need has arisen for me).

[2] There's actually a pretty well-done example of this in txzookeeper by Kapil Thangavelu. Kapil defined a series of callbacks within the scope of a method, organizing his code locally and cleanly. As much as I like this code, it is probably a better argument for Guido's point ;-)

[3] Oh, happy accident, let me count the hours, days, and weeks thy radiant presence has saved me ...


Saturday, March 17, 2012

Python for iOS

I do a lot of traveling, and I don't always like to lug my laptop around with me. Even when I do, I'd rather leave it in the bag unless I absolutely need to get it out (or if I'm setting up my mobile workspace). As such, I tend to use my iPhone for just about everything: reading, emails, calendar, etc.

So, imagine my delight, when I found out (just after PyCon this year) that I can now run Python 2.7.2 on my iPhone (and, when I get it, my iPad 3 ;-) ). This is just too cool for words... and given what pictures are worth, I'll use those instead :-)

I've put together a small Flickr set that highlights some of the functionality offered in this app, and each image in the set describes a nifty feature. For the image-challenged, here's a quick list:

  • an interactive Python prompt for entering code directly using the iPhone keyboard
  • a secondary, linear "keyboard" that one can use in conjunction with the main keyboard, extending one's ability to type faster
  • multiple options for working with/preserving one's code (email, saving to a file, viewing command history)
I can't even begin to count the number of times such an awesome Python scratchpad would have come in handy. And now we have it :-) At $2.99, this is a total steal.

Thanks Jonathan Hosmer!

(And thanks to David Mertz for pointing it out to folks on a Python mail list.)



Friday, March 16, 2012

PyCon 2012: To Be Continued

PyCon was just fabulous this year.

It's been a couple years since I was able to go, and I was quite surprised by how much I had been missing it. The Python community is not only one of the most technically astute and interesting ones to which I belong, but also the kindest. That last point is so incredibly important, and it ends up fostering a very strong familial sense amongst its members.

There were so many good conversations with such great people: Anna, Alex, Guido, David (Mertz), Donovan, JP, Maciej, Allen, Glyph, Paul, Sean... the list goes on and on! Fortunately, I took notes and (and even have some book recommendations to share!) so there are many blog posts to come :-)

But this has brought something into focus quite strongly for me: the interaction at PyCon is one of the most fertile grounds for me all year -- and going without it since Chicago has been a genuine drought! There were some folks at DreamHost that couldn't make it, and we've already started looking around at various local, mini Python conferences that we can attend. This was initially so that those who couldn't make PyCon could receive similar benefits. But now something equally important is driving this search: attending local conferences will mean less time has to pass between those fertile interactions and the recharging that we give each other at such events.

Until next time, I hope all Pythonistas everywhere are getting ready for a great weekend :-) Those who have been traveling, I hope you get lots of rest and share with everyone the treasures gathered at this year's PyCon :-)


Monday, March 12, 2012

OpenStack at PyCon 2012 Sprints!

This is just a short post to give a shout out to some folks who are sprinting for OpenStack this year at PyCon. It's a small group, since the Folsom Design Summit and Conference is coming up in a few weeks.

One big surprise came last night when I got an email about Cisco's recent work with Layer 3 (blueprint) support in Quantum, and there were two Cisco folks here this morning to chat about that. Mark McClain (DreamHost) is digging deep into their work right now.

Yahoo! is remote-sprinting today, and they hope to be in the house tomorrow, to continue working on current improvements in DevstackPy. Mike Pittaro (La Honda Research), Jonathan LaCour and Doug Hellmann (DreamHost) are working with Yahoo! on that.

Mike Perez (DreamHost) is hacking on some additional improvements in Horizon for different storage backend representations. We've also chatted a bit about the latest efforts in Horizon for Quantum support (Michael Fork's work). Perez is also helping out tracking some bugs down in DevstackPy.

Special thanks to Mike Pittaro for improving the sprinting pages on the OpenStack wiki with links to previous work and discussions!

If you're keen on OpenStack and would like to dive in with some fellow hackers into the deep ends of Nova, Quantum, or Horizon, be sure to come by or pop in at #openstack-pycon on Freenode :-)


Thursday, March 01, 2012

Successful Hack-In, 01 Mar 2012!

DreamHost has a new core set of cloud developers now based in Atlanta, and a new Meetup group to go with that :-) Today there was a global OpenStack Hack-In, and I just posted a summary to our Meetup discussion page, but since I desperately need to do some blogging, I'm republishing here :-) (with some minor tweaks...)

We had fun in person, there was good chatter on IRC with the Colorado and San Francisco teams, and we had a great time digging into OpenStack some more.

Technical highlights include:
  • testing out development deployments of OpenStack using Vagrant (some successes, some blockers)
  • testing out dev deployments of OpenStack using VirtualBox directly
  • filed some bugs for issues in horizon regarding error feedback to users and how the documentation is generated
  • dug into issues with logging and inconsistencies in datestamps
  • uncovered some weirdness with the usage of gnu screen and hanging services/partial devstack installs due to sudo assumptions (devstack assumes a passwordless sudo, and will label an install as failed if it gets hung up on the apache log tail, waiting for a password, even if the install was successful and all the services started correctly)
  • Doug Hellmann made his first commit upstream to OpenStack
On the non-technical, fun side:
  • Thanks to Zenoss for the fun swag today :-) (the Zebras are still staring at me... I think they're going to be making an appearance in Toy Story 5)
  • Even more thanks to Zenoss for the offer to become an OpenStack Atlanta Sponsor (food, drinks, and swag)!
  • Thanks to DreamHost for the AMAZING coffee and danishes from The Village Corner Restaurant/Basket Bakery. Seriously. That was the best coffee I have ever had. In. My. Life.
  • Also, thanks DreamHost for the pizza and the sweet potato pies!


We took a couple snapshots of the event, and I'll be posting those soon on the Meetup page, but for the super-impatient, they're up on Flickr right now :-)

http://www.flickr.com...


Monday, November 21, 2011

Occupy's Declaration of Independence

Illustration by Peter Whitley
There's one thing I would really, really love to see under the tree this year -- under everyone's tree: Occupy the World.

For the first time in history, it seems that there might be enough momentum, enough communication, enough strength of individual convictions, and enough mass support to be able to have a world-wide, non-violent, revolution.

Taking the US as an example in this beautiful hope: imagine 2 or 3 million people showing up on the lawns of the US law-making machinery in Washington, D.C., issuing their declaration of independence and simply stating a fact: "Things are now going to change, we will not leave until we have the government that we want."

Far from mob rule, the 99 is intelligent, lucid, and a collection of THE overwhelming majority... regardless of old political parties. An enormous amount of discussion, insightful inspection, and exploration of alternatives has been researched, written about, and promoted over the past 5 to 10 years, culminating in what we see around us today as the Occupy movement. I have the utmost faith in these thinkers (by which I mean everyone from Gar Alperovitz to my second cousins working in Detroit, MI automotive plants) and their (our!) ability to produce a new constitution that provides for the 99 fairly. Such a new system would stand in stark contrast to today's: one that caters to policies driven by enormous, "legal" bribes or banking systems that continuously steal from their customers and shareholders, running off with the loot.

The 99 is saying it, and has been saying it for a while: "It's time for a change."

They've taken things further by showing an undeniable presence at the scenes of crimes (financial and governmental institutions). I say let's take the last logical step, and fix the problem: let's put a new system of government in place. Let's have radical, peaceful change. Let's do it with poise, grace, and while keeping the benefit of the entire planet foremost in our minds. Let's do it world-wide, and not stop until the job is done. Let's have a revolution.

Let's have the revolution.


Thursday, June 23, 2011

Physical Beings with Digital Lives

2001A Space Odyssey There's a lot that one could say about that title. In fact, it could be the title of a high-volume collaborative blog... That aside, here's the context for this post: books. Books and Reality. And data.
This post got so long that I now need to add a list of sections here, just to make it more accessible. My apologies :-/

Mini Table of Contents
  • Books
  • Books in the Sky
  • Yeah, I Know: Go Social
  • Human Data History
  • Reality Merges
  • Conclusion


Books

I have tons of books. Actual, physical books. Walls of them. Some I use all the time (reference). Some I read once a year (good books that support multiple reads). Others I've only read once, perhaps as far back as high school (when I started collecting). My bookshelves are like a random associative memory array: reading each title or the act of pulling one from the shelf brings back a flood of memories, relived experiences, sometimes actual sense perceptions. It's a powerfully visceral activity.

But that's just my books. When I'm at friends' homes or offices, I cannot keep my eyes off their bookshelves. It's an irresistible compulsion. I linger and browse, often past any semblance of socially acceptable time limits. My conversational replies experience a rapid exponential die-off -- in duration, gaps, and semantic value -- culminating in grunts and finally silence. (My favorite offices to visit so far? friend mathematicians/maths professors!)


Books in the Sky

Oddly, I love books in digital format. I never really got hung up on the bit about not having the paper entity in my hands (though I have turned a digital book reader over, expecting the next page... though that was deep in the plot of a Greg Egan novel!). Traveling as much as I do, I'm in heaven with ebooks. I feel like Superman, carrying around a library with me everywhere I go.

But when I passed a bookshelf the other day on my way out the door and fell under the spell of a book-memory flashback, I realized what was going away as I transitioned to virtual books. And the painful question arose: How am I going to nurture future layers of book-mulch and text-humus with this new æthereal, cloud-bound library I'm building?

How can I share my stacks of books with others? How will I browse friends' books in their offices, asking about author X and title Y? How will we borrow from each other? What can be done to add this and related missing richness back into our lives once we adopt the virtual versions?

Light-emitting walls that can display titles from your Amazon account? Virtual overlays visible with wearable/immersive computing accessories? Whatever we end up with, a gimmick isn't going to cut it. It will need to reflect the same depth of history that stacks of physical books have come to represent to us and the collective human psyche since we first started gathering works of the written word.


Yeah, I Know: Go Social

I'm hung up on books here, I admit it. But the same goes equally well for much of what we experience in online social media as well. Everyone's trying to make a buck on people chatting, playing games, reading, etc. Business as usual.

But the problem is that everyone is coming up with their own little solution, one piece at a time. Google, Facebook, LinkedIn, Last.fm, etc. "We socialize X." Wow. Good for you. Now, for every activity or group I'm interested in, I've got some tiny little corner of the internet that I need to pay attention to.
Right.

Maybe I'm just an online social idiot, but this isn't working for me. Too many places and pieces. The physical analogy would be me spending all day on the road, hitting all the social hot spots in the Colorado Front Range. Ain't gonna happen. Ever.


Human Data History

The social data scene is a big stick up my butt. I really don't like it there and I'd love to get rid of it. Its poorly engineered, primitive state makes me grumpy. I don't own a thousand hammers [1]; I don't want a thousand of anything that all do basically the same thing [2]. Most of us probably don't own hundreds of houses, either (for ourselves, that is). We keep most of our stuff in the same location or two.

Speaking of houses, let's talk about settlement. How did we choose where to set up camp, towns, etc.? Trade routes, availability of resources (direct physical presence or presence by virtue of trade routes). Are we doing that now on the internet? Are we looking at the analog to fertile valleys, productive rivers, and protected harbors? Whose priorities do we have in mind? As we set up virtual presences, are we in locations that benefit businesses? Or ourselves?

If we choose the latter, the businesses will come because the people are there. If we do it the other way around, we'll be looking for new virtual homes if the businesses close shop or change the rules too much.

Coming back to data (but on the same anthropological note), historically we've had distinct divisions of our data:
  • the secure location of our huts/houses/castles
  • what we presented about ourselves in adornment/fashion (public data)
  • what we could carry with us in bags/crates/vehicles
Because of their prevalence in our history, any attempt at realizing personal data in a virtual environment would do well to reflect on these. We're naturally already predisposed to such approaches; such divisions are things that anyone can intuitively grasp.

The problem is that we're all used to a single platform: our mutually agreed-upon reality. There's no such thing online yet. And if there were, who would own/run it? Monopolies are eventually overthrown. We hate them. So how do we get around this?


Reality Merges

This very naturally led to thoughts on digital lives in general. And this is more than just a question of usability or human-computer interaction. Rather, this is a question that borders on the metaphysical: how do we solve the problem of syncing divergent realities? Reality-reality interaction.

The problem shouldn't be minimized by analogy: this isn't a "simple" matter of ensuring that the data in my address book on Google is the same as what's on my iPhone. My self-perception, many reminders of self-reflection, etc., take place as a result of various interactions I have with my surroundings: both real[3] and virtual. No problem. Except that the things that remind me are also things that others can see and interact with as well. Often, they will have associations that spark a neural cascade for them too.

I've had many conversations take place around objects in a shared environment where the name of the object was never mentioned; it was implicitly understood. When there's no shared object (or concept), we have to name the object, define it, share some basic associations, make sure that we're talking about the same thing, etc. That's all prelude. Only with that done can genuine communication take place around the given concept.

Now rinse and repeat for everything you want to talk about that revolves around or is at least related to something that exists virtually for you and isn't part of your shared, physical environment.
I can't imagine many useful general solutions to this. In fact, I can only imagine one (given our biological wiring): use what we know (in our bones) and overlay or augment our visual reality with another.

With augmented shared realities, there's no platform. You just need hardware that runs it and senses that can perceive it. Just like reality. At that point, we can start sharing what we want, allowing access to data about ourselves and what we like by dumping it into a shared perceptual space, regardless of the original data source. Merged.

Obviously, we're not there yet. We're going to need crazy improvements in mobile technology, storage, computer vision, etc. But once the technology catches up, I think we'll see some powerful needs being filled. And we might start coming out of the internet dark ages...


Conclusion

None of this is new; pick up any number of books by Charlie Stross[4], and there's all sorts of fun to be had by exploring his ideas. But the point of this post wasn't to be new. While we're all busy enjoying the latest fad in social media, I think it's important we think about where the progressive succession of fads is taking us. At each point, there's a natural next step (more accurately, a set of possible next steps). Let's look more than just one step in front of us, and let's not forget what our biology has made us. We may not be able to engineer truly wise decisions about our future, or even make our lives better/more efficient. But it would be nice if we could at least not make things worse :-)


Footnotes

[1] I think I have three, actually.
[2] This is one of the reasons I'm a big fan of http://ping.fm.
[3] Here I mean "real" in the "conventional" (shared) reality sense of the Madhyamikas. Ultimate reality... well, that's a topic for an entirely different sort of post...
[4] Check out his Amazon page: http://www.amazon.com/Charles-Stross/e/B001H6IW0Q/. Accelerando is probably his most praised book (and likely my favorite), but the ideas touched on in this blog post are explored in other works of his, most notably Glasshouse and Halting State.


Tuesday, June 21, 2011

txStatsD Preview

Sidnei da Silva (of Plone fame) has recently created a Launchpad project for an async StatsD implementation. He's got code in place for review by any Twisted kingpins who'd like to give it a glance.

StatsD was originally created in 2008 as a Perl implementation at Flickr for their statistics counting, timing, and graphing needs. Engineers at Etsy later ported this work to Node.js (which is the version Sidnei based his on). A few months ago a regular Python implementation was created (also based on the Node.js version).

More than another (excellent) addition to the tx family, txStatsD will provide folks with the luxury of collecting stats using a Python server without having to write any blocking code :-) Sidnei also implemented a graphite protocol and client factory for passing the messages along.
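For a sense of what's being implemented, the StatsD wire protocol itself is tiny: plain-text messages like `name:value|c` fired over UDP. Here's a minimal sketch of building and sending such messages with just the standard library -- note this is not txStatsD's API (and the helper names are my own), just the protocol it speaks:

```python
import socket


def counter(name, value=1, rate=1.0):
    """A StatsD counter message: 'name:value|c', with an optional sample rate."""
    msg = "%s:%d|c" % (name, value)
    if rate < 1.0:
        msg += "|@%g" % rate
    return msg


def timer(name, ms):
    """A StatsD timer message uses the '|ms' type suffix."""
    return "%s:%d|ms" % (name, ms)


def send(msg, host="127.0.0.1", port=8125):
    """StatsD listens on UDP (8125 by default); sends are fire-and-forget."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(msg.encode("ascii"), (host, port))
    finally:
        sock.close()
```

Because the transport is fire-and-forget UDP, instrumented code never blocks waiting on the stats server -- which is exactly what makes an async-friendly implementation like txStatsD such a natural fit for Twisted applications.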

Enjoy, and let him know what you think!

Wednesday, June 08, 2011

The Future of Personal Data: A Followup

The Next Step

A few years ago, I wrote a post about the Future of Personal Data as a result of all the ultra large-scale systems reading and exploration I was doing. Google was foremost in my mind when writing that, but Apple has since come into the spotlight here as well.

Recently, Matt Zimmerman decided to leave Canonical and join forces with Singly, an exciting startup company focused on secure user data storage and the socialization of (and development around) that data. In his blog post about the move, the following core values were given for the software underlying Singly:
  • I own my personal data
  • I want my data to be useful to me
  • I make the decisions to protect or share my data
I would like to see the following added:
  • I make the decisions on how my data is used
  • If my data is sold, I should get a return for this
This is not petulance speaking :-) This comes from a historical perspective on social and economic fairness. Witness the changes in individual rights and personal finance since the industrial revolution...


Fair Market Value

There are probably many solvent entities out there who would claim users are already getting a return for the use of their data: free email and office docs from Google, free cloud services from Apple, etc. But I would imagine that there are massive margins being made on user data, and services (as valuable as they may be) are a paltry return for such a gold mine. I cannot help but be reminded of the selling price of Manhattan Island or Alaska (in 2011 values, the Lenape Indians got around $29.61/square mile; the Russians got about $150.77/square mile).

Money makes a good point, but personal data (and this post) isn't about the almighty coin. This is about clearly defining who owns what and ensuring that those who don't want to be taken advantage of, aren't. This is about identifying exploitation, and building something better and longer-lasting in its place.


Who's Going to Pay for What?

In David Pakman's "Disruption" blog post about Singly, he makes the following comment:
"I cannot see consumers getting into the business of selling their data to marketers so they can see personalized advertising. Instead, I believe marketers will be encouraged to offer value to us in exchange for access to our data."
I have to agree with him... but I can only offer a qualified agreement. True, I find it hard to believe that users will be selling their data directly to marketers. From the user-side, the pain of inconvenience would not likely be worth the payoff; at the marketer end, individual data is useless, and munging an in-house-built collection would incur a lot of overhead not part of their core business.

However, users' data stored in lockers, updated regularly, pre-processed, has enormous value in the market. Right now, Apple and Google are making eye-crossing amounts of money from data just like this. Again, I would imagine that this data is only really valuable in large quantities and for interesting, identifiable demographics.

If users provide their data, but in exchange only get a "nicer app" or a "useful utility" I'm going to cry "foul!" (unless someone can show me the actual numbers involved and unequivocally prove that fair exchange is occurring).


You Say Disruption, I Say Revolution

Instead, if a service such as Singly were to offer a co-op style dividend payment system to all of its users, that would seem to be much more fair. Not only that, it would be the beginning of a market revolution. This is not to say that co-ops are some perfect economic model, but rather that the data we, as users, generate is of immense value. The more that Singly has, the greater the potential for revenue. The more buzz that builds around Singly users generating revenue from their data, the more users they get. With mass-adoption, a new sub-economy is born.

Perhaps a better model than co-op is that of a mutual fund investment firm. Each Singly user has a portfolio of data. Depending upon each user's preferences, any or all of that data could be used by Singly to generate revenue. Some groups of users will generate more than others, and users in these groups would get greater returns.

Whichever analogy you prefer, with a little exploration it seems fairly clear that opportunities for a large payout are present. For instance, I like to imagine a world where entities like Apple and Google can't harvest user data, but must go through brokers whom users have given their permission to sell their data for the most profit. I also imagine there's a lot of lawmaking that would have to take place... and even more lobbying.

Even with that, Google and Apple would still make money hand over fist (or they'd become brokers themselves) -- enough to continue providing free services. Yet at the same time, users would be in control of their data; they'd be financing (or financially augmenting) their data-consumption lives with said data.

With the right press coverage, Singly could find itself not only swamped with a massively growing user base, but at the very center of a new economy. With the right level of negotiation and coordination, businesses could buy into this new paradigm without losing their shirts in the disruption.


A Plea

In summary, I applaud the goals and vision of Singly. I, for one, would deeply appreciate writing applications against their data locker (as opposed to Facebook or any of its dubious applications), where a user's rights are clear and protected. That being said, Singly would have my eternal allegiance if they also took up the cause of rights for user data in the marketplace; if they helped transform the current nascent data economy into a world economy capable of achieving as-yet unimagined financial heights.

And if not Singly, my loyalty would go to whoever did.


Monday, May 23, 2011

Packt: A Publishing House for the Future

Since I first heard of them several years ago, I've viewed Packt as the underdog in the world of technical book publishing. In the past year or so, Packt seems to have gained greater and greater influence: their catalog continues to grow, they are attracting talented and knowledgeable engineers as authors, and their titles are things that I'm actually interested in.

Two examples of this are the books Expert Python Programming and Zenoss Core Network and System Monitoring. I received a copy of the former and blogged about my take on it. As for the Zenoss book, last year I agreed to be a technical reviewer for it, and I am currently preparing a blog post on my pre- and post-publication experiences.

In both cases, I agreed to work with Packt based solely on the technical merits of their works. However, my experience as a technical reviewer with them was so positive (I have had consistently excellent experiences with their staff over extended periods of time and across long-running conversations) that I have not only agreed to review more titles, but have also read up on Packt themselves a bit. Here are some highlights from their Wikipedia article:
  • They published their first book in 2004 (the same year Ubuntu started!).
  • Packt offers PDF versions of all of their books for download.
  • When a book written on an open source project is sold, Packt pays a royalty directly to that project.
  • As of March 2008, Packt's contributions to open source projects surpassed US $100,000 (I would love an updated stat on this, if anyone has a newer figure).
  • They went DRM-free in March 2009.
  • Packt supports and publishes books on smaller projects and subjects that standard publishing companies cannot make profitable.
  • Their streamlined business model aims to give authors high royalty rates and the opportunity to write on topics that standard publishers tend to avoid.
  • Bonus: they also run the Open Source Content Management System Award.
These guys have some key things going for them:
  • They've got what appears to be a lean approach to business.
  • They know how to effectively crowd-source, keeping their overhead low.
  • They reward the authors as well as the open source projects.
  • Their titles continue to grow in diversity and depth.
  • They have an outstanding staff.
Oh, and I really like the user account management on their website! When I log in, I see a list of the books I own, with source code links for each, all in a clean UI that is very easy to navigate. I can't emphasize this enough to vendors, service providers, etc.: if you want a loyal user base:
  1. make a good product that lasts a long time;
  2. make simple, great tools that enhance those products and truly improve the experience of your users.

All in all, Packt really appears to be a leader in publishing innovation, taking lessons learned from the frontier of open source software and applying them to the older business of publication production. I would encourage folks to evaluate Packt for themselves: if you like what you see, support them in readership and authorship :-)

I, for one, will continue to review titles that appeal to me personally and that I think others would enjoy as well. I have two books in the queue and three pending blog posts for the following titles:
And who knows, if I feel like writing a technical book at some point, you may see me in the Packt catalog, too ;-)