Wednesday, December 27, 2006

Blogging with ecto


Mark Hinkle has been exploring Flock (Mac OS X) for use with the forthcoming Zenoss blogs, and that reminded me of a brief conversation with radix about a year ago where we were discussing blog editors. Thus re-motivated, I did a little research and evaluation of blog editing/posting software. The one that seems to be the most complete and featureful (to me) is ecto. Getting it set up with the new Blogger seems a little awkward (the default "access point" doesn't work), but there's another version in the works where the kinks will be worked out (or be moot).



Here's what I had to do to get it (version 2.4.1 Universal + blogger beta support) to work with the new blogger API:


  • Hit the "Add Account" button on the Accounts window
  • Give my blog URL, as requested
  • Set the System as "Blogger"
  • Set the API as "Atom"
  • Set the Access Point as http://www.blogger.com/feeds/default/blogs
  • Be sure that all the RSS feeds are "full" (blog settings in Blogger admin UI)

ecto was then able to get a list of my blogs for various account names as well as recent entries for each one.



One problem I have run into, however, is the multiple accounts thing. I have to quit ecto and restart it in order to post to blogger under a different user name. It seems to be caching login info and not using the user name and password that the particular account is configured with.



Despite that minor inconvenience, I will probably purchase the full version of this software and begin using it extensively. The single driving motivation for this is the multiple account management feature: seeing all the accounts and blogs I have within one application acts as a reminder; I will be more likely to post to my other blogs now :-)



Tuesday, December 19, 2006

Back at Zenoss HQ

For part of this week, I've been flown back to Annapolis, MD for a few days, ostensibly for the holiday party, but also to coordinate in person with Mark Hinkle and Rusty Wilson on the infrastructure and code needed for the next six-month community push. In an effort to jump-start a whole set of community tools that Zenoss wants to provide, I've finished up some pretty sweet (yes, unit tests are sweet, and so is automated web application/form processing/monitoring) additions to Zenoss Core, and I am now switching gears to integrate community communications and work on the Super Secret Zenoss.net project (lots of Zope3 + Five + Plone hacking). Pretty fun stuff.

It's been really cool being back with the Zenoss core team this week -- lots of changes have happened since August: additional (and pretty sweet) office space, lots more people, and lots of awesome code being produced by Erik Dahl, Eric Newton and Chris Blunck that I haven't had a chance to see until now. There's a bunch more stuff that I can't let out of the bag yet, but it is most excitimentful. Well, it is if you're into monitoring and systems/network management... and the software that helps you do that :-)


Sunday, December 17, 2006

Google Projects adds Wiki Support

Well, Google has supplied the last little piece of functionality that will allow me to move all of my projects onto their infrastructure.

The best part about it is the integration they've done with svn. I've rigged up rsync scripts in the past when working on SourceForge projects that allowed me to maintain the content in svn. With Google projects, I now get that for free, without any extra overhead or maintenance.

To test this new feature, I began migrating content from the old pymon project page to the hosted project page on google. The wiki syntax is the one many of us have been using for years -- first with MoinMoin and then with trac. Everything worked flawlessly.

The only minor bone I have to pick is the view on the wiki page -- it lists the pages instead of taking one directly to the main page. The listing is good, but I'd prefer an actual page as the default. I've added a link on the main project page to the WikiStart page, though, and that helps a bit. Another nice-to-have would be an automatic "return to parent: ParentPage" link generated at the top of all new child pages.

All in all, I'm quite excited by the new feature and am looking forward to no longer having to host my own trac/svn anymore!


Monday, December 04, 2006

Twisted Web Site Gets a Facelift

I'd like to take this opportunity to invite folks to the new super lucky special auspicious yum-yum Twisted web site. And to offer massive gratitude to Huw Wilkins for donating the design and his time to this much-needed effort. As most people know, the Twisted site has been the ghetto boy for a while now, so we're all quite delighted to have made it out of The Projects.

We've still got a large number of tasks on our collective plate to polish everything up, and there will doubtless be new issues that need addressing. However, we will all be much happier looking at the site during the on-going work :-)


Wednesday, November 01, 2006

Mac OS X --> Solaris via Serial

My google-fu was pretty low tonight, and I had a hell of a time connecting to a Netra 240. After lots of searching, trial, and error, I'm now in like Flynn. Here's what I ended up using:
  • Sun Netra 240
  • Mac OS X 10.4, PowerBook G4
  • Keyspan USB/Serial adapter (P/N: USA-19HS)
  • Keyspan drivers
  • Sun-provided RJ-45 serial cable (I think the part number is 530-2093-01)
  • minicom (installed via Fink)
Instructions for getting connected with minicom:
  • Start minicom from the command line
  • Hit "Ctrl-A" and then "z"
  • Hit "o" to adjust the configuration
  • Down-arrow to the "Serial port setup" menu and select
  • Make sure that you're pointed at the right serial device and that your speed/parity setting is "9600 8N1"
  • Hit "enter" to save and go back to the menu
  • Select "Modem and dialing" and change the "Init string" to "~^M~"
  • Hit "enter" and select "Save setup as dfl", exit, and restart minicom
Here are the gotchas I ran into:
  • be sure to reboot OS X after installing the adapter drivers
  • be sure to use the right cable (I was using a Cisco RJ-45 serial cable, and it seems to be pinned differently)
  • be sure to use minicom (I tried with ZTerm, but I just don't grok GUI apps)
Once all these were in place, it was a piece of cake. With a BREAK (no keyboard for the Netra, so no "Stop-A") during boot-up, I'm ready to wipe this puppy and install Solaris 10.
Damn, Sun makes solid hardware. It's been years since I've played with their stuff. Even if Python apps don't run so well on Solaris, it's still going to be good to use these machines again.

Update: Added step-by-step instructions for connecting with minicom.


Saturday, October 14, 2006

Nevow for Rapid-Deployments

I think I remember Guido making the comment that Twisted and/or Nevow scared him. I also think that (ignoring the causal effects of his statement) many people feel the same way, and if those people saw this blog post, they'd say I was deluded.

I've been using Nevow (on and off) since fzZzy first presented it at PyCon2004. I've been blogging about it since a year after that.

My first "web framework" was shell-script-generated HTML in the mid-90s. Then I found PHP. Zope. Perl (Interchange and Mason). Plone. Then a whole string of custom solutions.

I tried Django when it first came out, and hated it. I tried it again recently, and was most pleased with the progress they've made. I'm using it on a project with some non-programmer folk who wanted to convert their application from PHP to Python (I couldn't sell them on twisted). After digging around in the Django code base, I can safely say I will probably enjoy myself while working on the project.

I've also been a fan of the z3 CA for a couple years, and disagree with much of Ian Bicking's recent assessment of it (and agree very strongly with Martijn's comments). I continue to do work with z3 and enjoy almost all of it (being able to pick and choose helps greatly).

But when it comes down to it, when I need to roll something out fast, and I can choose the framework without concern for such things as technical support, community popularity, buzz, or political considerations, I choose Nevow. Case in point: the project I mentioned in this post. 30 minutes to casually convert a project from z3 to Nevow.

Though important, development speed is not the only consideration to make. Everyone talks about how fast you can roll a project out in web framework X, etc. But there's something that has a much higher priority for me: how insane will it let me be? Can I do anything I want? Once it's built, can I plug "stuff" in and out? Can I make unexpected changes easily and quickly, without compromising the integrity of the architecture? And even more: can I build my own system(s) with it?

And that's where Nevow cinches it for me. I am most comfortable with its design, templating, internals, and programmer freedom. I feel I have a little more freedom and flexibility with it than I do with z3. Nevow provides me with the tools and comfort level to build my own systems easily, quickly, and extensibly.

As an example, take this work in progress.

I've built a couple of game-world oriented sites in the past. The first one used PHP and the second two used Plone. In each case, it was very difficult to easily manage what I wanted to manage. I learned a lot of lessons about how information management works for me. When I started working on Myðgarður, I put those lessons into effect. Nevow let me do that.

I have a highly customized brain that needs things done in a certain way in order to be maximally productive. I think lots of brains are like that ;-) I want a framework that reflects my brain's needs, not only to deliver a result quickly, but to deliver it in a way that fits me best... and can adapt to the future best.

As a side note, Nevow is going through an overhaul (more) right now that will not only improve its general efficiency, but will actually fit my brain even better. When the context-less Nevow is released, I plan on producing at least one screencast on how to go from 0 to 60 with it in less than 20 minutes.

So stay tuned...


Friday, September 15, 2006

A New Experience

As I told Bill Karpovich today, I don't think I've ever had this happen before: add a feature to software and then have someone on a major news site blog about it.

To be fair, the Zenoss objects-to-XML (and vice versa) code was written quite a while ago by Erik Dahl. What I finished this week was providing a means of exporting Zenoss templates (RRD and Nagios) and then importing them from the file system or a URL (as well as some initial UI plumbing).

The code may sound mundane, but when you have a highly active, intense, and enthusiastic user base, writing code that makes their lives easier is a profoundly satisfying activity. Because, inevitably, you get emails and IRC comments like "you guys are awesome!" and "we love you!"

The next level of deployment will likely include a dedicated mail list for users to share the systems management/monitoring templates (and download them directly into Zenoss). This will be a transitional feature while we continue working on the portal where users will have the ability to upload, edit, and publish (share) their templates, and then download these templates through the Zenoss management UI.

I'm having a great time with these guys -- an awesome team with an incredible product and an amazing community :-)

Now playing:
Muse - Darkshines


Monday, September 11, 2006

Work Break

Tired of coding twisted? Need a break to relax? Play a game...

Now playing:
Muse - Butterflies and hurricanes


Tuesday, September 05, 2006

Python and Monitoring

One of the first things I did when I got back home on Sunday was check the laptop I left at home... and sort the 1800+ emails, spam a-taunting. Of the odd hundred or so that were valid communications, one was from a friend with a heads-up/contact about RedHat's new effort to break into the burgeoning monitoring market. Apparently, they are looking for Python talent: coders with a proven background in providing systems management and monitoring solutions.

I find this interesting not only because of my long involvement in the monitoring arena (including the pymon and CoyMon projects), but also because of my recent work with Zenoss and the contact I have had with competitors in the field over the past year. My interest in architecting efficient and fast monitoring systems led me to develop pymon in twisted. This was, in part, a reaction to my not-so-happy experience with NetSaint/Nagios. On the other hand, CoyMon was positively inspired by Cacti, with an aim to give similar usability to NetFlow data. I have found it instructive to discover the various impetuses that drove me to develop monitoring solutions and to find what inspired others to do the same. I wonder what is driving RedHat. Business need? Business opportunity? Chronic issues with their current system?

On a side note, I did some more work on pymon this weekend, and I am really happy with the way the internals of it are progressing, primarily
  1. how damned fast it runs, and
  2. how easy it is to setup and configure.
Most of all, though, I find the design increasingly flexible and elegant. There's still some horrible crud tucked in the older crevices, but that stuff's on its way out. pymon is not explicitly systems management software, but rather a service-checking application. A part of me is curious, though, about what could be built using pymon's design principles. There could be some interesting possibilities to explore there.

Anyway, back to the topic: systems monitoring and management is a huge market, and it is getting a lot of development effort and investment attention. When the dust settles in a year or two, it will be most interesting to see where all of this leads and which code bases/companies are left standing.

Now playing:
Muse - Ruled by Secrecy


Roadtrip

Sunday I returned from a two-week road trip through Wyoming, Montana, Idaho, Washington, Utah and then back to Colorado. Though the scenery was amazing, the best part was the actual purpose of the trip: seeing friends, some of whom I hadn't hung out with in 15 years.

Of the many very cool things that happened on the trip, one of the things I can share in a public forum is the RPG experience I had in Pullman, WA visiting my dear friend Jim Roach. He and his good friend Jacob DMed/GMed two sessions each, and it was just phenomenal. Jacob's was a Ravenloft campaign I got to join with a 54-year-old character named Li Po (a monk) on a journey to Romania in the late 1800s. The other was a GURPS character -- a kilt-clad goblin samurai. I haven't gamed in sessions like that in over 10 years, and it was so much fun I will probably be joining them again via web cam.

Upon my return to Colorado, I had a new Sector 9 bamboo longboard waiting for me. Two days and no spills. I'm totally not into tricks, but damn, I love to cruise and slalom. I've had a delightful time getting reacquainted with my skating muscles, though my body currently disagrees, as I can barely walk up the stairs right now.

Now playing:
Muse - Plug In Baby


Monday, September 04, 2006

Knights of Cydonia

This is the BEST VIDEO EVAR.

How can you possibly beat it? It's got kung fu, tai chi, spaghetti western, gun fighting, post-apocalyptic America, laser guns, holograms, and rock and roll. Take the best parts of the 70s (pop-culturally and ideologically) and pack as much of that as you can into a 6-minute video, and this is what you get. It's the music video equivalent of this poster.

And it's now responsible for making me a Muse fan.


Sunday, August 13, 2006

Python and kcachegrind

I've recently needed to profile some very subtle issues that cropped up in a customer's python application. However, when I tried to use hotshot, I consistently got tracebacks. After some digging around on the net, I saw folks saying that profiling is basically busted in python2.4 (and then I remembered Itamar saying basically the same thing at PyCon 2006 when we were looking at web2 slowness).

To get around this, I built python2.5 from svn and copied its cProfile, _lsprof and pstats files to my python2.4 libs. This was a complete desperation move and I totally didn't expect it to work -- but it did (with only a warning about a version mismatch).

Earlier this year, JP and Itamar updated an lsprof patch to work as a standalone. However, I've never done any profiling in python, so it took a few minutes to get up to speed. Looking at the patch source and the python2.5 cProfile docs and then doing the usual dir() and help() on cProfile.Profile in the python interpreter is what helped the most.

To give others new to profiling a jumpstart, I'm including a quick little toy howto below.

Import the junk:
>>> import os
>>> import cProfile
>>> import lsprofcalltree

Define a silly test function:
>>> def myFunc():
...     myPath = os.path.expanduser('~/kcrw_s')
...     print "Hello, world! This is my home:"
...     print myPath
...

Define a profile object and run it:
>>> p = cProfile.Profile()
>>> p.run('myFunc()')
Hello, world! This is my home:
/home/kcrw_s/kcrw_s
<cProfile.Profile object at 0xb7c87304>

Get the stats in a form kcachegrind can use and save it:
>>> k = lsprofcalltree.KCacheGrind(p)
>>> data = open('prof.kgrind', 'w+')
>>> k.output(data)
>>> data.close()

You can now open up the prof.kgrind file in kcachegrind and view the (in this case, very uninteresting) results to your heart's content.
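If you don't have kcachegrind handy, the same profile data can also be inspected with nothing but the stdlib pstats module. Here's a minimal sketch (written in modern Python syntax, with a made-up workload function):

```python
import cProfile
import io
import pstats

def my_func():
    # A trivial workload to profile
    return sum(i * i for i in range(1000))

profiler = cProfile.Profile()
profiler.enable()
my_func()
profiler.disable()

# Sort by cumulative time and show the top five entries
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('cumulative').print_stats(5)
print(stream.getvalue())
```

The stats object gives you the same call counts and timings the kcachegrind view is built from, just as text.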


Sunday, July 30, 2006

Twisted Mail Server


Update: The saga now has a conclusion! See this blog post for details.

Well, these last two months have been hell. Working on sucky projects just plain sucks. It ate up so much of my time that I've got an unhealthy backlog of blog posts waiting for release into the wild.

The first post I am compelled to write is on my new mail server. In the past, I've used sendmail, qmail and postfix. I have not been happy with any of those (though I did really like qmail, until it got too cumbersome to keep it updated). I didn't have the level of control I wanted -- and this was solely because I couldn't fit my brain into those applications (though, again, qmail came the closest).

I decided to chuck it all, and write my own mail server in twisted, using the pre-built lego code that twisted offers for this sort of thing. I've been running the server for about a month now, and all I can say is "wow". Just WOW.

The level of control I have over the operation of my mail server is insane. I can get this thing to do exactly what I want, when I want. I've got a bazillion domain names for which I (or others) receive email. I was able to write the code that lets me handle that the way that makes sense for ME (and *not* the author(s) of Postfix, etc.).

Today, I needed to add support for aliases that were actually lists of recipients. One "if" statement and an additional implementation of smtp.IMessage later, it was operational. I don't know how I ever ran a mail server any other way.
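The alias logic reduces to something like the following toy sketch (the alias map and addresses here are invented for illustration; the real version lives inside an smtp.IMessage implementation):

```python
# Illustrative only: a toy version of the alias-expansion idea, not the
# actual Twisted mail server code.
ALIASES = {
    'webmaster': ['alice@example.com', 'bob@example.com'],  # list alias
    'info': 'alice@example.com',                            # simple alias
}

def expand_recipient(local_part):
    """Return the list of real recipients for a local address."""
    target = ALIASES.get(local_part, local_part)
    # The one "if": a list alias fans out to multiple recipients
    if isinstance(target, list):
        return target
    return [target]

print(expand_recipient('webmaster'))  # two recipients
print(expand_recipient('carol'))      # unknown names pass through
```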

I've been testing my mail server all month, and it's running beautifully. It has continued to be free of relay issues and spammer attacks. I couldn't be happier with the results.

Now that I am feeling more secure in the custom code, I'm ready to start adding additional features I need:
  • white listing: automatically updated with the address of people to whom I send email
  • black listing: I am starting to maintain a list of the most heinous offenders in my junk mail box; these will be regularly pushed to the mail server
  • greylisting: I have begun planning an implementation of greylisting, but this will take some time to get right
  • spammer databases: I am considering using one or more of these. My only problem with them is that I don't trust them. I don't want to block someone inadvertently just because they were unlucky enough to have one of their boxes hijacked into becoming an open relay 3 years ago.
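For the greylisting item, the core idea can be sketched in a few lines of plain Python (a hypothetical illustration of the technique, not the planned implementation):

```python
import time

# Sketch of greylisting: temp-fail the first delivery attempt for an
# unknown (peer-IP, sender, recipient) triplet, and accept retries that
# come back after a delay. Names and values are illustrative.
GREYLIST_DELAY = 300  # seconds a triplet must wait before acceptance

seen = {}  # triplet -> timestamp of first attempt

def check_greylist(peer_ip, sender, recipient, now=None):
    """Return True to accept the message, False to temp-fail (451)."""
    now = time.time() if now is None else now
    triplet = (peer_ip, sender, recipient)
    first_seen = seen.setdefault(triplet, now)
    return (now - first_seen) >= GREYLIST_DELAY

# First attempt is deferred; a retry after the delay gets through.
print(check_greylist('203.0.113.5', 'a@x', 'b@y', now=0))    # False
print(check_greylist('203.0.113.5', 'a@x', 'b@y', now=600))  # True
```

Legitimate MTAs retry; most spamware doesn't, which is the whole trick.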
Having a mail server that runs on twisted seemed a little daunting at first. I feared maintenance and security burdens; however, I have already begun reaping the benefits, and my fears have been shown to be baseless. I spend 1/10th the maintenance time on the twisted version. I have *fun* while updating the server or configurations. I can plug my own code into it instead of using third-party applications I don't like or patches I don't understand.

My first exposure to twisted was late 2002 as I was writing my first "real" python script (a networking script, naturally). Since that time, twisted has integrated itself into my life such that I can't imagine living without it. I literally use it for all of my coding activities: my professional life depends upon it nearly 100%, and 50% of my entertainment is derived from programming activities, all of which incorporate some aspect of twisted.

I, for one, welcome our twisted overlords.

Now playing:
Bagpipes - Flow Gently Sweet Afton

Tuesday, June 20, 2006

Trac Spam

It seems I was the last to find out about the "trac spam" phenomenon. I've got a bunch of projects that use trac here, and as a result, I've been cleaning up the huge mess these bastards left on the wiki. Gotta love page history, though...

The pages that are most viewed are the ones with code examples on them, and the spammers were kind enough to delete that code, making the pages useless. The code is now restored and can be referenced again. In particular, the following projects were impacted most heavily:


I've now got the wiki set for auth users only. If you want to contribute to the wikis, just send me an email.


Monday, June 19, 2006

Nevow Needs a Screencast

Nevow needs a screencast. Because it's so damned hot and people need to see it in action. They need to see how frickin' amazing it is.


Case in point: on a *whim* I decided to migrate a site I built last year in z3 to nevow. I had a spare hour, so what the hell. It took me *minutes*. And I don't mean 6487 minutes -- I mean 30 minutes. Here's what I did:

  • Copied HTML output from the other site and used it as the basis for an XHTML template
  • Copied CSS and images from the other site
  • Wrote a backend data store that uses RFC 2822 (python's email package)
  • Wrote a ui.UI class, subclassing nevow.rend.Page (including children, data calls, etc.)
  • Added data and render calls in the XHTML template
  • Wrote a .tac file
  • Populated 5 RFC 2822-compliant files with data and headers for the content
  • Started up the server and watched the new site scream



Let me emphasize a crucial point here: *none* of that was stubbed. No helper scripts. No installers. There was pre-existing HTML, CSS and images, and that's it. I did some dicking around and tweaking in the process, so I'm *sure* I could do the whole thing -- from start to finish -- in less than 20 minutes. Probably 15.
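The RFC 2822 data store from the list above can be sketched with nothing but the stdlib email package (the header names and file contents here are invented for illustration):

```python
from email.parser import Parser

# A made-up content file: RFC 2822 headers for metadata, body for content.
RAW_ENTRY = """\
Title: Welcome
Author: Duncan
Date: 2006-06-19

This is the body of the content entry, stored as an
RFC 2822 message: headers for metadata, body for content.
"""

def load_entry(text):
    """Parse one content file into (metadata dict, body string)."""
    msg = Parser().parsestr(text)
    meta = dict(msg.items())
    return meta, msg.get_payload()

meta, body = load_entry(RAW_ENTRY)
print(meta['Title'])
```

One plain-text file per content entry, free parsing courtesy of the stdlib -- that's the whole "backend".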

I don't understand why more developers aren't using Nevow. It just kicks ass. Yeah, every platform has its issues, and Nevow's no exception. But DAMN. I get record-setting development times with it, it runs fast, and it's insanely flexible.


Saturday, June 03, 2006

Sets as an Elegant Alternative to Logic

python :: programming :: math



A few years ago I was working on a sub-project of CoyoteMonitoring where we were writing a database "layer" for the file system, essentially allowing us to query for files using standard SQL commands. We needed to do some exclusionary logic for filtering bad user input. Though mathematically equivalent, I deal with Set Theory and Logic very differently: I'm a set guy, and I really hate using series of if/else statements for problems that lend themselves easily to an approach that is "set theoretic" (we're talking super-mundane set stuff -- nothing really sexy).

On IRC, I was gently preparing a fellow project developer for some code changes I had made that eliminated a series of ugly ifs:

 
[03-Oct-2004 02:02] you know, we could do this really quickly with sets :-)
[03-Oct-2004 02:02] check this out:
[03-Oct-2004 02:02] lets say our table has 4 fields:
[03-Oct-2004 02:03] ['A','B','C','D']
[03-Oct-2004 02:03] and we're doing a select for ['D', 'A', 'C']
[03-Oct-2004 02:05] and lets say we've got a bad query ['Z', 'B', 'C']
[03-Oct-2004 02:07] then, if we perform a set difference on select tables against defs, the returned set will be zero
[03-Oct-2004 02:07] and if we do the same with bad against defs, we'll get a non-empty set, in this case a set with one element, 'Z'


I then shared some python 2.3 code (no built-in set object) that basically illustrated the changes I had made. For me, the results were much more readable. The code was certainly much shorter.

Since then, I've found all sorts of uses for this approach. I've recently made use of this in a Nevow project that has to authenticate off of a Zope instance. There are a series of group and role names that can be qualified under one of three "group-groups" (tiers):


# setup sets for each class of user
admins = set(cfg.user_database.admin_roles)
libs = set(cfg.user_database.tiertwo_roles)
vols = set(cfg.user_database.tierone_roles)
# and then a convenience collection
legal_roles = admins.union(libs).union(vols)

Later in the code, we do checks like this:


if admins.intersection(avatarRoles):
    return IAdministratorResource

I like this much more than explicitly checking for the presence of elements in lists.
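With the built-in set type (Python 2.4 and later), the check from that IRC exchange reads like this:

```python
# The IRC exchange above, as runnable code: validate a query's column
# list against the table definition with a set difference.
table_fields = {'A', 'B', 'C', 'D'}

good_query = ['D', 'A', 'C']
bad_query = ['Z', 'B', 'C']

def invalid_fields(requested, defined=table_fields):
    """Return the requested fields that don't exist in the table."""
    return set(requested) - defined

print(invalid_fields(good_query))  # empty set: the query is fine
print(invalid_fields(bad_query))   # contains 'Z': reject or filter it
```

One expression replaces the whole ladder of if/else membership checks, and the non-empty result even tells you *which* fields were bad.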


Baz Camp, Day 1

hacking society :: programming :: colorado



Well, the folks at tummy.com have done it again :-) Baz camp is rocking. We're right on the lake, with beaches and 90-degree weather in Loveland, CO... so you can imagine that the view is most agreeable :-) We not only have wireless access donated by the most gracious local provider LP Broadband, but some of the boys brought a big-screen projector for outdoor movies at night as well as satellite TV. So yeah, we're *really* roughing it.

We are certainly roughing it more than the folks at Foo and Bar Camp!

We've got a camp cook who's making meals for us (including 1:00am chili!) and the menu has been just great. We've got shaded gazebos up, and tents. We watched Primer and Hackers last night (Hackers was enjoyed a la MST3K). Sean shared Icelandic treats he brought back from the Need for Speed sprint, including fish jerky. Yummy.

I also happen to be getting a lot of work done, too :-) All offices should operate like Hack Camp.

Hey Radix: why aren't you here? Evelyn Mitchell and I were talking about fellow social geeks and agreed that this would be just your style :-)


Sunday, May 07, 2006

Twisted JSON-RPC TCP Proxy and Server

twisted :: python :: internet






Yesterday I finished the Twisted JSON-RPC server and proxy for TCP. I decided to go with the Netstring protocol for its simplicity and the security of declaring and limiting string length.
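Netstring framing itself is simple enough to sketch in a few lines of plain Python (an illustration of the wire format only, not Twisted's implementation):

```python
# Netstrings frame a payload as <decimal length>:<payload>,
def netstring_encode(payload):
    """Frame a byte string as a netstring."""
    return b'%d:%s,' % (len(payload), payload)

def netstring_decode(data):
    """Parse one netstring, returning (payload, remaining bytes)."""
    length, _, rest = data.partition(b':')
    n = int(length)
    if rest[n:n + 1] != b',':
        raise ValueError('missing trailing comma')
    return rest[:n], rest[n + 1:]

framed = netstring_encode(b'{"method": "echo", "params": ["hey!"]}')
print(framed)
print(netstring_decode(framed)[0])
```

The declared length is what makes it safe: the receiver knows exactly how many bytes to expect (and can cap it) before reading the payload.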

Usage is very simple and almost identical to the HTTP-based Twisted JSON-RPC usage. Here is some server code:


from twisted.application import service, internet

from adytum.twisted import jsonrpc

class Example(jsonrpc.JSONRPC):
    """An example object to be published."""

    def jsonrpc_echo(self, x):
        """Return all passed args."""
        return x

factory = jsonrpc.RPCFactory(Example)
application = service.Application("Example JSON-RPC Server")
jsonrpcServer = internet.TCPServer(7080, factory)
jsonrpcServer.setServiceParent(application)


And for the client:

from twisted.internet import reactor
from twisted.internet import defer

from adytum.twisted.jsonrpc import Proxy

def printValue(value):
    print "Result: %s" % str(value)

def printError(error):
    print 'error', error

def shutDown(data):
    print "Shutting down reactor..."
    reactor.stop()

print "Making remote calls..."
proxy = Proxy('127.0.0.1', 7080)

d = proxy.callRemote('echo', 'hey!')
d.addCallbacks(printValue, printError)
d.addCallback(shutDown)
reactor.run()

For a slightly more complex example (with subhandlers and introspection) see the wiki.

Next on the list? For a truly secure solution, I am exploring the use of the Twisted Perspective Broker for JSON-RPC.


Now playing:
Yoko Kanno & The Seatbelts - Forever Broke

Friday, May 05, 2006

More Twisted and JSON-RPC

twisted :: python :: web






A recent comment on my Twisted JSON-RPC blog entry put this near the top of my queue. I posted a comment in response to the request for a download link (see my comments about that).

Please read those comments before downloading this, so you are aware of the connection (or lack thereof) to official twisted code, as well as my reservations on the matter. In addition to providing this link, I will do the following: create a twisted.web2 version of the JSON-RPC code this weekend, make an attempt to define a sensible twisted protocol for JSON-RPC over TCP, and (time permitting) begin work on a twisted TCP JSON-RPC server and client. I have created a project space for this work here.

Update: There is now a twisted.web2 version of the JSON-RPC server available on the project page (all unit tests pass). There is an example server.tac file you can run as well as an example twisted client. Some time later tonight -- or rather, very early this morning (2006.05.06) -- I will put some eggs up and provide a link on the project page.




Now playing:
Glenn Gould - The Art of Fugue, BWV 1080: Contrapunctus XI (a 4)

Friday, April 28, 2006

Group Access in Twisted

This is a rant -- a positive one. twisted.cred is freaking brilliant. I've had to use it in the past to write my own credential checkers, so I've dabbled a bit. I was thrilled then because of the ease with which I was able to glue systems together. But tonight, I needed to add last-minute support for group access control to a twisted/nevow application and nevow resources that use JSON-RPC. The customer now wants different page views/menus for different classes of user; in addition, they have a new set of RPC methods that should only be accessible to privileged users.

Typical nightmare situation, when it comes to last-minute tasks, right?

Not with twisted.cred, it isn't. Basically, all I had to do was create an interface for each group that needed to be represented. I then did the following:
  • updated the function that instantiates the RPC parent and subhandlers, instantiating the right ones based on the passed interfaces
  • updated the avatar realm to choose the correct interface for a given group type
  • subclassed the root page for each group that needed a different page
I didn't have to touch the credential checker since it was already getting the group info (I *knew* the customer was going to ask for something like this, even though it wasn't in the reqs).

The interfaces, a few methods (implements/implementer, providedBy), and the amazing functionality provided by twisted.cred -- that's all that was needed. I've never written my own access control code before, and it took less time to actually implement the thing with cred than the "simple" mere configuration that other systems require. Really. It went so quickly and smoothly that I spent the time saved adding some nifty features that take advantage of these changes.


Tuesday, April 25, 2006

Python-esque Imports in JavaScript

programming :: javascript :: web



Caveat 1: It has been several years since I've been forced to work with JavaScript. Caveat 2: This has either been done before and considered stupid, or done before and done better. If so, I'm sure I'll hear about it :-)

And before I get started, I've been meaning to blog about
MochiKit. I've got all sorts of good
things to say about it, but I'm so busy using it that I'm not sure I'll
ever get around to it. Among the many reasons that people rave about
MochiKit, the first and foremost should be what the
MochiKit team has done for JavaScript: coding convention. Even
though there are lots of stylistic inconsistencies in the various
MochiKit libraries, compared to JavaScript in the wild, it is a
completely unified whole. Wild JavaScript is generally pure shite.

Back to the topic: the project I am working on right now has multiple
"screens" that are loaded depending on user interaction. Pretty common
fare. But I really didn't want to load all the DOM manipulation stuff
in a series of *.js files on page load. There are WAY too many files for
this. So I wrote importLibrary().

My first attempt at writing this function was to append script tags to
HEAD with the proper source information in it. Which works...
eventually -- the js source just isn't available immediately. So here's
what I did:


// Fetch the script synchronously (getXMLHttpRequest() is MochiKit's
// cross-browser helper), then eval it so the code is usable as soon
// as importLibrary() returns.
var req = getXMLHttpRequest();
req.open('GET', file, false);  // false = synchronous request
req.send('');
eval(req.responseText);


Now, as far as I know, there is no reason to object to the "eval()"
call here, since this is just the same thing (again, as far as I know)
as when your browser downloads the file by itself. This filename is not
parsed from a URL or derived from any human input.

In addition, I wrote a little function for parsing the import
parameters so that


importLibrary('MyProject.ThisSection.ThisScreen')

maps to a file available at an arbitrary (pre-determined per-project)
location off of docroot on the web server.




The import itself has to be done synchronously to ensure that JS code
is available to everything after the call to import. Right now, I
have it do a simple check to see if the js file has already been
imported and, if so, skip it. I would like to add some kind of "use
queue" as well, where most frequently clicked screens are bumped to the
top (or bottom), and those least visited are pushed off the queue and
then not maintained in HEAD.

This has really enabled me to organize the code for the project in a
sane way.



Now playing:
Glenn Gould - The Art of Fugue, BWV 1080: Contrapunctus IX (a 4, alla Duodecima)

Thursday, April 20, 2006

The Joys of IRC

twisted :: python :: internet


And in particular, #twisted. Here's a great little story (and moral)
from Moshe Zadka:

[13:41]<moshez> dreid: did I tell you about the watchdog
[13:41]<dreid> moshez: no
[13:42]<moshez> dreid: oh, man it is awesome
[13:42]<moshez> dreid: ok, so it goes like that
[13:42]<moshez> dreid: there are a bunch of components
[13:42]<moshez> each component has any number of heart-beats
[13:43]<moshez> a heart-beat is a "heart" (opaque string) and a "beat"
(a number)
[13:43]<moshez> the heart-strings are sent, as UDP packets, to the EKG
port
[13:44]<moshez> the watchdog launches all the components, and then
watches the EKG
[13:44]<moshez> if any component has a heart which doesn't beat, it
starts fixing the problem
[13:44]<moshez> the first few times, it will shut down all components
and bring them back up
[13:44]<Tv> moshez: when does the dog eat the heart?
[13:45]<moshez> if it sees it needs to do that too many times, it will
decide "patient is dead" and reboot the system
[13:45]<moshez> if it sees it needs to reboot the system too many
times, it will decide "patient is stupid" and stop curing
efforts
[13:46]<dreid> hehe
[13:46]<dreid> i like that last part
[13:47]<foom> so stupid is worse than dead, eh?
[13:47]<dash> foom: there aren't any stories about voodoo curing stupid
[13:48]<moshez> foom: yes. you might be able to resuscitate, but you
can't cure stupidity
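The escalation policy in the story can be sketched as a toy state machine. The thresholds, method names, and messages here are made up for illustration; this is not Moshe's actual watchdog code:

```python
class Watchdog:
    """Escalate: restart a few times, then reboot, then give up."""
    def __init__(self, max_restarts=3, max_reboots=2):
        self.restarts = self.reboots = 0
        self.max_restarts, self.max_reboots = max_restarts, max_reboots

    def heart_stopped(self):
        # First few times: bounce all the components.
        if self.restarts < self.max_restarts:
            self.restarts += 1
            return 'restart components'
        # Too many restarts: patient is dead, reboot the machine.
        if self.reboots < self.max_reboots:
            self.restarts = 0
            self.reboots += 1
            return 'patient is dead: reboot'
        # Too many reboots: patient is stupid, stop curing efforts.
        return 'patient is stupid: stop curing'
```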


Now playing:
The Bothy Band - Julia Delaney

Sunday, April 02, 2006

Nevow + WYSIWYG Editors

nevow :: formal :: javascript




Due to pressing needs of a couple projects, I finally sat down today
and reviewed some of the JavaScript WYSIWYG textarea widgets/HTML
editors. Because of my exposure to Zope and Plone,
Kupu and
Epoz
were the ones that leapt most readily to mind. A quick google exposed
two others that I had forgotten about:
TinyMCE and
FCKEditor.

The nice thing about the latter two is the fact that they can more or
less be used with any web-based application. Kupu is striving for that,
but (as far as I can tell) it's still somewhat of a pain integrating
it. Tiny and FCK have options for integration via simple JavaScript
hooks. My preference is for Tiny, given that it loads WAY faster,
provides exactly what I need, has a code base orders of magnitude
smaller (only kidding a little bit), and has a cleaner look.



I'm a pretty big fan of the
formal
library (formerly known as "forms") by Matt Goodall of
pollenation.
I've used it a fair amount when building Nevow apps and the only
limitation I've personally run across involved trying to get it to work
with cred logins (probably *my* limitation). Today, I needed to add
support for WYSIWYG editors, and it was super easy. I wrote up some
examples on the wiki
here,
but it really just boils down to adding the JavaScript hooks in the
HTML "title" (using stan or *TML templates). The image attached to this
blog post is a screenshot of the "formal" example I put together to
demonstrate functionality. For the curious, here is my set of
TinyMCE toolbar customizations.



It really is the little things, though. Users of the Nevow apps where I
will be adding this are now ecstatic. And to be honest, I'm very stoked
too. It's almost as much fun as MochiKit, but that's for another blog
entry...


Saturday, March 25, 2006

Twisted JSON-RPC

My clients/partner companies use web services a great deal. To be honest, WS have made my life easier in many respects... but they can be a real pain. And implementations can be pretty lacking...

For reasons I won't get into (business, politics, and AJAX), I discovered today that I needed to convert an XML-RPC server to a JSON-RPC server. I looked at several implementations and they were either not general enough for my use, or they were horrible.

So, I hacked a copy of twisted.web.xmlrpc and turned it into jsonrpc using the simplejson library. Right now, it's doing JSON over HTTP, but I fully intend to write a TCP implementation as well. The problem, though, is this: as I was putting this together today, all I could think about was ways to make the code general enough to provide a common basis for use in implementing *-RPC. Ah, down that path lies madness. And it's one of those things you just can't avoid thinking about...
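The hacked twisted.web.xmlrpc code itself isn't shown here; as a dependency-free sketch of the core dispatch idea (JSON-RPC 1.0-style request in, `jsonrpc_*` method looked up by name, wrapped result out), using the stdlib json module in place of simplejson:

```python
import json

class JSONRPCHandler:
    """Minimal sketch: dispatch a JSON-RPC request to jsonrpc_* methods."""

    def jsonrpc_echo(self, msg):      # an example exposed method
        return msg

    def handle(self, body):
        req = json.loads(body)
        # Method names are namespaced with a prefix, mirroring the
        # xmlrpc_* convention in twisted.web.xmlrpc.
        method = getattr(self, 'jsonrpc_' + req['method'])
        result = method(*req.get('params', []))
        return json.dumps({'result': result, 'error': None,
                           'id': req.get('id')})
```

A real server would wrap `handle()` in an HTTP resource and add error handling; this just shows why the conversion from XML-RPC is mostly a serialization swap.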

I'm currently writing some twisted.trial tests for it, but I also need to add some more stuff to make this generally useful (not to mention easier to debug). Hmmm, I'm actually really looking forward to doing a twisted TCP implementation of JSON-RPC. That should be fairly fast. And clean. Maybe.

And, of course, I'm sure I'll do a twisted.web2 implementation as well.

I have an amazing headache now, and need to get some food.

Update: All twisted.trial tests are passing and I am running a twisted JSON-RPC server now.

Update: For those wanting to use this code, please read the comments on this entry (dated May 5, 2006) and then see this post.

Technorati Tags: , , , , ,

Friday, March 24, 2006

Audio Fingerprinting

music :: technology



Here are a collection of links that I thought might be useful for folks
exploring audio finger printing technology. I've had this sitting
around in my drafts for a while, thinking I might do more with the
info, but... not. So here it is ;-)

MusicBrainz and TRM


http://blog.musicbrainz.org/archives/2005/10/acoustic_finger.html
http://blog.musicbrainz.org/archives/2005/09/general_update.html

Alternative to TRM


http://www.w140.com/audio/

MusicBrainz developing TRM Replacement



http://lists.musicbrainz.org/pipermail/musicbrainz-devel/2005-October/001388.html
http://lists.musicbrainz.org/pipermail/musicbrainz-devel/2005-October/001432.html
http://chatlogs.musicbrainz.org/2005/2005-10/2005-10-10.txt
http://lists.musicbrainz.org/pipermail/musicbrainz-devel/2005-October/001440.html
http://lists.musicbrainz.org/pipermail/musicbrainz-devel/2005-October/001447.html
http://lists.musicbrainz.org/pipermail/musicbrainz-devel/2005-October/001452.html
http://lists.musicbrainz.org/pipermail/musicbrainz-devel/2005-October/001466.html
http://lists.musicbrainz.org/pipermail/musicbrainz-devel/2005-October/001478.html


Now playing:
Mannheim Steamroller - The Second Door

Wednesday, March 22, 2006

twisted.web2.xmlrpc How-To

twisted :: programming :: documentation



Last night, Sean Reifschneider and I sat together at
hacking society. He got a new
release of
vPostMaster
ready, and I started converting the
twisted.web.xmlrpc How-to to
twisted.web2.

It was pretty easy, really. The major difference in implementation is
that xmlrpc_*() methods now take the request object in their sigs.
Another minor change: we're dumping the .rpy stuff from the old docs,
and instead showing how to generally plug an XML-RPC resource into
a twisted.web2 web server (i.e., alongside other resources). It looks like dreid's
http_auth branch
will soon be reviewed for inclusion in trunk, so we will be adding
support and docs for authentication in t.w2.xmlrpc as well.
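The signature change described above can be illustrated with stand-in classes (this is not actual twisted.web2 code; Request, Echoer, and dispatch are hypothetical names):

```python
class Request:
    """Stand-in for the twisted.web2 request object."""
    def __init__(self, path):
        self.path = path

class Echoer:
    # The change described above: xmlrpc_* methods now receive the
    # request object as their first argument.
    def xmlrpc_echo(self, request, value):
        return (request.path, value)

def dispatch(handler, request, method, *args):
    """Look up an xmlrpc_* method and call it with the request first."""
    return getattr(handler, 'xmlrpc_' + method)(request, *args)
```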


Saturday, March 18, 2006

Clay Shirky at ETech

Tim O'Reilly (at least, I think it's Tim) took some notes during Clay Shirky's talk at ETech. Here's one of my favorites:
Social software is the experimental wing of political philosophy, a
discipline that doesn't realize it has an experimental wing. We are
literally encoding the principles of freedom of speech and freedom of
expression in our tools. We need to have conversations about the
explicit goals of what it is that we're supporting and what we are
trying to do, because that conversation matters. Because we have
short-term goals and the cliff-face of annoyance comes in quickly when
we let users talk to each other. But we also need to get it right in
the long term because society needs us to get it right. I think having
the language to talk about this is the right place to start.

Sunday, March 12, 2006

pymon Shell Sprint Overview

pycon :: pymon :: twisted


Well, this is about a week late, but I've been sick as a dog for that
same period, so perhaps I can be forgiven ;-) Having been gone from
Colorado for two weeks (the most extended absence since I moved here),
I have to say that Evelyn Mitchell is right: there is *nothing* like
the water here at the foothills of the Rockies. Nothing. I couldn't
believe it, actually. I had to keep drinking to make sure it wasn't
some strange mental effect. After a week, I'm finally getting used to
the pure goodness of it again, but I will rest very happy knowing how
amazing the water is that I take in every day. Oh, and the air too.
When I stepped off the plane in Denver, I wanted to kiss the ground.
The air was a flood of clean sanity in an insanely polluted world.


Back to the sprint summary. Right before I packed up to leave on the
last day of the sprints, I sent a similar message to the pymon mail
list. With a few edits, here it is again.

What's done? Architecture.


  • initial coding for second round of grammar

  • initial tie-ins for higher-level application logic

  • shellparser.Grammar() has moved out of the sandbox and into its
    own module at pymon.grammar.Grammar()

  • shellparser.ShellParser() has moved out of the sandbox and into
    pymon.parser.Shell()

What needs to be done? A little architecture touch-up and fairly
straight-forward implementation.


  • refine the grammar (introspection?) so that we can at least do
    auto-completion with tab

  • add extensive docstrings to the classes and methods for use with
    "help"

  • refine grammar to get "help" for various commands (tied closely to
    the previous two bullets)

  • add support to ZConfig schema and pymon.conf for named nodes (named
    services?) -- very easy

  • querying in-memory configuration -- very easy

  • querying in-memory service status -- a little more involved, but
    not bad

To finish the move out of the sandbox and into the lib, the following
has to happen:


  • shell.tac gets *completely* rewritten as Shell() or
    LineReceiverShell() in pymon.services() with shell-specific
    configuration added to schema.xml/pymon.conf and then plugged into
    pymon.tac.

The grammar and shell parsing seem to be very flexible, so we should be
able to use them with services like XML-RPC, jabber, and email with few
or no changes/additions.


Wednesday, March 01, 2006

More Schrodinger's Box

webxcreta :: software :: weblogs


This is the second month of operation for the
Schrodinger's
Box
experiment. It's quite fun, and I am enjoying the end-of-month
self-referencing posts. It's quite a blast to see whacky themes emerge
from the grammatical averages. I'll probably be generating posts from
the previous ones for another day or so, and then the crunching on the
top 500 blogs will resume...



pymon Sprinting at PyCon 2006

pycon :: pymon :: networking :: programming



For the past two days, I've been working on a Cisco-inspired shell for
pymon. The purpose being: provide a user interface familiar to network
engineers, allowing them to update/control a running pymon instance.
Ravi and I worked on the shell last year, with Ravi writing a
pyparsing
grammar. However, since then, the specs have evolved considerably: they
are now much better defined and understood. As a result of these
changes, I had to chuck the old code and start over. Which isn't a bad
thing, especially since we didn't get very far. Things have gone very
well and I am excited about the direction and rapid movement.

Here's the current status of the pyparsing grammar work:

complete


  • node (add, update, del, etc.)

  • memory (write, clear)

  • show (nodes, services, lists)

almost done


  • services (add, update, del, etc.)

not started


  • lists (email/notification list management)
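The real parser is a pyparsing grammar; as a stdlib-only sketch of the command shapes listed above (the names are illustrative, not pymon's actual API):

```python
import shlex

# (command, subcommand) pairs from the grammar status lists above.
COMMANDS = {
    ('node', 'add'), ('node', 'update'), ('node', 'del'),
    ('memory', 'write'), ('memory', 'clear'),
    ('show', 'nodes'), ('show', 'services'), ('show', 'lists'),
}

def parse(line):
    """Split a shell line into (command, subcommand, args) and validate."""
    cmd, sub, *args = shlex.split(line)
    if (cmd, sub) not in COMMANDS:
        raise ValueError('unknown command: %s %s' % (cmd, sub))
    return cmd, sub, args
```

A table like COMMANDS is also a natural place to hang tab-completion candidates and per-command help text, which is roughly what the grammar introspection bullets below are after.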

During lunch today,
Sean
Reifschneider
from
tummy.com
had some great input and had the following requests:


  • acknowledgments via web gui (and maybe also email? irc?)

  • scheduled downtime (existing feature)

  • dependencies (e.g., if the router is down, only notify about the
    router and not all the hosts behind the router) -- because of the way
    I have designed the rules engine, I don't think this will be too
    difficult... and I may even be able to use the NetCIDR package I wrote
    for CoyMon to do this very cleanly

  • read-only SNMP (gets for remote load average and disk usage); this
    should be fairly trivial. We might be able to push this out with the
    next release. It will just use the local agent, like ping does

  • escalations (partially implemented)

  • zsh-like functionality in pymon shell (tab-completion at least)

This gives the project some nice near-term focus which we will be
working on once I restructure the factories and clients and add unit
tests for twisted.trial.



Tuesday, February 28, 2006

Unit Tests: No really, you HAVE to use them

twisted :: programming :: testing


So, for the pycon sprint this year, I figured I'd work virtually
with some other developers who couldn't make it this year. There has
been some interest and requests on the pymon mail lists... I strongly
believe that patience and loyalty should be rewarded, and these guys
have put up with an awful lot of delays and bugs... and circumstances
beyond anyone's control.

The first thing I wanted to do was get a usable release candidate
available for them to download. After digging back into the code, I
realized that the best way to provide this was by patching up the
already-extant pymon RC2. With RC2 operable, I tried fixing some bugs
for later code changes, and I couldn't. The code was a mess. But not
anything most people would ever notice or even care about.

Here's what I mean: in the last big merge, we brought in all the
twisted code, and then proceeded to clean that up... but people wanted
to use it soon, and writing tests slowed us down, so we skipped it. It
is debatable whether that was the right thing to do, since it is really
important to weigh your user base when making such decisions. Being a
purist at heart, I really wish it had been possible to please the
users and still take the time needed to add the tests. Regardless, I have to
add them now. The project simply will not progress without them; I
cannot currently debug effectively.

But there is something more important here, the REAL reason I need to
add unit tests. When I tried to add them yesterday, I realized my code,
as it was, simply wouldn't allow it. This means my code is bad. Plain
and simple. It means that there really aren't "units" to test, and that
means the code is not well-enough thought out. After looking at the
problematic code yesterday, it appears pymon will be a good candidate
for componentization. This will:


  1. Provide clear units that will be much more easily testable as
    units

  2. Clean the code up considerably

  3. Come more in line with the standards and best practices of the core
    twisted code

By adding unit tests early on in a project's life, you are giving
yourself the chance to benefit from a design litmus test. If your
twisted code lends itself more or less easily to twisted.trial, then
there is also a good chance that you have something that is workably
modular. For me, this will mean that I can accept all sorts of feature
additions and increased application sophistication without affecting
the quality, readability, or testability of the code. The code will
remain clean and simple. There will just be an increased number of
components.

Now playing:
Yes - No Opportunity Necessary, No Experience 
Needed



Back to Subversion

version control :: software



Well, I experienced a spectacular failure when it came to distributing
darcs to multiple servers, and then pushing/pulling changes. I am
perfectly willing to accept that it was me, with a limited
understanding of darcs methodologies and an unwillingness to see the
problem through. But I just couldn't bring myself to lose two days on
it -- in the middle of projects with deadlines, one is enough, thank
you.


So, back to subversion it is.



The nice thing, though, is that a few days before the move to darcs, I
had read the
Combinator
wiki entry
on the
Divmod trac
as well as the
branch-based
development
entry there. This is really good stuff and offers the
cleanliness I like to see in software development process. Though this
is not quite patch-based development, it certainly fits with my
experience and intuition. (I didn't work with darcs enough to really
say whether the patch-based approach fit my brain.)



The other thing is distributed content. Since I didn't have any issues
with read-only pulls from darcs repositories, this is what I am
considering: every three hours, I will run my tailor scripts against
the latest svn changes (trunk only), and distribute these via rsync to
other servers where the code will be accessible via http and darcs http
checkout.


But all this will have to wait until the sprinting at pycon is
finished...



Saturday, February 25, 2006

twisted.web2.xmlrpc

twisted :: programming



I had a chance to do some paired programming with Itamar
Shtull-Trauring today. He was very patient with me, and I learned a
great deal just observing how he considered the problems at hand. I
will be processing what he shared and the implications of his approach
for several weeks. At least. It was a great experience, and I am
thrilled not only by what I learned but am genuinely excited about
digging deeper into twisted.web2 and HTTP in general. (But not because
HTTP is so great... more like it's a beast I want to beat now. Beat
senseless.)

Also, the company that Itamar
(ITA)
works for is doing some really awesome stuff. One could naively say
that they are doing "airline reservation searches", but it's so, so
much more than that. They are focused on a really interesting and
diverse problem space. Think about it: if they are keeping guys like
Itamar and Michael Salib interested and enthusiastic, there's
definitely more than meets the eye, and it's got to be interesting :-)
They are also looking for motivated, talented developers who would be
willing to live in the Boston area. Check out the site and then contact
them if you are interested. They've got new and significant investment
interest (real $), new offices, and big plans. If I could work remotely
for them, I'd do it in a second... alas, I can't leave my beloved
Rockies ;-)



Zope 3 Components

zope 3 :: twisted :: programming


I'm filing this under twisted as well as zope 3 because this is exactly
how I am going to use Zope 3 components. I was pretty excited/relieved
yesterday listening to Jim Fulton say that one of his top priorities is
working on the finer-grained architecture of Zope 3 such that
separation of components is possible. I would be using much more of z3
if I could use them as independent components right now.



Tuesday, February 21, 2006

Twisted Games


Every year or so, I search online for the latest CP/M emulators and the source code for my favorite childhood games: Star Traders (also Star Lanes), Ladder, Catchum, Hunt the Wumpus, Star Trek, Adventure, etc. These and others were either ASCII "graphics" or just text-only. Like a vulture waiting for the all-clear, I would regularly circle the den where my dad used the Kaypro II we had, and would "sneak" in as soon as it was available. So many fond memories. So fond, in fact, that when I was in school studying physics, I bought an old Kaypro on eBay as a fixer-upper. I still have it... but I haven't fixed her up yet ;-) (I got it booting, but I need to fix the wiring in the keyboard).

Recently on a private mail list, I was reminiscing with friends how I started programming as a result of these games: finishing them and not wanting them to end, so I'd write extensions for them and then play those. After that email, I went on my annual google binge, and this year's was the most successful yet. Here's some good stuff I found:

For those unfamiliar with the classics, here are the Linux ports of the famous BSD games -- if you're running Ubuntu or Debian, just do apt-get install bsdgames and enjoy :-)

Last August, JP Calderone (exarkun) blogged about some test code of twisted.conch.insults he was writing for potential twisted terminal-based apps. I only saw this recently, and it got me to thinking about my favorite old games:

  1. I could fairly easily port them, and
  2. some of them might do very well as multiplayer apps.

Ah, the thought of that was pure heaven: old friends, playing old games over the network, reminiscing and reliving old times... and doing it all together. I think my nostalgia circuits over-loaded, because I just sat there for a while with a wistful grin on my face... it could have been for hours, but I'll never say...

So, yet another set of projects has been initiated, and I am having an immense amount of fun -- nay, joy! -- working on them. I love the terminal. I live it. I wish all applications were terminal based, but it's a rare opportunity to have a project with those requirements. This is a wonderful childhood reprise :-)

As with my other projects, I will post updates here on the progress of the Twisted games I'm porting.

Saturday, February 18, 2006

accountdb

This week I made some great progress on the accountdb twisted/nevow micro app. The prototype is finished, complete with screenshots. One of the nice things that happened while writing it was a personal issue resolution :-)

And, yes, as a programmer, most of my personal issues are with software and software implementation or design.

Here's the issue: I think Nevow's stan is fantastic and ingenious.  However, I tend to adhere pretty strictly to the principle that HTML doesn't belong in code, and should remain in templates. For that  reason, I even try to steer away from programmatically generated HTML.

Two things let me relax this position: 1) stan is *very* clean, and 2) when you have lots of "mini-templates", having them all in separate files *really* starts to affect performance negatively, especially if
you're creating objects to provide an interface for them.
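nevow itself isn't shown here; as a rough illustration of why stan's bracket syntax reads so cleanly inline, here is a tiny stand-in builder (real stan is `from nevow import tags as T` and does far more, including attributes and deferred flattening):

```python
class Tag:
    """Toy stan-like tag: Tag('p')['text'] nests via __getitem__."""
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def __getitem__(self, kids):
        # Square brackets supply children, as in stan: div[p['hi']]
        if not isinstance(kids, tuple):
            kids = (kids,)
        return Tag(self.name, kids)

    def flatten(self):
        inner = ''.join(k.flatten() if isinstance(k, Tag) else str(k)
                        for k in self.children)
        return '<%s>%s</%s>' % (self.name, inner, self.name)

div, p = Tag('div'), Tag('p')
rendered = div[p['hello'], p['world']].flatten()
```

With mini-templates this small, keeping them inline beats a directory of one-line template files.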

This work has given me some nice insight on how I can optimize imagedb as well. I'm looking forward to that, since I've been using imagedb a lot more, lately.

Oh, I almost forgot the sales pitch for accountdb: this micro app is, as you can guess, responsible for storing users and providing web authentication as a service. Don't worry, there will be many more blog entries on this in the future, and I will describe how this is done and why this is so useful :-)

But for now, back to coding!


Tuesday, February 14, 2006

Subversion to Darcs

version control :: software


I have *finally* made the move now. I've been experimenting with
darcs for about 6 months now, on and off. At the hackathon in December
(sponsored by tummy.com), I was really into it and ran tons of tests. I
used tailor to convert several repositories, etc. I really loved the
patch approach that darcs uses, but I just couldn't motivate to migrate
the 47 repositories that I have in svn.

However, I have recently migrated all of my Subversion repositories
from one server to another, and it was the perfect opportunity. I spent
two days migrating to darcs and testing the results, and am delighted.
The scripts that I set up to do all the work can now be run easily at
any time (if the need arises for any reason). With darcs, the projects'
source code is now easily available for anonymous checkout, and what's
more, easy mirroring.

I have hacked trac to point to the darcsweb cgi, but I'm still pushing
darcs partials from the main server out to the first mirror... so there
will be some 404s for a little while, but I expect by the weekend to
have all repos viewable in darcsweb as well as available for anonymous
checkout. I like the look and functionality of darcsweb, though it is
slow. However, I find the query parameters obnoxious; I'll be rewriting
it as a stand-alone Nevow app...


Now playing:
Yes - It Can Happen

Sunday, February 12, 2006

Nevow Radio

nevow :: twisted :: music


As *everyone* knows, PyCon is just around the corner. I'm quite
looking forward to it, since I missed last year's. Not only that, but
I'll be visiting a good friend whom I haven't seen in 10 years.
However, my housemate is getting nervous -- she relies upon me
completely for all things technological. With respect to my forthcoming
two-week absence, the biggest source of her concern has been music.



Over the past 9 years, I have migrated all of my Audio CDs to hard
drive and then to MP3 CDs/DVDs. However, this trend has slowly
undergone a partial reverse as my ancient file server (running a fresh
Ubuntu) has found new purpose in life: serving MP3s from its copious
free space. I also happen to be the only one around who understands and
can operate such things. My housemate really doesn't want to be without
music for two weeks.

Not wanting to leave my friend in the lurch during the conference and
sprints, I whipped up some python code to make it all better. First, I
wrote a little script that parses the iTunes Library XML file. This
allows me to now generate MuSE-readable playlist files for all of my
iTunes playlists with a single script (it seems that iTunes won't let
you batch-export all your playlists). With these generated, I wrote a
quick little Nevow app that globs these new playlist files and
interacts with MuSE (I wanted to run icecast, but it seems to only work
with ogg). All you have to do is select the desired playlist, hit
submit, and then open up the stream in iTunes. Selecting a new playlist
in the Nevow interface reloads MuSE and immediately starts streaming
the new choice.
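The actual scripts aren't shown here; as a sketch of the first step, using the stdlib plistlib to read the iTunes Library XML (whose real layout is a plist with 'Tracks' keyed by track ID and a 'Playlists' list) and collect one list of file paths per playlist -- the MuSE playlist format itself is not reproduced:

```python
import plistlib
from urllib.parse import unquote, urlparse

def playlists_from_library(xml_bytes):
    """Map playlist name -> list of track file paths."""
    lib = plistlib.loads(xml_bytes)
    tracks = lib['Tracks']  # keyed by track ID as a string
    out = {}
    for pl in lib.get('Playlists', []):
        paths = []
        for item in pl.get('Playlist Items', []):
            track = tracks[str(item['Track ID'])]
            # iTunes stores track locations as file:// URLs
            paths.append(unquote(urlparse(track['Location']).path))
        out[pl['Name']] = paths
    return out
```

Writing each list out one-path-per-line is then a trivial loop, and the whole batch-export happens in one run.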

This was absurdly easy to put together. And it's only two files: one
for the playlist generator, and an all-in-one .tac file for Nevow. It's
even got a virtual host monster plugged in, on the off chance that I
want to put this app on its own little local domain. It's little things
like this that really confuse me: what's the big deal about people
freaking out with Twisted and Nevow? This stuff is brilliant,
well-designed, elegant and makes perfect sense. When I read Guido's
comments about how Nevow/Twisted scares him, I was at a complete loss.
It can only mean that he (and others with the same views) simply
haven't taken the time to sit down and actually *use* the stuff.

C'mon, guys -- tune in, already!


Now playing:
Yann Tiersen - Soir de fête

Sunday, January 29, 2006

NetCIDR 0.1 Released

networking :: python :: programming


As part of the on-going work with
CoyoteMonitoring,
NetCIDR
was written to allow for a clean and logical approach when analyzing
NetFlow
captures. Primarily, its use is for determining whether a given IP
address is in a given netblock or a collection of netblocks. Here are
some quick example usages from the wiki and doctests:


>>> CIDR('10.4.1.2')
10.4.1.2
>>> CIDR('10.4.1.x')
10.4.1.0/24
>>> CIDR('10.*.*.*')
10.0.0.0/8
>>> CIDR('172.16.4.28/27')
172.16.4.28/27
>>> CIDR('172.16.4.28/27').getHostCount()
32

Here's how you create a collection of networks:


>>> net_cidr = CIDR('192.168.4.0/24')
>>> corp_cidr = CIDR('10.5.0.0/16')
>>> vpn_cidr = CIDR('172.16.9.5/27')
>>> mynets = Networks([net_cidr, corp_cidr])
>>> mynets.append(vpn_cidr)

And now, you can check for the presence of hosts in various networks
and/or collections of networks:


>>> home_router = CIDR('192.168.4.1')
>>> laptop1 = CIDR('192.168.4.100')
>>> webserver = CIDR('10.5.10.10')
>>> laptop2 = CIDR('172.16.9.17')
>>> laptop3 = CIDR('172.16.5.17')
>>> google = CIDR('64.233.187.99')

>>> home_router in mynets
True
>>> laptop1 in mynets
True
>>> webserver in mynets
True
>>> laptop2 in mynets
True
>>> laptop3 in mynets
False
>>> google in mynets
False


Saturday, January 28, 2006

CoyoteMonitoring 3

networking :: management :: software


For the past year, I have resisted speculation and development on
CoyMon 3. But in the past few months, many pieces have begun falling
into place -- each removing objections and concerns, and some providing
a clear path of development where before there was none.

CoyoteMonitoring
is a free, open-source network management tool. Specifically, it glues
together many open-source and difficult-to-configure subsystems for
obtaining and viewing
NetFlow
data used in network analysis. It is comparable to operating system
distributions in this regard: it is a NetFlow management distro. The
Department of Veterans affairs has actually been using CoyMon 2 for
about a year now, with astounding results. We are thrilled with the
success it has been for them, with the tremendous time and money it has
saved them.

The development of CoyMon 2 was sponsored in part by the VA, and due to
the timeline, much of the code was specific to their needs. However,
because of the time it would take to audit the code, CoyMon 2 has never
been publicly released. I have had many email conversations with WAN
managers and network engineers, all very distressed at this. They too
have found a sad lack in the open source world when it comes to useful,
easy-to-use tools for querying, viewing, and analyzing NetFlow data.
Because of this continued interest, input, and moral support from these
hard-working individuals, I have been considering how this need could
best be addressed and met.

To be honest, my biggest technical concern has been with the current
NetFlow tools. The perl code that people have depended upon for this
(since the late 90s!) is crufty, hard to maintain, leans on
unnecessary and out-of-date perl modules, and, as a group, was
not designed with extensibility in mind. They have been genuine
gems in the field -- none of us could have done what we have done for
the past several years without this wonderful contribution. The
combination of these (and their dependencies!) with all the other
pieces that require configuration and glue is a difficult thing to
provide. It took an ENORMOUS amount of time and energy to get CoyMon 2
into a place where it could be deployed efficiently. CoyMon 2 runs like
a tank, though. It's a real champ and a testament to all that hard work
and organization.

However, it is time to fix the system at its roots. Enter CoyMon 3.

The biggest stumbling block to forward movement on CoyMon 3 has been
the available APIs for the tools upon which we depend, one of the
most important being RRDTool. The python bindings for RRDTool are
archaic and decidedly non-OOP. Attempting to design a system that is
robust, effortless to maintain, and easily accommodates new features,
but whose composite components have difficult APIs, is a recipe for
frustration, delay, and ultimately, non-delivery. The first step
towards addressing this problem came with the recent release of
PyRRD.
It is a fully OOP wrapper for RRDTool in python that removes this pain
from the equation.

The next biggest hurdle was the old perl code called FlowScan. After
much discussion and analysis, a clean and elegant way to provide this
functionality was arrived at, and we will soon have a new product to
show for it, freely available for download. CoyoteMonitoring will
depend heavily on this piece of software and related libraries. We are
making excellent progress on this, but ultimately, the problem to solve
here is one of modularity and configuration. How do you provide an
easy, non-programmatic method of extensibility for any number of
potential rules? We've got some great ideas, but only the natural
selection of actual use will prove the best approach. This is our
current top priority.

With the advances that have been made in the past several months, we
are not only comfortable making an announcement about a new version of
CoyoteMonitoring, we are down-right confident :-) For the interested,
here are more detailed points on CoyMon 3:

  • CoyMon 3 will be developed completely independently of any
    third-parties or businesses. This will mean slower development times,
    but cleaner, more easily managed code. With the absence of sensitive
    customer code and/or configurations, you will see regularly available
    downloads.

  • Supporting libraries will all have extensive unit tests for each
    piece of functionality.

  • CoyMon 3 will have a 100% true component architecture.

  • CoyMon 3 will continue to use flow-tools and will make full use of
    the python bindings for fast processing of NetFlow captures.

  • CoyMon 3 will make use of Zope 3 technology for through-the-web
    management of such resources as collectors, protocols, queries,
    resulting data and graphics, as well as arbitrary content that is
    important to end users.

  • CoyMon 3 will make use of the Twisted Python framework for all of
    its specialized networking needs, including (but not limited to) CMOB/X
    (a recently developed "object broker" for distributed collectors
    managed at a central location).

  • CoyMon 3 will have completely re-factored supporting libraries,
    written and maintained by the CoyoteMonitoring community. All the old
    Perl code will be replaced with light-weight, easy-to-maintain python
    libraries and scripts. These will include NetCIDR, PyRRD, and
    PyFlowCatch.

  • CoyMon 3 will have consistent configuration across all its
    composite applications and it will make use of the famous, the useful,
    and the ever-concise Apache-style configuration files.

  • And last, but not least, CoyMon 3 will abide by Chapter 4 of Eric
    Raymond's "The Cathedral and the Bazaar": Release Early, Release
    Often. CoyMon 3 development snapshots will be available for download
    regularly.

Project spaces to keep your eyes on:

CoyMon

PyFlowCatch

PyRRD

NetCIDR


Wednesday, January 25, 2006

The Future of Content Management

So this morning I read a great post on Paul Everitt's blog. He gives a quick run-down on a comprehensive paper by Seth Gottlieb, whom I hadn't heard of but have since been very impressed by. Paul provides an excellent quote from the paper, and then makes this comment himself:
Enterprise CMS, and most WCM, is organized like a mainframe. Everything
is in one system and you bring the users to that system. Federated
content management might be a growth opportunity in the market.
I proceeded to post a comment on his blog expressing my support and long-standing enthusiasm for this paradigm shift. At the risk of harping on this theme, this is really what's behind the posts Dinosaurs and Mammals and, more recently, The King is dead! Long live the Kinglets! I also said that I am patiently waiting for the day when the z3 libraries are available as a content-management programming framework, when I will be able to easily integrate z3 content management components into my twisted applications.

While checking out Seth's blog, I came across several awesome posts where he discusses much the same thing, from different perspectives:
My focus is more general than just content management, but let's face it: most of what we need to do on the network these days revolves around content. Between Paul and Seth, I feel very validated for the past two years of exploration and code I have been developing and am encouraged to continue along these
lines :-)


PyRRD 0.1 on cheeseshop.python.org

python :: software



Great news! We've released the first version of PyRRD -- you can get it
here:


http://cheeseshop.python.org/pypi/PyRRD/


There are currently four examples of RRD-generation with python code up
on the project site. Be sure to check them out and send me your
questions so I can improve the docs and the code. My goal is to make
RRD easy to use for python programmers... but I can only do so much
with just my own mind and perspective ;-)



PyRRD currently has all of the features we need to do the development
we are focused on. TODOs have been stubbed out in the code where some of
the lesser-known and lesser-used features of RRD haven't yet been implemented
in this OOP API. As development for the next version of
CoyoteMonitoring
kicks into gear, I will be adding more of the obscure stuff to PyRRD.

Enjoy!


Sunday, January 22, 2006

PyRRD

Keeping alive the whimsy that had me pick webXcreta back up and get it working, I have returned to another old project: PyRRD. I had started work on this a couple years ago while adding functionality to CoyoteMonitoring, but had to put it on hold due to budget constraints.

In a nutshell, PyRRD is an object-oriented interface (wrapper) for the RRDTool python bindings (rrdtool). Where with rrdtool you might see something like this:


rrdtool.graph(path,
'--imgformat', 'PNG',
'--width', '540',
'--height', '100',
'--start', "-%i" % YEAR,
'--end', "-1",
'--vertical-label', 'Downloads/Day',
'--title', 'Annual downloads',
'--lower-limit', '0',
'DEF:downloads=downloads.rrd:downloads:AVERAGE',
'AREA:downloads#990033:Downloads')

with PyRRD, you have this:


def1 = graph.GraphDefinition(vname='downloads', rrdfile='downloads.rrd',
ds_name='downloads', cdef='AVERAGE')

area1 = graph.GraphArea(value=def1.vname, color='#990033',
legend='Downloads', stack=True)

g = graph.Graph(path, imgformat='PNG', width=540, height=100,
start="-%i" % YEAR, end=-1, vertical_label='Downloads/Day',
title='Annual downloads', lower_limit=0)
g.data.append(def1)
g.data.append(area1)
g.write()

Optionally, you can use attributes (in combination with, or to the exclusion of, parameters):


def1 = graph.GraphDefinition()
def1.vname='downloads'
def1.rrdfile='downloads.rrd'
def1.ds_name='downloads'
def1.cdef='AVERAGE'

And there are aliases for the classes so that you may use the more familiar names from RRDTool:


def1 = graph.DEF(vname='downloads', rrdfile='downloads.rrd',
ds_name='downloads', cdef='AVERAGE')
area1 = graph.AREA(value=def1.vname, color='#990033', stack=True)

Not only is this object approach more aesthetically pleasing to me, but the interface is much easier to manipulate programmatically. That's insanely important to me because of how I use RRDTool in other projects. I do a great deal of data manipulation and graph representation, and using the regular RRDTool python bindings is simply a show-stopper.

Another neat thing about this wrapper is that the classes use __repr__() to present the complete value of the object as an RRDTool-ready string. This means that you are not limited to using the python bindings, but can also freely and easily interact with the command line tools, configuration values in files, etc.
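As an illustration of that design, here is a minimal sketch of the `__repr__` idea (a hypothetical class for illustration only; PyRRD's actual classes and attribute names may differ):

```python
class AreaElement:
    """Hypothetical sketch of the pattern described above: the object
    renders itself as an RRDTool-ready string via __repr__. Not
    PyRRD's actual implementation."""
    def __init__(self, vname, color, legend=''):
        self.vname = vname
        self.color = color
        self.legend = legend

    def __repr__(self):
        # Render in RRDTool's command-line syntax: AREA:vname#color[:legend]
        text = 'AREA:%s%s' % (self.vname, self.color)
        if self.legend:
            text += ':%s' % self.legend
        return text

area = AreaElement('downloads', '#990033', 'Downloads')
print(repr(area))  # AREA:downloads#990033:Downloads
```

Because the string form is the RRDTool syntax itself, the same object can feed the python bindings, a shell command, or a config file.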

When I've got a first release ready to go, I'll push it up to CheeseShop and post a notice here on the blog.

Update: PyRRD is now available in Debian and Ubuntu.

ctypes on Mac OS X 10.4 with gcc 4.0

python :: programming :: macosx



I had some trouble installing ctypes on a 10.4 server tonight, and found
this
little post

that gave a patch for it. Unfortunately, the patch was against the 2004
version of the file, and I had to manually edit the current 4000+ line file.

Apparently, this was in CVS in 2004 but hasn't made it into a distro
since...

For the search engines, I will post the error message (where xxx are
your line numbers):


source/_ctypes.c:xxx: error: static declaration of 'Pointer_Type'
follows non-static declaration
source/ctypes.h:xxx: error: previous declaration of 'Pointer_Type' was
here

And here's a copy of my diff against _ctypes.c in ctypes 0.9.6:


--- third-party/ctypes-0.9.6/source/_ctypes.c (revision 53)
+++ third-party/ctypes-0.9.6/source/_ctypes.c (working copy)
@@ -2449,7 +2449,7 @@
"sO|O" - function name, dll object (with an integer handle)
"is|O" - vtable index, method name, creates callable calling COM vtbl
*/
-static PyObject *
+PyObject *
CFuncPtr_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
CFuncPtrObject *self;
@@ -3880,7 +3880,7 @@
(inquiry)Pointer_nonzero, /* nb_nonzero */
};

-static PyTypeObject Pointer_Type = {
+PyTypeObject Pointer_Type = {
PyObject_HEAD_INIT(NULL)
0,
"_ctypes._Pointer",


Friday, January 20, 2006

The Self-Referencing Cat

software :: python :: webxcreta


I'm thinking of modifying the title- and text-generating of webXcreta
in its generation of posts to
Schrodinger's Box.
Here's what may be cooking:

  • Get all the content from the top 500 blogs like I currently am, and
    keep using the current weighting algorithm

  • But, also get all the content for every post to Schrodinger's Box

  • Have the content from Schrodinger's Box contribute towards a
    significant fraction of the material for each new post

  • And give posts with more comments greater weights

  • And! Include the comments from those posts as part of the source
    material from which new posts are created (but weighted considerably
    less than full posts)

Thus making Schrodinger's Box self-referential... simulating theme and
continuity. I think this would be very interesting, allowing
Schrodinger's Box to "gain momentum" as it were.
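The mixing described in the bullets above could be sketched as a simple source-pooling function (purely illustrative; this is not webXcreta's actual code, and the weight values are made up):

```python
import math
import random

def build_source_pool(external_posts, own_posts, comments):
    """Illustrative sketch of the mixing scheme described above.
    external_posts: list of strings from the top-blog feeds;
    own_posts: list of (text, comment_count) from Schrodinger's Box;
    comments: list of comment strings."""
    pool = []
    for text in external_posts:
        pool.append((text, 1.0))
    for text, n_comments in own_posts:
        # Posts with more comments get greater weight.
        pool.append((text, 1.0 + math.log(1 + n_comments)))
    for text in comments:
        # Comments contribute, but considerably less than full posts.
        pool.append((text, 0.25))
    return pool

def pick_source(pool):
    """Draw one source text, biased by the weights above."""
    texts, weights = zip(*pool)
    return random.choices(texts, weights=weights, k=1)[0]
```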


Update: Schrödinger's Böx is now occasionally generating posts based solely on the content of previous posts.

Monday, January 16, 2006

Psychohistory

natural language :: semantics :: science fiction


I am delighted with the silliness issuing forth from
Schrodinger's Box, it
continues to amuse and I am always eager to see what the next post will
be. However, as I alluded to at the end of my previous post, there is
more potential here than satisfying the high-minded call of absurdity.

For example, imagine yourself at work, trapped in a project with a
bunch of tired people just wanting to go home. You've been tasked by
the boss with "thinking outside the box" and you're making no progress.
Everyone is grumpy, creativity is harder and harder to imagine. Simply
gather a bunch of text for the topic at hand, feed it into webXcreta,
and voilà -- instant brainstorming material. Proceed to discuss the
sentences that webXcreta pops out, easily dig yourself out of the rut,
get off the hook with the boss, and run home to pursue the many other
distractions of a mundane existence.

This could also be used effectively to alleviate writer's block. You're
writing a historical romance in ancient Gaul, with a plot that
stretches over Teutonic tribes in Western Europe, through to Rome and
into Asia Minor? Well, gather some source material, feed it into
webXcreta, and bing-bango! Innumerable ideas and sources of inspiration
to get that ink flowing again.

So, yes -- there is some practicality involved. On a slightly more
radical note, I'm exploring a possible use of the weighting
"algorithm" I used in webXcreta for representation of minorities. The
square of the log could be a very effective means of ensuring that no
voice is completely suppressed, that no majority ever gains absolute
control. I'd like to hear what people with political science
backgrounds have to say about that sort of thing.
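For the curious, the "square of the log" idea fits in a couple of lines (a sketch of the concept only, not the weighting code as used in webXcreta):

```python
import math

def voice_weight(count):
    """Square-of-the-log weighting, as described above (illustrative):
    larger groups still carry more weight, but the growth is so slow
    that small voices are never completely drowned out."""
    return math.log(count + 1) ** 2

# A group 100 times larger ends up with roughly 4x the weight,
# not 100x -- minority voices retain a meaningful share.
```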

And then there's the potential role for this to be used in assessing
public opinion, popular trends, and predictive analysis. Now we get
to the subject of this blog entry... psychohistory ;-) I'm talking
Isaac Asimov and science-fiction: the psychohistory of Hari Seldon in
the famous Foundation series. Or at least a part of it. webXcreta makes
use of the
Natural Language Toolkit (NLTK), which is a great
tool, but we'd need something more to make this science-fiction a
reality. We'd need a "semantic processing toolkit". I imagine that the
corpora for such a toolkit would not be tagged parts of speech, etc.,
but rather semantic tags. Perhaps domain-specific tags for contextual
meaning. Then, instead of a grammatical average, you would take a
semantic average. Now *that* would be REALLY interesting...


Now playing:
Yes - And You And I (live version)

Sunday, January 15, 2006

Crazy Truth

software :: python :: webxcreta


I recently emailed a friend about the webXcreta project, and to give
him some background on it, I described Eigenradio:

A few years ago, I came across one of the most bizarre software
projects I had seen on the net: a guy at MIT had created an internet
radio station that "consumed" Top-40 songs being played live on other
internet radio stations, pushed them through massive statistical
computation and custom software, and then spit out "new" music from
these computations. The music generated was a sonic, statistical
average of what was popular and getting air-play. Most people found the
resulting "music" disturbing. I, however, couldn't stop laughing --
literally. You could actually "hear" the statistics of the thing, if
you listened carefully. It was stunning. And hilarious.

That bit about hearing the statistics is key. And it perfectly
describes how it felt to listen to that music. It felt like an
epiphany. One of Eigenradio's taglines was brilliant:


What you hear on Eigenradio is the best of the New Music, distilled and
de-correlated. One song on Eigenradio is worth at least twenty songs on
old radio.

Now, with webXcreta, I find myself in a similar situation: I read the
posts, and I convulse with laughter. It's not the content so much that
makes it so irrepressibly funny to me, but rather what's under the
covers. To give you a quick sense of my humor, I laugh at truth. Truth
is endlessly amusing to me. I remember reading James Gleick's "Chaos"
book in high school, and laughing for about 15 minutes after I read his
description of Sierpinski Gaskets: they have zero area and infinite
points. It wasn't so much like a light going off in my mind, as a
bomb. The truth of it turned my mind upside down, and I had new eyes.
It was an ecstatic experience -- thus the laughter.

There's something similar happening in my mind when reading these
Schrodinger's Box
blog posts. There's something hidden under the covers that is the true
true source of my laughter; the quote above from the Eigenradio site
points to the answer. To explore this further, consider this: what if
you absolutely had to read 1000 pages of text in less than a minute,
what would you do? What's the cheapest alternative to a
massive/complete data set? A random sampling! Read a shotgun-spread of
statistically sampled textual data from those 1000 pages.
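That shotgun-spread idea takes only a few lines (a toy sketch with naive period-based sentence splitting; not webXcreta's actual sampler):

```python
import random

def shotgun_sample(pages, k=20):
    """Return a random spread of sentences drawn from across all pages --
    the cheap statistical stand-in for reading everything. Toy sketch:
    splits on periods, which real text would need to handle less naively."""
    sentences = [s.strip() for page in pages
                 for s in page.split('.') if s.strip()]
    return random.sample(sentences, min(k, len(sentences)))
```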

And that's it. That's what's making me laugh. When I read
Schrodinger's Box,
part of my mind is actually aware that it is seeing parts of thousands
of data sources simultaneously, and the truth of that inspires a
quasi-ecstatic hilarity. Crazy truth.

In my experience (and, arguably, that of the entire world of science),
Crazy Truth is a gold-mine for discovery. It will be interesting to see
how this code evolves and what strange uses it gets put to...


Now playing:
Yes - Close to the Edge