Tuesday, April 23, 2013

OpenStack Developer Summit: Heat Followup

Folks are finally starting to recover from the OpenStack Developer Summit that was held in Portland, Oregon recently. All reports indicate that it was a truly phenomenal experience, record-breaking in many ways, and something that has inspired incredible enthusiasm within the community. And that's great news, since there's an enormous amount of work to be done this release ;-)

Of particular importance to many in the community is the work around maturing the autoscaling feature in OpenStack Heat. There was a fantastic session at the summit, facilitated by the bow-tied and most dapper Ken Wronkiewicz (his notes from the Summit were published on the Rackspace blog).

In preparation for the session, the following resources were created:
That one in the middle is important, as it is also where notes were taken during the actual session itself (see the section entitled "ODS Session Notes"). Devs at Rackspace have started going through the notes from the session and planning work around them -- all of which will be carried out in the open, on the OpenStack mailing list (tagged with "[Heat]"), on Freenode, and on github/gerrit.

The discussion at the Summit indicated strong interest in building a REST API for the existing autoscaling feature. Needless to say, there is a lot involved in this, touching upon significant OpenStack components like Quantum, LBaaS, and Ceilometer. Once the appropriate code is in place, a REST API will need to be created, features will need to be expanded/added, etc., and we'll be off and running :-)

Lots to do, and lots of great energy and excitement around this to keep us all chugging through this cycle.

On that note, we'd like to send out a special "thanks" to all the countless folks who worked so hard to make ODS happen. This event anchors us in a most excellent way, providing the insight and fuel that supports future development work so well!


Tea Time at Rackspace SF

One of the Rackspace teams here in the SF office doesn't do standups; it does "tea time" instead. A delightful change, to be sure. One of my coworkers published a Rackspace blog post about this today.

More than a gratuitous reblog, I wanted to highlight this bit of lovely coincidence I came across while reading Turing's Cathedral last night:
"Afternoon tea— a ritual introduced at Fine Hall by Oswald Veblen, who, according to Herman Goldstine, “tried awfully hard to be an Englishman”— was served on real china daily at exactly three o’clock. According to Oppenheimer, “tea is where we explain to each other what we do not understand.”"

Dyson, George (2012-03-06). Turing's Cathedral: The Origins of the Digital Universe (p. 90). Knopf Doubleday Publishing Group. Kindle Edition. 
The part of the book where this quote occurs discusses Fine Hall, the Princeton mathematics building where the Institute for Advanced Study was first housed -- the same Institute that later hosted von Neumann's computer project.

And, yes, tea time is a great time for our teammates to explain to each other what they don't understand.

In case you missed it above, here's the link again:
http://www.rackspace.com/blog/tea-time-because-we-wont-stand-for-a-standup/

Thursday, April 18, 2013

Cruising HTTP with LFE

In the last post, you learned how to get LFE running on Ubuntu. This one will give you some insight into how LFE can be used in something approaching real-world problems. In the next post, we're going to jump back into the lambda calculus, and we'll see some more LFE shortly after that.

Because Lisp Flavored Erlang is 100% compatible with Core Erlang, it has access to all the Erlang libraries, OTP, many third-party modules, etc. Naturally, this includes the Erlang HTTP client, httpc. Today we're going to be taking a look at how to use httpc from LFE. Do note, however, that this post is only going to provide a taste, just enough to give you a sense of the flavor, as it were.

If you would like more details, be sure to not only give the official docs a thorough reading, but to take a look at the HTTP Client section of the inets Reference Manual.

Note that for the returned values below, I elide large data structures. If you run them in the LFE REPL yourself, you can view them in all of their line-consuming glory.

Synchronous GET

Let's get started with a simple example. The first thing we need to do is start the inets application. With that done, we'll then be able to make client requests:
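In the LFE REPL, starting inets might look like this (a sketch using the era's LFE call syntax):

```lisp
> (: inets start)
ok
```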
Now we can perform an HTTP GET. This just makes a straightforward HTTP request (the method defaults to GET) and returns a bunch of associated data:
  • HTTP version
  • status code
  • reason phrase
  • headers
  • body
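A minimal request might look like the following sketch (the URL is just a placeholder, and the large return value is elided here, as noted above):

```lisp
> (set result (: httpc request '"http://www.erlang.org/"))
```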
All of that data is dumped into our result variable. Here's the same GET but with pattern matching set up so that we can easily access all that data:
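Here is one way to write that pattern in the REPL (a sketch; the variable names are my own choice):

```lisp
> (set (tuple 'ok (tuple (tuple version status-code reason)
                         headers
                         body))
       (: httpc request '"http://www.erlang.org/"))
```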
For those not familiar with Erlang patterns, we've just told LFE the following:
  • the return value of the function we're going to call is going to be a tuple composed of an atom ('ok) and another tuple
  • the nested tuple is going to be composed of a tuple, some headers, and a body
  • the next nested tuple is going to be composed of the HTTP version, status code, and status code phrase
If you'd like to learn more about using patterns in LFE, be sure to view the patterns page of the LFE User Guide.

Once the request returns, we can check out the variables we set in the pattern:
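For example (assuming the request above succeeded; your values may differ):

```lisp
> version
"HTTP/1.1"
> status-code
200
> reason
"OK"
```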

That's great if everything goes as expected and we get a response from the server. What happens if we don't?

Well, errors don't have the same nested data structure that the non-error results have, so we're going to have to make some changes to our pattern if we want to extract parts of the error reason. Pattern matching for just the 'error atom and the error reason, we can get a sense of what that data structure looks like:
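Something like the following sketch, using a deliberately unreachable placeholder host:

```lisp
> (set (tuple 'error reason)
       (: httpc request '"http://no-such-host.example/"))
```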

Looking at just the data stored in the reason variable, we see:
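For a DNS-style failure, the reason is roughly of this shape (hand-elided; the exact term varies by error and OTP version):

```lisp
> reason
#(failed_connect
  (#(to_address #("no-such-host.example" 80)) ...))
```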
If you check out the docs for httpc's request function and look under "Types", you will see that the error returned can be one of three things:
  • a tuple of connect_failed and additional data
  • a tuple of send_failed and additional data
  • or just unspecified additional data
In our example, the additional data is a tuple of the address we were trying to connect to and the specific error for our failed connection.


Async GET

Now that we've taken a quick look at the synchronous example, let's make a foray into async. We'll still be using httpc's request function, but we'll need to use one of the longer forms where extra options can be passed, since that's how you tell the request function to perform the request asynchronously instead of synchronously.

To keep the additional options clear, we're going to define some variables first. You can read more about the options in the httpc docs.
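For instance (the names are my own; the empty lists are the request headers and the HTTP options, respectively):

```lisp
> (set method 'get)
> (set url '"http://www.erlang.org/")
> (set request-data (tuple url ()))   ; the URL plus an empty header list
> (set http-options ())
> (set options (list (tuple 'sync 'false)))
```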

With the variables defined, let's make our async call. The results are sent back to the requesting process, and since we made the request from the LFE REPL, that's the process that will receive the data. Let's keep our pattern simple at first -- just the request id and the result data:
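Using the variables we just defined, the call itself returns a request id rather than the response, which then arrives as a message (a sketch):

```lisp
> (set (tuple 'ok request-id)
       (: httpc request method request-data http-options options))
> (receive
    ((tuple 'http (tuple req-id result))
     (: io format '"Got a result for request ~p~n" (list req-id))))
```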
Needless to say, manually digging through the returned data is a waste when we have Erlang's pattern matching, so let's go back and do that again, this time with a nice pattern to capture the results. We'll need to make another request, though, so that something gets sent to the shell:
Now we can set up a pattern that will allow us to extract and print just the bits that we're looking for. The thing to keep in mind here is that the scope for the variables is within the receive call, so we'll need to display the values within that scope:
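Reusing the request variables defined earlier, a sketch of the fuller pattern (printing inside the receive, per the scoping note above):

```lisp
> (set (tuple 'ok request-id)
       (: httpc request method request-data http-options options))
> (receive
    ((tuple 'http (tuple req-id
                         (tuple (tuple version status-code reason)
                                headers
                                body)))
     (: io format '"~p: ~p ~p~n" (list req-id status-code reason))))
```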
This should demonstrate the slight differences in usage and result patterns between the sync and async modes.

Well, that about sums it up for an intro to the HTTP client in LFE! But one last thing, for the sake of completeness. Once we're done, we can shut down inets:
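Shutting inets back down is the mirror image of starting it:

```lisp
> (: inets stop)
ok
```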


Sunday, April 14, 2013

Getting Started with LFE on Ubuntu

For those that don't know, there is a fully Core Erlang-compatible Lisp-2 that runs on the Erlang VM and produces .beam files that can be used in any Erlang application. This manna from heaven is LFE, or Lisp Flavored Erlang. It was started about 5 years ago by Robert Virding, one of the co-creators of the Erlang programming language, and has inspired other similar efforts: Elixir (a Ruby-alike) and Joxa (a Lisp-1). (Incidentally, Robert has also created Prolog and Lua implementations that run on top of the Erlang VM!)

The new LFE docs site (a continuous work in progress) has some good introductory materials for the curious reader:

This blog post aims to bring some of those hidden materials into the consciousness of Ubuntu users. If you are averse to Erlang syntax, LFE opens up a whole new world to you :-)

The examples below assume Ubuntu 12.10.


Getting Erlang

Erlang R15B01 comes with Ubuntu 12.10. If that's all you need, then this will suit you just fine:
$ sudo apt-get install erlang
If you are wanting to test against multiple versions of Erlang, you should check out the kerl project, which lets you install a wide variety of Erlang versions (including the latest releases) side-by-side.

You'll also need git, if you don't yet have it installed:

$ sudo apt-get install git
Currently, rebar is required to build all the LFE files. If you're going to be building LFE projects, you'll want this anyway ;-) Rebar will be in Ubuntu 13.04, but it's not in 12.10, so you'll need to get it:
$ wget https://github.com/rebar/rebar/wiki/rebar
$ chmod 755 rebar
$ sudo mv rebar /usr/local/bin


Getting and Building LFE

Here's what you need to do to build LFE:
$ mkdir -p ~/lab/erlang && cd ~/lab/erlang
$ git clone https://github.com/rvirding/lfe.git
$ cd lfe
$ make compile
If you looked at your ./ebin directory when you cloned the repo, you would have seen that there were no .beam files in it. After compiling, it is full of .beams ;-)

Sidebar: A common pattern in Erlang applications is the use of a deps directory under one's project dir where dependencies can be installed without conflicting with any system-wide installs, providing versioning independence, etc. Managing these with rebar has been very effective for projects, where simply calling rebar compile puts everything your app needs in ./deps. Projects that depend upon LFE are doing this, but we'll cover that in a future blog post.


Using LFE

With everything compiled, we can jump right in! Let's fire up the REPL, and do some arithmetic as a sanity check:
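One common way to do that from the fresh checkout (the launcher script's location has varied between LFE versions, so adjust the path as needed):

```lisp
$ ./bin/lfe -pa ./ebin
> (+ 1 2 3)
6
> (* 2 21)
42
```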

How about a message to stdout?
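A classic greeting, by way of the standard library's io module:

```lisp
> (: io format '"Hello, World!~n")
Hello, World!
ok
```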

Any form starting with : is interpreted as a call to a module. The full form is (: <module name> <function name> <arguments>). As such, you can see that we're calling the format function in the (built-in) io module.

Also, it's good to know that there are certain things that you can't do in the REPL, e.g., defining modules, macros, functions, and records. Erlang expects that these sorts of activities take place in modules. However, we can explore a little more before we write our first module. Let's use the REPL's set form and lambda to define a function anyway (albeit, in a somewhat awkward fashion):
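For example, a power function by way of set and lambda (a sketch; note that math:pow returns a float):

```lisp
> (set pow (lambda (x y) (: math pow x y)))
> (funcall pow 2 10)
1024.0
```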

That wasn't too bad ;-) We're seeing the external module call, again -- this time to the math library. Now let's use a module of our own devising...


Creating Modules

In another terminal (but same working directory), let's create a module in a file called my-module.lfe, with the following content:
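Something along these lines (the function body is mine, chosen purely for demonstration):

```lisp
(defmodule my-module
  (export all))

(defun my-fun ()
  (: io format '"Hello from my-module!~n"))
```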


Note that the module name in the code needs to match the file name (minus the extension) that you used for the module.

Back in the REPL terminal window, let's compile this module and run the defined function:
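A sketch of what that looks like, assuming the module exports a nullary my-fun that prints a greeting (the return value of c is elided):

```lisp
> (c '"my-module")
> (: my-module my-fun)
Hello from my-module!
ok
```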

Let's add another function to the module that demonstrates the benefits of Erlang's multiple-arity support:
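For instance, two functions sharing a name but differing in arity -- double/1 and double/2 -- added to my-module.lfe (the bodies are arbitrary demo code):

```lisp
(defun double (x)
  (* 2 x))

(defun double (x y)
  (* 2 (+ x y)))
```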

Re-compiling and running the functions, we are greeted with success:
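Assuming the new multiple-arity functions are the double/1 and double/2 pair just described, that would look roughly like:

```lisp
> (c '"my-module")
> (: my-module double 6)
12
> (: my-module double 6 4)
20
```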

Lastly, let's convert the power function we defined in the previous section using our REPL-workaround to a "real" function, defined in our new module:
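In the module, the REPL lambda from earlier collapses to an ordinary function definition:

```lisp
(defun pow (x y)
  (: math pow x y))
```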

And then let's try it out:
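After re-compiling, the module version behaves just like the lambda did (math:pow still hands back a float):

```lisp
> (c '"my-module")
> (: my-module pow 2 10)
1024.0
```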

Perfect.

(Of course, it's rather absurd to redefine pow to exp, when there is basically nothing to gain by it ;-) It's just a quick demo...)


Conclusion

There's lots more to learn; this has been just a small sip from a hot mug o' LFE.

However, it's definitely enough to get you started and, should you be interested in following along in future LFE blog posts, you'll have everything you need to get the most out of those.


Wednesday, April 10, 2013

The Lambda Calculus: A Quick Primer

The λ-Calculus Series
  1. A Brief History
  2. A Quick Primer for λ-Calculus
  3. Reduction Explained
  4. Church Numerals
  5. Arithmetic
  6. Logic
  7. Pairs and Lists
  8. Combinators
To the untrained eye, the notation used in λ-calculus can be a bit confusing. And by "untrained", I mean your average programmer. This is a travesty: reading the notation of λ-calculus should be as easy to do as recognizing that the following phrase demonstrates variable assignment:
x = 123
So how do we arrive at a state of familiarity and clarity from a starting state of confusion? Let's dive in with some examples, and take it one step at a time :-) Once we've got our heads wrapped around Alonzo Church's notation, we'll be able to easily read it -- and thus convert it into code! (We will have lots of practice in the coming posts to do just that.)

A Quick Primer for λ-Calculus

Here's one of the simplest definitions in λ-calculus that you're going to see: the identity function:
λx.x
This reads as "Here is a function that takes x as an argument and returns x." Let's do some more:
λxy.x
"Here is a function that takes x and y as arguments and returns only x."
λx.λy.xy
"An outer function takes x as an argument and an inner function takes y as an argument, returning x applied to y." Note that this is exactly equivalent to the following (by convention):
λxy.xy
Let's up the ante with a function application:
λf.λx.f x
"Here is a function that takes a function f as its argument; the inner function takes x as its argument; return the result of the function f when given the argument x." For example, if we pass a function f which returns its input multiplied by 2, and we supplied a value for x as 6, then we would see an output of 12.

Let's take that a little further:
λf.λx.f (f (f x))
"Here is a function that takes a function f as its argument; the inner function takes x as its argument. Apply the function f to the argument x; take that result and apply f to it. Then do it a third time, returning that result." If we had the same function as the example above and passed the same value, our result this time would be 48 (i.e., 6 * 2 * 2 * 2).
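Although this series returns to LFE properly in later posts, both of these expressions can already be sketched directly in the LFE REPL with lambda and funcall, using the doubling function and the value 6 from the prose above:

```lisp
> (funcall (funcall (lambda (f) (lambda (x) (funcall f x)))
                    (lambda (y) (* 2 y)))
           6)
12
> (funcall (funcall (lambda (f)
                      (lambda (x)
                        (funcall f (funcall f (funcall f x)))))
                    (lambda (y) (* 2 y)))
           6)
48
```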

That's most of what you need to read λ-calculus expressions. Next we'll take a peek into the murky waters of  λ-calculus reduction and find that it's quite drinkable, that we were just being fooled by the shadows.


Tuesday, April 09, 2013

Interview with Erlang Co-Creators

A few weeks back -- the week of the PyCon sprints, in fact -- was the San Francisco Erlang conference. This was a small conference (I haven't been to one so small since PyCon was at GW in the early 2000s), and absolutely charming as a result. There were some really nifty talks and a lot of fantastic hallway and ballroom conversations... not to mention Robert Virding's very sweet Raspberry Pi Erlang-powered wall-sensing Lego robot.

My first Erlang Factory, the event lasted for two fun-filled days and culminated with a stroll in the evening sun of San Francisco down to the Rackspace office where we held a Meetup mini-conference (beer, food, and three more talks). Conversations lasted until well after 10pm with the remaining die-hards making a trek through the nighttime streets of SOMA and the Financial District back to their respective abodes.

Before the close of the conference, however, we managed to sneak a ride (4 of us in a Mustang) to Scoble's studio and conduct an interview with Joe Armstrong and Robert Virding. We covered some of the basics in order to provide a gentle overview for folks who may not have been exposed to Erlang yet and are curious about what it has to offer our growing multi-core world. This went up on the Rackspace blog as well as the Building 43 site (also on YouTube). We've got a couple of teams using Erlang at Rackspace; if you're interested, be sure to email Steve Pestorich and ask him what's available!


Monday, April 08, 2013

The Lambda Calculus: A Brief History

Over this past weekend I took a lovely journey into the heart of the lambda calculus, and it was quite amazing. My explorations were made within the context of LFE. Needless to say, this was a romp of pure delight. In fact, it was so much fun and helped to clarify for me so many nooks and crannies of something that I had simply not explored very thoroughly in the past, that I had to share :-)

The work done over the past few days is on its way to becoming part of the documentation for LFE. However, this is also an excellent opportunity to share some clarity with a wider audience. As such, I will be writing a series of blog posts on λ-calculus from a very hands-on (almost practical!) perspective. There will be some overlap with the LFE documentation, but the medium is different and as such, the delivery will vary (sometimes considerably).

This series of posts will cover the following topics:
  1. A Brief History
  2. A Quick Primer for λ-Calculus
  3. Reduction Explained
  4. Church Numerals
  5. Arithmetic
  6. Logic
  7. Pairs and Lists
  8. Combinators
The point of these posts is not to expound upon that which has already been written about endlessly. Rather, the hope is to give a very clear demonstration of what the lambda calculus really is, and to do so with clear examples and concise prose. When the gentle reader is able to see the lambda calculus in action, with lines of code that clearly show what is occurring, the mystery will disappear and an intuition for the subject matter will quite naturally begin to arise. This post is the first in the series; I hope you enjoy them as much as I did rediscovering λ-calculus :-)

Let us start at the beginning...

A Brief History

The roots of functional programming languages such as Lisp, ML, Erlang, Haskell and others, can be traced to the concept of recursion in general and λ-calculus in particular. In previous posts, I touched upon how we ended up with the lambda as a symbol for the anonymous function as well as how recursion came to be a going concern in modern mathematics and then computer science.

In both of those posts we saw Alonzo Church play a major role, but we didn't really spend time on what is quite probably considered his greatest contribution to computer science, if not mathematics itself: λ-calculus. Keep in mind that the Peano axioms made use of recursion, that Giuseppe Peano played a key role in Bertrand Russell’s development of the Principia, that Alonzo Church sought to make improvements on the Principia, and that λ-calculus eventually arose from these efforts.

Church invented λ-calculus in 1928 but didn't publish it until 1932. When an inconsistency was discovered, he revised it in 1933 and republished. Furthermore, in this second paper, Church introduced a means of representing positive integers using lambda notation, now known as Church numerals. With Church and Turing both publishing papers on computability in 1936 (based respectively upon λ-calculus and the concept of Turing machines), they proposed solutions to the Entscheidungsproblem. Though Gödel preferred Turing's approach, Rosser suggested that they were equivalent definitions in 1939. A few years later, Kleene proposed the Church Thesis (1943) and then later formally demonstrated the equivalence between his teacher's and Turing's approaches, giving the combination the name of the Church-Turing Thesis (1952, in his Introduction to Metamathematics). Within eight years, John McCarthy published his now-famous paper describing the work that he had started in 1958: "Recursive Functions of Symbolic Expressions and Their Computation by Machine". In this paper, McCarthy outlined his new programming language Lisp, citing Church's 77-page book (1941, Calculi of Lambda Conversion), sending the world off in a whole new direction.

Since that time, there has been on-going research into λ-calculus. Indisputably, λ-calculus has had a tremendous impact on research into computability as well as the practical applications of programming languages. As programmers and software engineers, we feel its impact -- directly and indirectly -- on a regular, almost daily basis.


Wednesday, April 03, 2013

Autoscale and Orchestration: the Heat of OpenStack

Several months before I joined Rackspace last year, there were efforts under way to provide an Autoscaling solution for Rackspace customers. Features that we needed in OpenStack and Heat hadn't been released yet, and there were no OpenStack experts on the Autoscaling team. As such, the engineers began developing a product that met Rackspace customer needs, integrated with the existing monitoring and load-balancing infrastructure, and made calls to OpenStack Nova APIs as part of the scaling up and down process.

At PyCon this year, Monty Taylor, Robert Collins, Clint Byrum, Devananda van der Veen, and I caught up and chatted about what their views were of the current status of autoscaling support in OpenStack Heat. It seems that the two pieces we need the most -- LBaaS and support for external monitoring systems (perhaps via webhooks) -- are nascent and not ready for prime-time yet. Regardless, Monty and his team encouraged us to dive into Heat, contribute patches, and in general, release our work for consumption by other Stackers.

Deeply encouraged by these interactions, we took this information to Rackspace management and, to quote Monty Python, there was much rejoicing. Obviously OpenStack is huge for Rackspace. Even more, there is a lot of excitement about Heat, the existing autoscaling features in OpenStack, and getting our engineers involved and contributing to these efforts.

In the course of these conversations, we discovered that Heat was getting lots of attention internally. It turns out that another internal Rackspace project had been doing something pretty cool: they were experimenting with the development of a portable syntax for application description and deployment orchestration. Their work had started to converge on some of the functionality provided by Heat, and they had a similar experience as the Autoscaling team. The timing was right to contribute what they have learned and align all of their continued efforts with adding value to Heat.

Along these lines, we are building two new teams that will focus on Heat development: one contributing to features related to autoscaling (not necessarily limited to Heat) and the other contributing to the ongoing conversations regarding the separation of concerns between orchestration and configuration management. Everyone -- from engineers to management -- is very excited about this new direction in which our teams are moving. Not only will it bring new developers to OpenStack, but it is aligning our teams with Rackspace's OpenStack roots and the company's vision for supporting the growing cloud community.

Simply put: we're pretty damned pumped and looking forward to more good times with OpenStack :-)


Tuesday, April 02, 2013

Maths and Programming: Whence Recursion?

As a manager in the software engineering industry, one of the things that I see on a regular basis is a general lack of knowledge from less experienced developers (not always "younger"!) with regard to the foundations of computing and the several related fields of mathematics. There is often a great deal of focus on what the hottest new thing is, or how the industry can be changed, or how we can innovate on the decades of profound research that has been done. All noble goals.

Notably, another trend I've recognized is that in a large group of devs, there are often a committed few who really know their field and its history. That is always so amazing to me and I have a great deal of admiration for the commitment and passion they have for their art. Let's have more of that :-)

As for myself, these days I have far fewer hours a week to dedicate to programming than I had 10 years ago. This is not surprising, given my career path. However, it has meant that I have to be much more focused when I do get those precious few hours a night (and sometimes just a few per week!). I've managed this in an ad hoc manner by taking quick notes about fields of study that pique my curiosity. Over time, these get filtered and a few pop to the top that I really want to give more time.

One of the driving forces of this filtering process is my never-ending curiosity: "Why is it that way?" "How did this come to be?" "What is the history behind that convention?" I tend to keep these musings to myself, exploring them at my leisure, finding some answers, and then moving on to the next question (usually this takes several weeks!).

However, given the observations of the recent years, I thought it might be constructive to ponder aloud, as it were. To explore in a more public forum, to set an example that the vulnerability of curiosity and "not knowing" is quite okay, that even those of us with lots of time in the industry are constantly learning, constantly asking.

My latest curiosity has been around recursion: who first came up with it? How did it make its way from abstract maths to programming languages? How did it enter the consciousness of so many software engineers (especially those who are at ease in functional programming)? It turns out that an answer to this is actually quite closely related to a previous post I wrote on the secret history of lambda. A short version goes something like this:

Giuseppe Peano wanted to establish a firm foundation for logic and maths in general. As part of this, he ended up creating consistent axioms around the hard-to-define natural numbers, counting, and arithmetic operations (which utilized recursion). While visiting a conference in Europe, Bertrand Russell was deeply impressed by the dialectic talent of Peano and his unfailing clarity; he queried Peano as to his secret for success (Peano told him) and then asked for all of his published works. Russell proceeded to study these quite deeply. With this in his background, he eventually co-wrote the Principia Mathematica. Later, Alonzo Church (along with his grad students) sought to improve upon this, and in the process ended up developing the lambda calculus. John McCarthy later created the first functional programming language, Lisp, utilizing concepts from the lambda calculus (recursion and function composition).

In the course of reading between 40 and 50 mathematics papers (including various histories) over the last week, I have learned far more than I had originally intended. So much so, in fact, that I'm currently working on a very fun recursion tutorial that not only covers the usual practical stuff, but also steps the reader through programming implementations of the Peano axioms, arithmetic definitions, the Ackermann function, and parts of the lambda calculus.

I've got a few more blog post ideas cooking that dive into functions, their history and evolution. We'll see how those pan out. Even more exciting, though, was having found interesting papers discussing the evolution of functions and the birth of category theory from algebraic topology. This, needless to say, spawned a whole new trail of research, papers, and books... and I've got some great ideas for future blog posts/tutorials around this topic as well. (I've encountered category theory before, but watching it appear unsearched and unbidden in the midst of the other reading was quite delightful).

In closing, I enjoy reading not only the original papers (and correspondence between great thinkers of a previous era), but also the meanderings and rediscoveries of my peers. I've run across blog posts like this in the past, and they were quite enchanting. I hope that we continue to foster that in our industry, and that we see more examples of it in the future.

Keep on questing ;-)