Sunday, August 23, 2009

'She was a cute little ruin that he pulled out of the rubble. Now they are both living in a soft soap bubble' - Declan McManus, 2002

Another REST versus SOAP post to chew on. I usually steer clear of this well-worn subject like the plague, but for whatever reason started skimming this post and it's kind of useful. I know, I know - REST isn't a protocol, it's an architectural style, the term gets thrown about loosely for a lot of things, and you can (in a narrow sense, with enough effort) "do" REST using SOAP (2.0).

Maybe it's the pragmatist in me but I think it comes down to what the consumer needs are. You remember them, right? What do they look like? Do they need the WSDL trappings or any of the WS-* protocols that require SOAP as their base? Are their message exchange patterns consistent with a RESTful style or a more message or RPC based approach?

The answer can be neither - some of the best Web Services out there today are neither REST nor SOAP. Many of Google's and Yahoo's Web Services are "REST-like" or "REST with extensions". Some of Amazon's Web Services are SOAP, some are quasi-RESTful, some are Query style and some RPC-ish.

It's all about what the current consumers want. Make the implementations such that it'll be easy to add additional flavors if you come across new consumers who need something else, but do it then, not now. They're not your consumers yet.

You might be gun-shy because you've made those 'narrow' decisions in the past and then it took so much effort to add additional capabilities like this after the fact. That's not a problem of the interfaces you expose, though - that's a problem with the implementation. Doc, it hurts when I do this. Then don't do that.

You do remember your consumers, right Brett? Oh, I'm sorry - did I break your concentration?

Perhaps if all of the great technology debates were framed within the bounds of Pulp Fiction, the world would be a less complex (if more violent and foul mouthed) place.

Actually, it could never be more violent than it is already (and this would be over-the-top movie stylized mayhem).

And if it's more foul mouthed, what the fuck is wrong with that?

Friday, August 21, 2009

I stand convicted of not showing my conviction more

Wow - great post here about responsibility and conviction. I know I'm guilty as charged of using the passive 'we' when pointing out 'challenges'. There is a difference between using tact and turning the workplace into preschool. "Everyone gets a prize, no one is better or worse than anyone else. Nap time between 2 - 4."

If everyone gets rewarded along with those who deserve it, the sharp and motivated folks on the team will see that, and then they'll be motivated to work somewhere else.

Before you know it, you've spun Darwin upside down with a corporate survival of the complacent and that becomes the prevalent attitude of the organization.

And that's no survival at all.

Thursday, August 20, 2009

I'm still waiting for a World World Conference

You know you've arrived as a technology when you get a 'World' conference named after you. Such apparently is the case with Apache Hadoop, the Java-based distributed data storage and analysis framework built by Doug Cutting and others initially while developing the Lucene-based Nutch Search Engine, on a foundation of its own distributed file system and Google's Map-Reduce programming model. This might seem like a particularly narrow focus for a technology conference but it goes to show the many and varied uses of Hadoop by the industry.
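Map-Reduce gets name-dropped a lot more often than it gets explained, so here's a minimal sketch of the programming model in plain Java - the canonical word count. To be clear, this is the model only, not the Hadoop API (the class and method names are mine); Hadoop wraps the distributed file system, partitioning and fault tolerance around these two functions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class WordCount {
    // The 'map' phase: emit a (word, 1) pair for every word in one input split.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                pairs.add(Map.entry(word, 1));
            }
        }
        return pairs;
    }

    // The 'reduce' phase: sum the counts emitted for each distinct word.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> totals = new HashMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            totals.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return totals;
    }

    static Map<String, Integer> wordCount(List<String> lines) {
        List<Map.Entry<String, Integer>> emitted = new ArrayList<>();
        for (String line : lines) {
            emitted.addAll(map(line));   // map calls can run in parallel across splits
        }
        return reduce(emitted);          // reducers each handle a partition of keys
    }
}
```

The interesting part is what's *not* here: because map is independent per split and reduce is independent per key, the framework can scatter both across thousands of machines without the programmer writing a line of coordination code.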

The conference won't really be about Hadoop, of course - it'll be about getting a look at the details behind all those interesting problems people are using the framework to help solve. This class of problem is anything but narrow. Massively distributed, massively concurrent data storage, retrieval and analysis/processing challenges are an ever growing reality for an ever growing percentage of IT shops across the world. It certainly is where I hang my hat (or where I would hang my hat if I were losing my hair and needed to wear one). We don't yet use Hadoop (or the techniques behind it) but we probably should.

Tuesday, August 18, 2009

Layin' about lying in bed - maybe it was something that I thought I said (feverishly caching agile memories)

I'm running a fever and have given up all hope of concentrating on anything work wise today so it's time to cruise through the blogosphere, catching up on my tech favs. Of course, what might seem genius to me now could be crap when I'm lucid again.

Keeping that in mind ...

I recommend taking a thorough read through this post on Java memory problems. It might seem obvious to many who have been in the Java game for a while, which is all the more reason to give it some attention. We tend to gloss over what is 'obvious' and then are surprised when we're tripped up by it. And we invariably are.
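Case in point, the most 'obvious' Java memory problem of them all: the unbounded, long-lived collection. Nothing here leaks in the C sense - everything is reachable, which is exactly the problem, because reachable means never collected. A minimal sketch (the class and names are mine, for illustration):

```java
import java.util.HashMap;
import java.util.Map;

class LeakyCache {
    // Static and never pruned: every entry stays strongly reachable for the
    // life of the JVM, so the garbage collector can never reclaim any of it.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    static byte[] fetch(String key) {
        // Grows without bound under new keys - a classic
        // 'not technically a leak' leak.
        return CACHE.computeIfAbsent(key, k -> new byte[1024]);
    }

    static int size() {
        return CACHE.size();
    }
}
```

The usual fixes: bound the cache, evict by policy, or hold entries weakly - or better yet, use a real cache (EHCache, say) instead of rolling your own.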

Here's an interesting white paper on rolling out agile to the large enterprise. It's a good overview of large scale organizational and process structure. Of course, read it with a realistic attitude. We all know there is never an end-all be-all anything and this goes in particular for things of an organizational nature. Organizations are made up of people and those pesky carbon-based water bags are always the biggest wild card, no matter how much we might want to make analogies with inanimate objects (it's an apples-to-carburetor comparison, which isn't very useful). The kinda folks that live by these analogies are of course human themselves (well, most of them are, most of the time). Which means that they inevitably end up demonstrating the fallacy of their own position, to others, anyway, if not to themselves.

A certain executive standard bearer for the periodic table very recently compared software developers to automated dishwashers during a discussion. Ironically, this was in a meeting whose primary goal involved quantifying the monetary value of agile. He had a valid point (there is no binary 'bad' and 'good' with people - it's a scale) but the analogy used to make it was bogus (and the scale is off - it's not a linear scale by any stretch).

But I digress, check out the white paper. All other things being equal, you have to organize such that your software boundaries align to the extent they can with your organizational boundaries, so that you minimize cross-org communications requirements (that's overhead not directly related to the value prop) and at the same time can better instill a sense of ownership and pride within the team by delivering deployable, usable software.

But maybe that's just the fever talking ...

Hey, good news for those of us who are EHCache users and have a stake in its continued success and growth but who are also fans of (or at least have seen some exciting potential in) Terracotta. Terracotta has hired on EHCache maintainer Greg Luck and will now be distributing and supporting the EHCache code base. They're gearing up to be gunning for Cameron and Oracle Coherence, so says Eric in not-so-many words.

Friday, August 14, 2009

Teenage Wasteland and Virtual Spring Cleaning

There's an interesting read on InfoQ (well, there's always something worthwhile there, it seems) that adapts Shigeo Shingo's Toyota Production System (TPS) Seven Wastes to Software Development. It's a summary of the ongoing work at Agile Software Development and certainly not a new topic. I've bitched about these seven wastes here at least once before. It might be my biggest professional/corporate pet peeve outside of misaligned organizational boundaries. And it's much 'easier' (relatively speaking) to solve.

One of the wastes listed is having 'completed' software that is not deployed into production. This is an all-too-common oxymoron in corporate technology shops the world over. As the saying goes, 'It ain't complete until it hits the street'. Or something to that effect.

Remember, deployment is the goal, not 'code complete' or 'QA passed'.

But the genesis of this is that damn misaligned organizational boundary problem sticking its nose into things again: when people are so stove-piped and specialized into their little corner of the product lifecycle, why be surprised when they think things are 'done'?

Most of the companies with this organizational malady were likely not always afflicted in this way. It's the result of growth (sometimes, very rapid growth). Fixing it is not easy but nothing worthwhile ever is. I think I've reached my quota of cliches for the day.

Tuesday, August 11, 2009


I see that Grady Booch is brewing up a software architecture handbook. I'll certainly keep an eye on this.

Booch is still a very good writer and his ideas around process and architecture in the early 90s were at the time pretty revolutionary. Fourteen years later, Object Solutions is still one of my favorite tech books.

Over the past decade, UML and the Unified Process (UP) have been targeted as 'heavyweight' and anti-agile. But much of the real criticisms are misplaced - it's not the modeling language or development (meta) process that's at fault but the literal way in which they've both been interpreted and regurgitated.

And I blame Rational (later, IBM), with their heavy handed tooling, particularly the Rational Unified Process (RUP).

RUP != UP.

Hell, UP doesn't even equal UP. It was badly named. Shoulda been Unified Meta Process or Unified Process Template.

UP was the first truly 'mainstream' development process (or process-like thing) espousing an iterative and incremental approach. Unfortunately, because it was in fact a meta-process and attempted to be an all encompassing template rather than a singular process or method, mainstream IT started doing what they do best: blindly/literally applying *everything* in it.

It was only a matter of time before you had RUP dictators mandating the whole kit and caboodle, and then you could hear the gears of progress grinding to a halt, budgets being blown and trees the world over mourning their genocide in the name of the grand Heavy Weight Methodology.

Yes, Grady worked for (in fact, founded) Rational and so deserves a lot of the blame. But he was off 'on tour' as a tech rock star by the time RUP and the prescriptive tooling were rolled out.

And none of this changes the fact that the man can communicate effectively. Both his writings and his talks are unfailingly interesting and usually useful too.

So I'll keep my eye on this latest from one of my tech heroes of the 90s. He may have been subsumed in my eyes by Fowler and Goetz and Bloch and Beck, but I'll continue listening.

He did after all, give us Clouds (no, not cloud computing - the arguably even more influential cloud diagrams).

Speaking of Grady, check out this discussion he had with Michael Cote on a variety of topics last year. Concurrency, DSLs, UML, etc. Pretty good. Boy, Grady is looking ever more the part of the wise old guru on the mountain top (or maybe just more like his 'namesake' from Sanford and Son). Pretty far from his days at the Air Force Academy.

Hey, I still buy the new Springsteen albums when they come out, and the Clash overtook him in my book back in 1978.

Monday, August 10, 2009

In the land of the blind, the one-eyed man is visionary

Apparently not satisfied with the quality of on-the-market technology talent in the real world, Amazon held a job fair on Second Life recently. Huh. Seems they were looking to fill actual, flesh-and-blood positions.

It's an interesting angle. Attracting and retaining quality people are two of the hardest things an organization can do. Talent retention is the more complicated of the two, but it's not a discrete thing (it's the culture and vision and day-to-day operation of the organization itself that is going to do the heavy lifting there).

Attracting good people also has to do with vision, culture and day-to-day stuff, since that often manages to leak out to the general community (especially if the community is small enough), but there is a discrete, deliberate and focused part of attraction that ya gotta work on as well.

Let's take an organization that is primarily a Java-based technology company. In the Philadelphia area in mid-2009, the Java community is fairly large but the percentage of those active in social events such as the Java User Group (JUG) is relatively small. They like to talk, though. About what makes an organization worth working for and what doesn't. Word of mouth about any Java centric organization of size spreads pretty quickly, for good or ill. It's uncannily accurate in the facts but rarely possesses all of the truth (usually because even the folks working at an organization likely don't know what that is).

It takes a concerted 'marketing' effort to help keep the community vibe positive, especially if other parts of your organization are unknowingly tipping the scale in the other direction. You have to make your technology investments very public, especially if you are digging into areas of interest to local talent. Give your best and brightest time to present at local events and actively participate in forums and such. The best software developers will always recruit more real talent than an army of recruiters will.

If you don't have technology visionaries somewhere in senior management, though, you'll always be swimming upstream. You might be able to band together with other like-minded folks and if you work hard enough and are lucky enough, turn the tide and reverse the flow. If the senior management realizes that the vision thing is missing among them and they trust enough in the rank-and-file technology leadership, this might just work. If however you have an activist executive force that feels they possess the vision, one of two things will happen:
  1. they'll be right and the organization will fail because the grunts on the ground do not believe in them
  2. you'll be right and the organization will fail because those on high don't believe in the ones charged with delivering on the dream.
In the end, the failure in either case won't be spectacularly obvious to outsiders and it won't happen quickly. Think white dwarf rather than supernova. In fact, it will likely be declared as a small victory of some sort. I'm sure if you've spent more than a few years in corporate America, you'll recognize these hollow "victories".

If you're committed in any way to your organization, ya gotta fight the power to prevent these "victories" from happening.

Software people know that "fail fast" can be a very good thing indeed.

New and Noteworthy, Tidbits and Doodads

Check out Building better Unit Tests on InfoQ - it's Mom and Apple Pie but it never hurts to have it repeated. On a somewhat related topic, new bits in JUnit version 4.7. Also, a post I missed when it appeared a couple of weeks ago from Fowler that is, as usual for him, thought provoking.

And wise words from Uncle Bob: All systems are collections of historical compromises. I'd add that the longer the compromises have been around, the more likely they've given birth to descendant compromises (such a family tree can span several generations in the space of a few short years). Often this family amasses a fortune in technical debt (rare is the system that has a family filled with technical trust fund babies - if you find one, latch onto it and don't let it go).

BDD != functional testing

Why did Behavior Driven Development morph into "functional testing", somehow separate from Test Driven Development? That was certainly not Dan North's intent when he devised BDD. Ironically, he came up with it as a response to people's misconception of TDD as being about testing. He thought that creating a 'new' way of doing things ("new" in the window dressing sense) with precise language would put the emphasis on behavior rather than testing. But behavior doesn't mean functional testing. You can test the behavior of a single method of a single class using BDD and that's as narrowly scoped a white-box unit test as you can get. If you express it using the BDD ubiquitous language, it's BDD (whether through tooling, or a Domain Specific Language or simply using Given/When/Then naming conventions within your unit test suite).
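To make that last point concrete, here's a hypothetical sketch (the class and names are mine) of a white-box unit test of a single stack operation, expressed in BDD's Given/When/Then purely through naming conventions - no tooling required:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class StackBehavior {
    // BDD expressed through naming alone: the method name states the behavior,
    // and the body walks through Given, When, Then.
    static boolean givenANonEmptyStack_whenPopped_thenReturnsLastPushedItem() {
        // Given a stack with two items on it
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");

        // When the top item is popped
        String popped = stack.pop();

        // Then it is the most recently pushed item, and one item remains
        return "second".equals(popped) && stack.size() == 1;
    }
}
```

Same scope as any unit test; only the vocabulary changed - which was the whole point.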

Time off for good behavior

Dan North has a great talk on Behavior Driven Development (BDD) posted over on InfoQ. I love the genesis behind BDD: it started with Dan being frustrated over people thinking that Test Driven Development (TDD) was about tests.

TDD is really about a way of designing and developing with a consumer/client focused bent: you start by walking in the shoes of your customer, using the software you haven't built yet. The customer in the case of TDD might be you - maybe you'll be consuming the methods of the class you're about to develop in order to develop a higher level class. But that's just a matter of scope and doesn't change the fact that you need to develop from the perspective of the person that will be using that particular piece of software.

We call these prototypical consumption exercises "writing tests", but in Dan's mind that was looking at it the wrong way - they're really about the behavior of your software. Thus Behavior Driven Development.

BDD has since morphed into something more than just another way of looking at TDD, providing a textual grammar (a "Domain Specific Language", albeit a small one) for specifying "given these conditions, when I execute this action, then I should have these results". Given, When, Then. Pretty simple.

"Getting the words right", Dan North said. Words being so important and so often dismissed. Getting words wrong (or at least not agreeing on or thinking about the same things when you say something) might be the single biggest source of pain in our industry (indeed, in the world).

This is one of the many reasons I'm a big proponent of another agile discipline, Domain Driven Design (DDD): its idea of a Ubiquitous Language.

Anyway, "Given, When, Then". Sort of acceptance criteria-like - thus BDD is often labeled an 'acceptance' testing process. I personally think that boxes it into something smaller than it deserves: I think it can be used for unit testing and functional testing and integration testing too.

The idea that you can execute textual user stories is pretty powerful. Done right, it implies no more (or at least minimal) separate "requirements documents" or "test plans" or the dreaded "traceability" documents that often accompany them. That's where the BDD tools come in. And there are a lot of them: RSpec, Cucumber (for Ruby) and JBehave (Java) are probably the most popular.

But it's not about the tools, be clear, but rather the people and their interactions - having more of the latter, helped along by the former. If you haven't, take a look: there is a wide world of sports beyond your father's unit testing.

There is also a rockin' talk on "pragmatic Scala" - two of my favorite things. Scala has everything I love: a strong sense of concurrency built into the language ('actors!'), static typing but with a brevity of syntax, a functional bent ('to iterate is human, to recurse divine'), and it runs on the Java Virtual Machine. Great stuff. The details of the language around how you invoke methods and what symbols you can use to name them have resulted in a lot of Scala code that's hard to read, but the idea of Scala is absolutely wonderful. I'm only just thinking in terms of its pragmatic value, with visions of Perl-like obfuscation dancing in my head. But still, check it out.

"Baa Baa Mr. Sheep. Careful, you're walking all over your own self now. Walk on, Mr. Sheep. Walk on." - R. Newman, 1977

The common take on effective governance is that it first and foremost requires an effective governor, with perhaps second being a decent overall legislature. It's hard to argue that these are necessary ingredients; however, my vote for top dog is a level of participation and, to take it further, ownership on the part of the governed.

This ground-up governance is usually down on the priority list for corporate leadership in my experience, and I think it's mostly because that's a much harder thing to realize. It's gotta start with the organizational model you choose: small teams of multi-disciplined individuals stand a better chance of enabling ground-up governance.

When I say 'multi-disciplined', I don't mean that everyone on the team can develop software, write requirements, test, coach, manage workloads, and so on. I do mean that everyone takes an interest in all those things to a degree. That a programmer is not afraid to attempt writing requirements, that a business analyst doesn't shy away from testing, that a quality assurance specialist doesn't balk at some light software touch-ups. You have your roles and your primary job but it shouldn't box you in and limit your ability to help the team: that's the priority in this ideal.

Ideally, each of these teams would own one or more relatively discrete and preferably deployable, deliverable, releasable products and would be responsible for those products from birth to death. Members of this kind of team are much more likely to feel empowered than specialists that belong to very large teams and produce a piece of the product, perhaps never to deal with it once development is finished, or never to see it except during "QA testing".

Even small teams that are walled off into these specialized corners have a hard time feeling like they own and are responsible for something.

It's the difference in accountability dynamics: the ideal being more akin to parent/child while the more common org resembles the math teacher/pupil relationship (or maybe guard/inmate is more accurate).

It's also the difference in efficiency and frustration levels. When you are specialized, and your team is only one piece of a much larger puzzle, you have to cross organizational boundaries much more frequently to get things done. And if Lean Manufacturing has taught us anything, it's that crossing boundaries is very expensive in terms of time and waste. Lean, unlike many concepts/disciplines/processes/analogies, does translate pretty well from manufacturing and engineering to software development.

This isn't news - there is plenty of empirical evidence to support this from a number of sources. Scrum, XP and other agile disciplines have been espousing this for many years now. Taiichi Ohno proved it back when the ENIAC was state of the art.

This is all aimed toward an ideal - I'm not kidding myself.

That's the hard part, especially if you've already got a workforce that through attrition (due to previous practices and culture) has rid itself of a large percentage of precisely those kind of people that would thrive in this ideal environment. It also means you've likely retained a goodly bunch of people that positively fear and loathe this.

But ya gotta start where you are and slowly reverse that attrition cycle.

Empowerment and self management of those doing the work is key to this. Hey, I'm not suggesting a proletarian overthrow of the bourgeoisie management (well, maybe I am to some degree, but with a wholly capitalistic bent). Rather, that we attract and retain those developers and testers and business analysts and product managers that are excited about ownership, decision making and self management in addition to their primary technical skills. In so doing, we can gradually tip the scale, making management more about technical coaching and mentoring. If the managers can't make this transition, either through philosophical opposition or because they don't have the chops, it's time to lose them through position elimination and attrition.

Sure, you still need executive and administrative/HR management in any corporation of size, but the ratio of management to worker bee in most IT departments I've worked with is tipped way out of balance, in some cases with more managers than developers or BAs or testers in a given group (and as a consultant for many years, I've worked in a good cross section of the industry). Mainly this happens because one naturally wants to advance in one's profession. Usually that means moving into management and often, the company loses twice (you lose a valuable and eager technical asset and you gain a mediocre and unenthusiastic manager).
But it is hard. Not nearly as hard, though, as fighting against the natural ways in which individuals organize absent traditional governance - throughout history, survival of the fittest has shown this to be true.

Local optimization is only good if your ownership of the things you're optimizing within that locality is sufficiently complete: if it isn't, likely as not your optimization will adversely impact another piece of the total package and in the end, the total package is what counts.
After all, you don't hear folks say, "Boy, this web site keeps crashing on me - it's rarely available but when it is, did you see that nifty calculation widget? Makes all the downtime worthwhile".

The other thing is unnecessary waste. One of the "lean" questions often asked is 'how much of your day is spent working on communications with other groups versus work that directly benefits the end product?' We should all aspire to minimize this with the understanding that we can't wholly eliminate it. Communications and traceability documents are waste (they do not directly benefit the end product). They may or may not be necessary waste. If they are necessary, rather than just saying 'that's the way it is', why not look further to see if we can change the landscape such that they become obsolete and unnecessary?

Mother nature, in the end, only yields so much. As a wise woman once said, 'it's not nice to fool Mother Nature' - just as true with organizations as it was with butter.

See Scott Ambler, Kelly Waters, Ken Schwaber, and Philippe Kruchten for starters and google +organization +software +ownership +empowerment +self-managing +benefits +challenges

"He could have had any woman in the world - but none could match the beauty of his own hand - he was not 'master of his domain'"

The goings-on of some 'masters' of their professional domains ...

As usual, some great stuff in the last couple of days posted to InfoQ:
If you don't already indulge, I recommend checking out Sam Pullara's JavaRants. Sam's a bright guy that hangs out at Yahoo these days as Chief Technologist. He was one of the original Weblogic Engineers and apparently made a pile of dough when BEA bought Weblogic back in 1998. He stuck it out for a couple of years at BEA and has since been bouncing around as a technical advisor and angel investor with various startups - slinging Java code from time to time.

Steve Souders is another guy you should check out. He's the creator of YSlow and all around front end engineer extraordinaire. He has jumped ship from Yahoo to Google (they must have some sort of engineering exchange program going on with all the cross pollination between the two). His blog is always entertaining and informative and he's got another book out, similar in style to his first one. Both are useful.

Drinking your way to drydock

One of today's areas of focus for me at work is producing some useful training materials on embedded documentation for software and best practices for Java development in our world of concurrency and multi-core systems. The guy tying these two disparate ideas together for me this Monday is the esteemed Brian Goetz.

Goetz is a master of caffeinated concurrency, well known in Java circles (as I've written here before, please grab Java Concurrency in Practice at once if you haven't and also catch his InfoQ presentations). Please check out 'Are All Stateful Web Apps Broken?' if you work with Java Web Applications - turns out most of us are deploying these things half blind with our fingers crossed and praying that the gods of concurrency will be kind to us. There is a better way.
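The gist of that talk: session attributes are ordinary objects shared across concurrently executing request threads, so mutating them in place is a data race. A hedged sketch of the safer shape (none of this is Goetz's code - the names are mine, and a plain field stands in for HttpSession): publish an immutable snapshot and swap it atomically rather than mutating shared state.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class SessionCart {
    // Immutable snapshot of the cart: safe to hand to any request thread.
    static final class Cart {
        private final List<String> items;

        Cart(List<String> items) {
            this.items = Collections.unmodifiableList(new ArrayList<>(items));
        }

        // 'Mutation' produces a new snapshot; the old one is untouched.
        Cart add(String item) {
            List<String> next = new ArrayList<>(items);
            next.add(item);
            return new Cart(next);
        }

        int size() {
            return items.size();
        }
    }

    // Stand-in for the session attribute: updated race-free via compare-and-set.
    private final AtomicReference<Cart> cart = new AtomicReference<>(new Cart(List.of()));

    void addItem(String item) {
        cart.updateAndGet(current -> current.add(item));
    }

    int itemCount() {
        return cart.get().size();
    }
}
```

Two requests adding items concurrently can no longer corrupt the list; the loser of a compare-and-set race simply retries inside updateAndGet.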

It turns out that Mr. Goetz also has a great (though somewhat dated) write-up on best practices of software embedded documentation for Java. It was written in 2002 and the JavaDoc tool has been updated a bit since then (for instance, package-info.java didn't exist yet for package-scoped documentation and you had to include a package.html file mixed in with your Java source). Still, bar-none the best single article I've read on the subject.

I hope you know this will go down on your permanent record

Things other than work (or maybe instead of work) that I am paying attention to today (technology version):
Well, back to work. Trying to come up with a way to best get across to folks some useful (and to contrast, some very not-so-useful) ways in which embedded software documentation (in our case, JavaDoc) can be applied. Mainly, that it not simply repeat what the code should already tell you (and if the code is not telling you, chances are the comments applied to it are smells pointing to a refactoring opportunity).

Code should not be commented, in general. Methods can be commented occasionally, but should be scrutinized to ensure the comment doesn't stink (ensure that it's telling you something valuable that is not possible to express in the method code). Classes, packages and modules are the things needing the comments: you want good example usages. You also want doc on policies that can't be expressed in code (thread safety, for instance), though annotations can be helpful in this case (Brian Goetz has some recommended concurrency annotations in his seminal Java Concurrency in Practice book).
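A tiny, hypothetical before-and-after of the method-comment smell (the names and numbers are mine): the 'before' comment restates the code and will rot the moment the rate changes, while the 'after' pushes that information into names so there's nothing left to say.

```java
class InterestCalc {
    // Before (the smell), preserved here as a comment for contrast:
    //
    //   // multiply the balance by 0.05 to compute the interest
    //   double i = b * 0.05;
    //
    // After: the names carry the 'what', so no comment is needed at all.
    private static final double ANNUAL_INTEREST_RATE = 0.05;

    static double annualInterest(double balance) {
        return balance * ANNUAL_INTEREST_RATE;
    }
}
```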

JavaDoc at the package and module level via a package-info class can be extremely helpful and is too rarely done. I can go generate JavaDoc for a module of code and even if there is no explicit JavaDoc markup made for the individual classes and methods, it provides me a lot of valuable information out of the box; however, it tells me nothing about the purpose of the package or module as a whole. That's what's missing and what often requires the creation of some external document that very quickly gets (and stays) out of sync with the code.
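For illustration, here's roughly what that looks like in a hypothetical package-info.java (the package, classes and policies named here are all invented). It compiles alongside the code, so it travels with it instead of drifting out of sync like an external document:

```java
/**
 * Order pricing: computes cart totals, discounts and tax.
 *
 * <p>Typical usage:
 * <pre>
 *   PricingService pricing = new PricingService(taxTable);
 *   Money total = pricing.total(cart);
 * </pre>
 *
 * <p>Policy the code itself can't express: all classes in this package
 * are immutable and therefore safe to share across threads, unless
 * their own JavaDoc says otherwise.
 */
package com.example.pricing;
```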

I'm trying to put together some simple examples that get this point across. Keeping this point across time and a large organization is the challenge. Those that will instinctively 'get' this already do, so I don't worry about them; the work is weeding out those that never will - since I'm not King and can't fire them on the spot - to get to those that just need a little bit of encouragement, some breathing room and a new perspective. This problem isn't specific to this topic - it's teaching and adoption and empowering everyone to own our software, to own their own careers and to know that you're not just a drone in Sector 7G waiting for the whistle to blow so you can slide down the dinosaur.

Lean on me

I've been reading up a lot on the roots of lean manufacturing. It's become in vogue to apply these principles to the practices of software development and while I applaud that, I think it's important that one understands the genesis of an idea, especially if that idea has lived through several generations and has been translated from one very different arena (in this case, from automobile manufacturing in the 50s, 60s, and 70s as part of the Toyota Production System) to another (envisioning, developing, deploying and managing software in the 21st century).

Most folks that have a cursory understanding of lean tell me it's about the reduction of waste - throwing away all of those things that do not directly add value to your endeavor. Queues, inventory, task lists, etc. are all bad because they are things that are sitting around and not adding value. 'Just in Time' production - that's the key. That's all certainly core to the 'what', but unless you dig into Taiichi Ohno's teachings, you miss the 'how'.

Central to the 'how' is the role of 'management'. To Ohno, a good manager is a master craftsman, mentor, and teacher. Managing is Teaching. That is the real power of this philosophy. If you believe management is about anything else, then you've already lost. You may go, grasshopper.

Oh, But I was so much older then - I'm younger than that now

What's worse - poor management or poor leadership? I saw the former get a beat down from the latter today and much as I am pained by bad management, it can be worked around. I don't think bad leadership can be, especially if it is unquestioned. It brings to mind that old Hans Christian Andersen classic, 'The Emperor's New Clothes'.

Sometimes I feel like a deck hand on the Exxon Valdez with Capt. Joseph Hazelwood at the helm. With all the officers going on about how fine the good Captain navigates around those icebergs after polishing off his third bottle of rum - please, have another, Captain! That's a great idea!

Hans, I wish you were here ...

Trying to learn how to walk like the heroes we thought we had to be

The brilliant Boy Howdy scribe Lester Bangs once wrote, "A hero is a goddamn stupid thing to have in the first place and a general block to anything you might wanna accomplish on your own."

He wrote this even as he shamelessly idolized (and in the next breath, often demonized) Lou Reed. Lester, like most great originals, was aggressively (even proudly) contradictory.

The rest of us mere mortals are just as contradictory - I think the difference is that we usually try and rein in (or otherwise hide) this 'flaw'.

I'm not sure when consistency became king. It's great for a lot of things - take for example the discipline of engineering. Which is just one of the reasons I contend that software development != software engineering (in fact, that there is no such thing as software engineering).

There are certainly elements of software development that require engineering-like rigor. For these elements, I am fanatical about consistency. This is the low-level stuff: naming, implementation patterns, and idioms - the kind of thing that is within the realm of Checkstyle. But architecture and design, even when applying common patterns and using a common (ubiquitous) language, lives more within the world of art than it does science.

The more we embrace this, the better off we'll be.

Everything's in context - nothing is absolute

Context is everything - the best decision in one context could very well be the worst move in another. That's why I love Domain Driven Design's Bounded Contexts and why I think the concept deserves consideration beyond the world of design and architecture. Organizational structure, for instance. Find your bounded contexts and organize around them. Groups should be dependent on other groups only via Context Maps.
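To make that concrete, here's a minimal sketch (in Java, with entirely made-up names) of two bounded contexts keeping separate models of 'customer' and depending on each other only through an explicit translation at the boundary - the code-level shadow of a context map:

```java
public class ContextMapSketch {
    // Sales context: a customer is a prospect with a credit rating.
    record SalesCustomer(String id, String name, int creditRating) {}

    // Billing context: a customer is just an account to invoice.
    record BillingAccount(String accountId, String invoiceName) {}

    // The context map in code: Billing depends on Sales only through this
    // translator, never on the Sales model directly.
    static final class SalesToBillingTranslator {
        BillingAccount toBillingAccount(SalesCustomer c) {
            return new BillingAccount(c.id(), c.name());
        }
    }

    public static void main(String[] args) {
        SalesCustomer prospect = new SalesCustomer("42", "Moe's Tavern", 700);
        BillingAccount account =
                new SalesToBillingTranslator().toBillingAccount(prospect);
        System.out.println(account.invoiceName() + " -> account " + account.accountId());
    }
}
```

Swap 'class' for 'team' and 'translator' for 'liaison' and you have the organizational version of the same picture.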

Speaking of Domain Driven Design, why does something with so many good ideas in so many contexts seem to have really taken root only in the .NET community? The contexts for its use are usually programming in the small (places where objects and components and the like naturally live) and not so much for programming in the large (distributed architectures like the web). But the resources that constitute a RESTful architecture (and I definitely have some RESTafarian blood in me) are often internally constructed of good old objects and components. It just seems like as an industry, we're slowly losing (or perhaps never really had) a culture of design in software development. I'm not talking here about Big Up-Front Design - I'm talking design as part of software development.

I'll argue that you can't really design software without implementing software (that it's not an engineering discipline). But you can certainly implement software without having designed anything (or at least without having designed anything well).

I think Big Up-Front Design has in fact killed actual design, the resulting software rarely looking much like The Design as Written, perhaps on tablets of stone. Usually those tablets have languished on the shelf for a good while since they were originally inscribed, perhaps by the architect on the hill (speaking as I do from my perch on high). The poor soul that has been assigned to "implement it" is left to figure out if there is any value to the design or if it's simply an obstacle to the development of a useful solution that meets the needs of the customer now (assuming these needs have been articulated in the first place: but that is another topic entirely).

Look at the man that you'd call uncle, having a heart attack round your ankles

I've been re-reading The Mythical Man-Month, still relevant after 34 years. One of the gems concerning the state of software documentation: "The trees are described, the bark and leaves are commented, but there is no map of the forest". This applies even more today than it did in the mid-70s when it was written.

'Let me have my little vicious circle' - Don Birnam

InfoQ has posted a great presentation by ThoughtWorkers Rebecca Parsons and Martin Fowler on the role of architecture in agile organizations. I highly recommend that anyone who works or has worked in an organization of any size check it out, particularly if you happen to be an executive within that organization and you're grappling with how best to (re)organize your technology department. Stu Iridium, I'm looking at you.

Also over on InfoQ from last week, the master of percolating parallelism, Brian Goetz, provides a window into Java SE 7 concurrency enhancements. I always recommend checking out anything Brian has to say, as he belongs to my pantheon of technology heroes (taking a seat alongside Joshua Bloch and Martin Fowler).

As I wrote that last sentence, I couldn't help but reflect on my tech heroes past. In the late 80s/early 90s it was Kernighan, Ritchie, and Thompson from Bell Labs, along with W. Richard Stevens, and Bjarne Stroustrup. Later in the 90s, it was James Gosling and Grady Booch. I guess technical heroes have a relatively short shelf life (not unlike technology itself).

Group Captain, please make me a drink of grain alcohol and rainwater - Gen. Jack D. Ripper

What really smokes my rhino these days are guys who should know better but keep insisting on adding more and more fine-grained configuration to our provisioning and deployment processes - We need more moving parts! We need total control! We need a binary editor so we can change byte 935 from 0110 to 0111 within dickcheney-Version666.jar in test888 because it's Tuesday at noon and my aunt has bursitis in her shoulder!

Yes, and our people can handle this so well! God forbid we be allowed to create a small set of complete machine images with all apps, configs, data, everything - type install and *nothing more*. No 'runbook' or 'manifest' with a bazillion manual steps and two hail Marys. If anything needs to change, you change it in source control and create a new build (which kicks off your unit tests, deploys the image, and then kicks off the functional, integration, and performance/scalability tests). All without any more dumbitude than automation can have. Granted, automation can certainly have dumbitude if its creator has it, but those pesky automated unit tests will kick in and raise the dumbitude detector flags in the form of red bars; if they miss it, the integration tests catch it, and the functional tests after that. At least it gives us a fighting chance.

I'm not bitter - and it is Friday. Amen.

Coding with Coad

Whatever happened to Peter Coad? He and Mark Mayfield wrote, in my opinion, the book on Java-based Object-Oriented Design in 1996 (2nd edition appearing in '98), and it ain't for Javaphiles alone. Their criteria for making the choice between inheritance and composition in a given situation are alone more than worth the price of admission. It's something that more developers should read and apply: in short, use inheritance only when:
  • Your object "Is a special kind of" the candidate parent object and not "Is a role played by" that object
  • You will never need to transmute your object to be in some other class
  • Your object extends rather than overrides or nullifies behavior
  • Your object does not subclass what is merely a utility class
  • For problem domain objects, your object is "a special kind of" role, transaction, or thing
Now, Coad and Mayfield go on to explain under what circumstances you would reasonably break these rules, but such circumstances are indeed exceptions.
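A quick illustration of the first two criteria, sketched in Java with invented classes: a SavingsAccount really "is a special kind of" Account and extends rather than nullifies its behavior, so inheritance fits; an Employee is "a role played by" a Person (and the person can stop playing it), so composition fits:

```java
public class InheritanceVsComposition {
    // Inheritance fits: a SavingsAccount is a special kind of Account,
    // and it extends behavior rather than overriding or nullifying it.
    static class Account {
        protected long balanceCents;
        void deposit(long cents) { balanceCents += cents; }
        long balance() { return balanceCents; }
    }
    static class SavingsAccount extends Account {
        void addInterest(double rate) {
            balanceCents += Math.round(balanceCents * rate);
        }
    }

    // Composition fits: Employee is a role played by a Person, and a person
    // may transmute (quit, retire, get promoted) without changing class.
    static class Person {
        final String name;
        Person(String name) { this.name = name; }
    }
    static class Employee {
        final Person person;     // the player of the role
        final String employeeId;
        Employee(Person person, String employeeId) {
            this.person = person;
            this.employeeId = employeeId;
        }
    }

    public static void main(String[] args) {
        SavingsAccount s = new SavingsAccount();
        s.deposit(10_000);
        s.addInterest(0.05);
        System.out.println("balance: " + s.balance()); // 10500

        Employee bartender = new Employee(new Person("Moe"), "E-1");
        System.out.println(bartender.person.name + " plays role " + bartender.employeeId);
    }
}
```

Had Employee subclassed Person instead, promoting Moe to Manager would mean transmuting the object's class - exactly the smell the second criterion warns about.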

To answer my opening inquiry as to the whereabouts of Mr. Coad, it seems that once his company TogetherSoft was acquired by Borland (I truly miss TogetherJ ControlCenter), he sort of drifted out of the software development game. He's a pretty religious guy, they say, and he's focused on that arena of his life. He and I don't share that particular passion, but I'm sure whatever he's doing there, he's doing it well. It looks, from his website, that he's also started up a charter jet service, along with launching several biblical software applications. Modeling in Color, Feature Driven Development, Party, Place or Thing: all I know for sure is that the software development profession is the poorer with his absence.

'I wish that for just one time, you could stand inside my shoes. You'd know what a drag it is to see you' - Robert Allen Zimmerman, 1966

I can't believe I missed this presentation when it first came out - very instructive and relevant in my neck of the woods. The advice is easier given than taken (and its lessons applied). But it's gotta be better than business as usual.

Anyway, on to other things.

Rummaging through the blogs, I thought that these posts were worth a plug:

Scala Conundrum

Man, I really want to love Scala:
  • It runs on the JVM and interacts well with Java.
  • It is statically typed. Dynamic language proponents insist static typing is overrated - that unit testing will catch all runtime type mismatches anyway - but testing can only show the presence of bugs, not their absence. Compiler-enforced typing can ensure the absence of this very narrow but prevalent class of error.
  • It has sensible type inference. The pain of static typing comes in all the boilerplate verbosity usually accompanying such a language, just to shove the compiler's face in it ('see this type??? here it is - remember it!'). Scala's type inference allows it to approach the brevity of dynamic languages such as Ruby, removing one of the biggest arguments against static typing.
  • It encourages immutability with val-ues as well as var-iables.
  • It provides a nice framework in the core library around concurrency modeled on Erlang's actors that should help you write efficient and, just as importantly, (more apt to be) correct multi-threaded applications. (See also this evaluation of actor concurrency, Scala, Erlang and Java.)
  • It is fully object oriented but also a functional language, so you can be functional when you need to be ('to iterate is human, to recurse divine').
But I can't love it. In fact, in practice, I don't even think I like it.

Its 'Achilles Heel', at least for me, is syntactical in nature. You can basically name methods using any symbols you'd like, enabling operator overloading - sort of; 'operators' are actually all just method calls here - in addition to Perl-like tendencies toward self-obfuscating code. Operator overloading was one of the many things I disliked about C++ and I don't like it any better 20 years later, even if the rules are at least consistent in Scala (anything can be a method call, and "." and "()" are optional for single-arg calls). The problem is that this is worse than mere operator ambiguity - it's *anything* ambiguity. This and other syntax choices can and have resulted in a body of Scala source that is downright impenetrable. And that's directed at well-written Scala: I'd hate to see the bad stuff. Perhaps it's spelled out best here.

Scala is young but there are numerous high profile web applications starting to use it and a number of exciting frameworks are being built on its back, the Lift web framework being perhaps the most well known of the bunch.

Can I get past this syntactical annoyance? Sure. Might I have to? It's possible. We're still waiting for an alternate language running on the JVM to reach critical mass. But with the Oracle acquisition of Sun and thus ownership of Java the language, it's hard to say what will happen to it - nothing much for some time, obviously, given the 14-year proliferation of Java-based applications the world over, but eventually. Even without Larry Ellison's imminent domination of our planet, the language is getting awfully bloated (the O'Reilly Nutshell book comes from a mighty big fuckin' nut!) and conversely is missing some essential ingredients, such as real closures. And it's just getting to be time for another to dominate. Given the large base of apps and the evolution of the JVM, I'm thinking something running on it has got the best chance (sorry, Rubyists).

Groovy's the most popular JVM alternative to Java to date, but it is, at its heart, a scripting language more than anything, good for that and for building some simple web applications. Not sure it fits as a server-side language, certainly not as the foundation for your platform or enterprise (if there is such a thing anymore).

So maybe I'll need to embrace my inner obfuscation one day and become a serious Scaladite (Scalapal-a? Scal-lad? Too sexist. Scalavocate?). Meanwhile, I'm holding out for something else. Of course, I wouldn't want my house to burn down.

Random Thoughts, Trivial Drivel

Fowler's Fakes
Martin serves some food for thought around testing strategies when dealing with remote services. Not super filling, but a tasty snack. I'm rarely disappointed when swinging by Fowler's restaurant for some brain food.

What is the Value you are Delivering?
This is a great read. Velocity is not Value. Sometimes I feel as though most people have completely forgotten why they are delivering something. It's not to get done on time or get done within budget or to crank your iteration velocity up. Somebody (hopefully, somebody) has the expectation that what you deliver will provide value to its consumer. If it doesn't, all of the other metrics are meaningless (well, they still have meaning, but only as a cautionary tale).

Wicket Smart
I started playing around with Wicket over the weekend. I like it quite a bit. Love the component-based nature and clean separation of concerns. Especially now with Wicket 1.4: Java 5 based, richly typed, and not backward compatible. And thank the gods of good sense over the demons of appeasement for breaking backward compatibility, just a little bit, in this particular case - hey, don't like it? They'll give you your money back and you can go crawl down your Java 1.4 wormhole back into 2002 :-).

That long black cloud is coming down, I feel like I'm knocking on heaven's door

I tell ya it seems like the players and the playahs in the cloud computing and ADTAAS (Any Damn Thing As A Service) space are doubling every week. Like most viral trends (whether they be real game changers or mostly hype), this is producing some creative and useful capabilities but is also attracting the shills, hustlers and sharks.

Then there is the gray area in between. Lots of vendors are feverishly slapping "Cloud Ready" on top of their now dated SOA stickers (which were slapped on top of the Component Based, J2EE Compliant, Distributed and Web stickers). It's like the clearance items on the last day of a going out of business sale or your Ski Jacket on the last day of the season after an active year on the slopes.

Some software firms are simply tossing their packaged offerings onto a provider cloud and presto-change-o, check out their new As-A-Service capabilities! "Unlike [fill in competitors], we've built this from the ground up to be [fill in latest buzzwords]!" Indeed.

A wise man once said, "Ya know, a town with money is like a mule with a spinning wheel - He doesn't know how he got it and damned if he knows what to do with it." This prophet, of course, is none other than Lyle Lanley, selling the town of Springfield on the joy and profit of buying and operating their own monorail. And this parable is mentioned only partially in jest.

The Simpsons provide a lot of sage advice in their fractured fables. Some young writer named Conan O'Brien spun that particular cautionary monorail tale of the consequences of wanting to be like The Other Guy when he has something shiny and new.

The Simpson clan's Joy of Sect provides similar enlightenment on the dangers of simply following 'The Leader'. The 'Cloudists' and 'SAASians' of the world today remind me in some ways of the Movementarians in that episode, managing to brainwash virtually an entire community with vague promises and subtle threats (well, not so subtle, perhaps).

Who needs the Bible or the Koran (or Dianetics) when you have the Simpsons?

I should be very clear that I think Cloud Computing and especially Infrastructure As A Service is much more a game changer than a lot of hot air. That's not even debatable: the game has already been changed by it for 1000s of companies who couldn't otherwise have afforded a web presence for their business (and in any case were most definitely priced out of handling their peaks and valleys of demand online). The idea that you don't have to buy or lease your own hardware and can scale up or down based on need (or on your available budget) can't be overstated. Well, it can be (and has been) overstated for fun and profit by the shills and shamwows I've been bemoaning, but it's a powerful thing nonetheless.

In the end, ya just gotta remember that not everything is meant to be cloud enabled or provided As-A-Service. I see what I thought were otherwise reasonable people trying to twist their square requirements into round fluffy white holes and star-shaped As-A-Service slots simply so they could have the industry analysts label them into the magical wavy quadrants with the most snap-crackle-pop, virtual-cloud-as-a-service being the snappiest right now.

And the cloud doesn't write your business applications for you. You still have to do that. They still probably have to abide by a set of business rules (which can include regulatory and security constraints that go beyond what at least many of the big cloud providers can offer today).

There are also technical as well as other constraints at play here. Bandwidth, distribution, data ownership/protection, legal worries. They all still apply. Lots of smart folks are working through most of these concerns and they are all in the end solvable if they haven't yet been. But they are solvable in the large, not necessarily solvable for you.

For instance, if you still need to talk to a system of yours that cannot be provisioned in the cloud, you'll probably need to at least ensure that the conversations are not lengthy or chatty, lest you surrender scalability in your pursuit of scalability (or marketability). If that's not possible due to the centralized nature of such a system, then you might have to hold off on your trip to the heavens (at least in this particular case).
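To sketch what 'not chatty' means in code (in Java, with an invented PriceService interface standing in for that can't-move-to-the-cloud system), compare one round trip per item against one round trip per order. The fake implementation below just counts simulated trips:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CoarseGrainedSketch {
    // Hypothetical remote interface to a system stuck outside the cloud.
    interface PriceService {
        double priceOf(String sku);                      // chatty: one trip per SKU
        Map<String, Double> pricesOf(List<String> skus); // coarse: one trip, period
    }

    // Fake implementation that counts round trips instead of crossing a network.
    static class CountingPriceService implements PriceService {
        int roundTrips = 0;
        public double priceOf(String sku) {
            roundTrips++;
            return 9.99;
        }
        public Map<String, Double> pricesOf(List<String> skus) {
            roundTrips++;
            Map<String, Double> out = new HashMap<>();
            for (String s : skus) out.put(s, 9.99);
            return out;
        }
    }

    public static void main(String[] args) {
        List<String> order = List.of("duff", "squishee", "krustyburger");

        CountingPriceService chatty = new CountingPriceService();
        for (String sku : order) chatty.priceOf(sku);    // N trips across the wire

        CountingPriceService coarse = new CountingPriceService();
        coarse.pricesOf(order);                          // one trip for the lot

        System.out.println("chatty trips: " + chatty.roundTrips);
        System.out.println("coarse trips: " + coarse.roundTrips);
    }
}
```

With real network latency in place of a counter, the chatty version pays that latency once per item - which is precisely how you surrender scalability in your pursuit of scalability.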

Like anything, due diligence shouldn't be skipped in the mad rush to be relevant. Take a peek into each vendor's 'Forbidden Barn' before you eat what they're cooking.

Finally, check out others that have gone before you with a similar profile who might prove instructive. You'll want to see how the North Haverbrooks of the world are making out.

Git-y-up - chasing the version control flame at a leisurely pace

What Distributed Version Control System (DVCS) did you wake up with this morning? (And what were you thinking/drinking last night that put you in this awkward position?)

We've been moving all of our new development at work from CVS to Subversion (SVN) over the past year, following an industry trend in that direction. Mainly because CVS wasn't built from the ground up to be distributed and because we change/add source directory structures more than occasionally (new Java package, for instance). CVS doesn't deal with those directory structure changes easily, at least not outside of straight-up additions/deletions, because it only versions files (not always a bad thing, as I'll get into later). If you're somewhat-obscure-movie minded, you can think of it like the geographic difference between Elm and Main Street in Pleasantville - Elm is like CVS - it's shorter and only has houses.

And I'll stop-hyphenating-everything-to-make-up-words-now.

Of course we're following behind that CVS to Subversion industry trend by several years and are about to get lapped on the track as a good chunk of the open source community are filing for SVN divorce, largely falling into the arms of Git, though others have flown the coop with Mercurial and Bazaar.

Branching and Merging woes are the usual causes cited for this move away from Subversion and I understand why. Sure, SVN allows you to branch pretty much anything (because branches are literally just copies of your whole enchilada). It's the merging that kills ya. It's better than CVS in that you actually can branch and merge directory structures. And it's gotten easier with SVN 1.5 and 1.6, but the dreaded merge can still be a very painful and time-consuming endeavor.

Where Subversion takes a step back from CVSland, in my opinion, is in the way it represents the versioned data: an opaque-to-you-'cause-it's-binary database. Sorry, an outbreak of hyphenitis again. With CVS, your file is still a file (granted, it's all versions of your file with a bunch of metadata, but you can hand-fix it if something goes wrong). If the SVN repo gets corrupt in any way, well - hopefully your backup is recent and you only lose a few hours.

Git doesn't have the Subversion/CVS concept of checking files and modules out from a repository. With Git you clone the master repository and from that point on, the module you're working on is also a full-fledged Git repo (you can commit to it, you can branch and merge your files to your heart's content). I'm not here to sing the praises of Git - I don't know enough about it yet. And others are doing a fine job of tooting Git's horn. I think more than Git itself, the thing that gets the OSS community excited is GitHub, a source-code hosting / collaboration / cloudy kind of thing. SCM served up SAAS-style.

I know even less about Mercurial and Bazaar than I do Git. And of course there are a number of popular commercial products (Perforce, ClearCase, PVCS, etc.). Some better than others, all have their own special fleas. As that giant of philosophy, Gordon Gekko, once said, 'Pick the dogs with the least fleas.' There you go. Of course, one person's fleas are another person's flea circus. 'Buddy's learning.'

So we'll keep an eye on the exodus even as we continue to mush onward with Subversion. I hope the Subversion maintainers are taking some cues from these other DVCS offerings as they plan and prioritize upcoming releases. Perhaps by the time folks start migrating off one of these "newbies", there will be room for us to climb aboard.