Thursday, September 29, 2011

CDI and Enterprise Archives

I would be the first one to admit that I do not yet have total mastery of Contexts and Dependency Injection (CDI) in Java EE 6. I understand it fairly well, I believe. But there has not yet been a chance for me to use it on a real job, since no clients we deal with have moved up to Java EE 6. So far it has just been experimental projects on Glassfish 3.1 or JBoss AS 6/7.

Having said that, it took me aback when I tried to inject (with @Inject) a session bean from an EJB JAR, in an EAR project, into a JSF managed bean. I used @Named, not @ManagedBean, and the scope annotations were CDI scopes, not JSF ones.
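For concreteness, the arrangement looked roughly like this (all names here are invented for illustration; the session bean lives in the EJB JAR module and the managed bean in the WAR, both inside the same EAR):

```java
// --- in the EJB JAR module of the EAR ---
import javax.ejb.Stateless;

@Stateless
public class GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// --- in the WAR module of the same EAR ---
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named            // CDI-managed, referenced from a page as #{greetingBean}
@RequestScoped    // a CDI scope, not the JSF-managed-bean machinery
public class GreetingBean {

    @Inject
    private GreetingService service;  // this cross-module injection is what failed

    public String getMessage() {
        return service.greet("world");
    }
}
```

(Each class would of course go in its own source file; they are shown together here only to make the module boundary obvious.)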

When I tried to invoke a JSF action on the managed bean, I kept getting the standard NPE that we see when there is no beans.xml. In fact the wording was identical. It indicated that the JSF managed bean could not be located. Now, I certainly had a beans.xml - several, in fact. And some experimentation with annotations confirmed that CDI was in effect.

So why was it that CDI could not locate the JSF managed bean?

Turns out that it had everything to do with trying to inject an EJB from the EJB JAR into the JSF managed bean in the WAR. In all of my early experiments I had used one monolithic WAR that included the EJBs, so I never had this problem. And for the life of me I could not conceive of a situation where you could not inject EJBs from an EJB JAR in the same EAR as the WAR.

Well, evidently you cannot. At least not in Glassfish.

I finally had to use the approach described here: inject-a-stateless-ejb-with-inject-into-cdi-weld-managedbean-jsf-1-2-ejb-application

I should note that while this link references JSF 1.2, I also had to do this for JSF 2.0. I should also note that the server exception was very misleading - it was the @Inject that was failing, not the @Named on the JSF managed bean.

Based on various problem reports:

JBoss AS 7: CDI/EJB Injection in EAR Deployments
CDI Broken between EJB Module and JPA Utility JAR

it sure looks to me like all of this has not been resolved. It could be that the specifications are clear on this, and I plan to do some intensive reading. But what good is that if the implementations are not getting it?

This article, JBoss 6 and Cluster-wide EJB Injection, sort of leaves the impression that we should not be having this kind of problem. Furthermore, the discussion in Chapter 8 of the Seam documentation, Producer Methods, leads me to believe that @Produces ought not to be required, since we already have an EJB. Needing a producer just to make an EJB from an EJB JAR available for injection into a WAR - in the same EAR - seems pretty clunky. I just wonder why I should not use @EJB instead.
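To make the workaround concrete: the linked approach amounts to a small producer class in the WAR that acquires the EJB with @EJB and republishes it as a CDI bean. A minimal sketch, with hypothetical names (GreetingService standing in for the session bean from the EJB JAR):

```java
import javax.ejb.EJB;
import javax.enterprise.inject.Produces;

// Lives in the WAR. The EJB container resolves the @EJB reference, and the
// @Produces method then makes the same instance available to CDI, so that
// @Inject GreetingService works in the WAR's managed beans.
public class GreetingServiceProducer {

    @EJB
    private GreetingService service;

    @Produces
    public GreetingService produceGreetingService() {
        return service;
    }
}
```

Which is exactly the clunkiness I am complaining about: the producer adds no value beyond ferrying an EJB across a module boundary.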

Anyway, back to the books, I guess. There are still decent reasons why one might want an EAR with separate EJB JARs, and WARs, and JPA utility JARs, and what not. And I refuse to believe that the specs mandate that an awkward workaround has to be applied for EJBs that are to be injected, if they reside in a different module than the one that contains the injection point.

Saturday, September 17, 2011

Professional Development Redux

It is worth reading Why I Go Home: A Developer Dad’s Manifesto. There is a lot of truth to the observations that developer hours can be unnecessarily long, or be haphazard and unpredictable. And any push towards moving the North American mindset away from "live to work" towards "work to live" has a lot of resonance with me.

I will say this though. I get the feeling that Adam Schepis is atypical. He says he loves his career, and he says he loves crafting great code. I believe him. He maintains a blog with lots of technical content. And - a key consideration for me - he wrote Drinking from a Firehose. In my opinion, anyone who cares enough about his software development career to be concerned about managing his professional reading, let alone what to read, does not have to defend his decision to try to keep his working hours sane, or to spend time with family. Because he is already doing what most career programmers do not do, which is pursuing professional development on his own time and his own dime.

So let us talk about the typical programmer. You know, the man or woman you have to work with. You yourself probably do not fit the bill, because you are reading this.

Let us proceed on the assumption that there is rarely any excuse for routine long working days, unless you are in your early twenties, working for a startup, and you have willingly chosen to spend your life that way. There is no excuse for long days even in the waning weeks and days of a project - that is just piss-poor team planning and execution if and when it happens. That it happens a lot in our profession simply indicates that a lot of us, myself included, are occasionally (or often) piss-poor planners and executors.

So let us assume that we have solved all of those problems and are not in fact working frequent long days. With all due respect afforded to Adam, I suggest that if he is in such an environment, and he does not have the power to effect necessary changes, he needs to find another place. Because otherwise his decision to keep sane hours will hurt him. It is a commendable goal, but it can only happen in an organized workplace. So find that organized workplace first.

In this M-F 8-4 (or 7-3, or 9-5) environment the question then becomes: what are your personal responsibilities for professional development?

I have worked in environments - government labour union environments - where some developers were able to safely refuse not only any personal professional development on their own time, but even personal professional development offered to them on taxpayer time. That is an aberration and perversion, and I will not discuss that further. What we are talking about is the typical situation, where your employer is paying you to be productive.

Most good employers, if possible, will pay for training and education. Most good employers will permit some of these activities to happen on paid time, particularly when the training and education involves employer-driven sharp changes in selected technologies to be mastered. If a company directive comes down to re-orient on PostgreSQL rather than Oracle, or that a big push is to be made into CMIS web services, it is not only unrealistic for the employer to expect all employees to ramp up on their own time, but it is also not in the employer's best interests.

But what about foundation technologies? These are "bread and butter" technologies. They consist of all those technologies that a developer in a certain niche (domain sector, seniority, work position etc) would reasonably be expected to know no matter what. If someone mostly deals with enterprise Java applications, that set of foundation technologies includes Java SE and Java EE. If someone deals with enterprise .NET applications, that set of foundation technologies includes .NET Framework, C#, ASP.NET MVC, WPF (maybe), WCF, and maybe WF (Workflow Foundation).

Is it reasonable to expect an employer to pay for developer training and education for moving to Java EE 6, when said same developer does nothing but Java EE 5 already? I have seen some programmers argue exactly this case, and while I believe that they are dead wrong, the argument does need to be refuted.

Before doing so, let us discuss informal on-the-job-training (OJT). By informal I mean just that: as a developer you encounter something new during paid hours, and with no further ado you teach yourself about it. Not one of us knows all we need to know, so we continually encounter unfamiliar things, like unfamiliar APIs. Some degree of informal OJT is expected, accepted and even encouraged.

But at some point informal OJT can be abused. If there is too much of it then it needs to be changed to either formal OJT, or the developer should be learning on their own. The main reason why too much informal OJT is problematic is because it skews and distorts project management: how can you ever hope to estimate effort, or trust your designs, when your developers evidently did not know all that much about the target technologies before they started?

As mentioned above, a good case for formal OJT is when the developers could not reasonably anticipate the need for the new technologies, but the employer requires the knowledge. After all, we cannot know everything all the time.

And what can a developer reasonably anticipate? This goes to the refutation I mentioned above. Well, this is not all that difficult to define. Basically, if a technology is foundational for your business sector then you can anticipate needing to know it. If a new hire is expected to know it, then you had best know it too, and not on the employer's dime either. Would it be acceptable for a job candidate seeking work at an enterprise Java shop in the year 2011 to say that they do not know anything about Java EE 6, which has been out for almost 2 years (since Dec 2009)? Well, no, it would not. So why is it OK for an established employee in the same shop to slide?

In fact it is not OK for the established employee to do that.

Ultimately it all boils down to common sense. Software development is a fast-moving profession where the majority of employers do try and meet us part-way on training and education issues. Note the part-way. This means that all of us - job candidates or established employees - have a responsibility to spend some of our own time keeping up. And it is not rocket science to figure out what you should be keeping up with on your own.

Please do not tell me that you have zero personal responsibilities in this regard. If you tell me that in a professional conversation, and you are my colleague or my employee, then at that point you are baggage in my eyes. I am sorry, but you are a self-declared liability.

There are some software developer jobs where it takes a lot of personal professional development to keep up. In fact it can occupy so much time that you cannot pursue some other activities at the same time. This may include good parenting in extreme cases (although I have never seen any cases where this had to be so; other factors always caused the real problem). Fact of life. If this is so, assess your priorities, like Adam has done. Make your choices, accept that there are consequences, and move on. If you have to change programming jobs, do it. If you have to change careers, do that. But please do not tell me, or anyone else, that you have no personal responsibility to self-educate and self-train at all. Please. If you genuinely believe that, you should go on welfare.

What are reasonable rules of thumb for own-time professional development? I am not talking about your under-25 caffeine-pounding needs-4-hours-of-sleep no-real-life coding phenom here; I am thinking about us regular people who have a passion for software but also have a passion for family, friends, riding a mountain bike, fishing, barbecuing, scuba, and playing golf. What is reasonable for us?

Here is my rule of thumb: fifty (50) hours per month. I actually exceed this by a lot, but I know why, and I have a reason for doing it. But I still do not skimp on my recreations and hobbies and relaxation; mainly it is that I am past parenting age. The fifty hours per month rule is for all you 25-45 types, parents or no. Here is how I arrive at the figure: one hour per day for reading, which works out to about thirty hours per month. Read your blog articles or read your books, whatever. The other twenty hours are for personal coding projects - this is where you experiment. I happen to think this experimentation is essential.

This may seem like a lot of time, but it is very doable. We - all of us - can waste a lot of time each and every day. How much sleep do you need? Eight hours at the most, but usually seven will do. That leaves about seventeen waking hours a day, or roughly 520 hours per month. Fifty hours is less than 10 percent of that. You think you do not have 10 percent wastage in your time use? Please - think again.

We have the time, and we have the obligation. Enough said.

Friday, September 16, 2011

If You Think You Need A Better Web Framework...

...then you are identifying the wrong problem.

About two decades after my first exposure to programming (FORTRAN IV on punched cards) I started with the World Wide Web. I carefully crafted a simple HTML page with some <br /> and <i> and <h2> and <p> elements - you get the drift - and opened it as a file in NCSA Mosaic. I do not mind admitting that I was really chuffed. For the next few years after that I did not really program to the Web a whole bunch; when I did it was mostly C and Perl CGI.

Although PHP and ColdFusion emerged at about the same time, I did not use PHP in paid work until about 2006, and then only briefly. ColdFusion was actually my first reasonably high-powered web development language, and I will mention it again in a moment.

I started dabbling with Java servlets just about as soon as they became reasonably viable with the release of Servlet API 2.2 in 1999. Ever since then the portion of my work that has involved web applications has been about 75% Java EE, 20% ASP and ASP.NET, and 5% other (like Ruby on Rails and Scala Lift).

It is at this point that I will make a rather shit-disturbing claim, if you will pardon the language. Namely:

Decent programmers using Allaire ColdFusion (specifically the CFML markup) were as productive in writing useful web applications in the late 1990's as decent programmers are, using anything else, in the year 2011.

By decent I mean average or somewhat better than average, but not stellar.

I have a second assertion to make also, but I will lead into that by referring you to David Pollak's comments about Scala: Scala use is less good than Java use for..., and Yes, Virginia, Scala is hard. I happen to totally and unreservedly agree with everything David says in these two articles. I will supplement his observations by saying that most web application programmers do not have the chops, time or passion to leverage the best out of any language, framework or platform. For example, I think Java EE 6 kicks ass, and I also believe that most enterprise Java programmers will never get enough of the necessary nuances, idioms and sheer facts about the various Java SE and EE APIs, and core Java as of JDK 1.6/1.7, to be particularly good in that environment.

In effect I think you can extend David's argument to almost anything out there. Writing good Java is hard, and writing good Java EE is harder. It is true that writing good Scala is even harder, but why worry about that when most coders out there cannot even write decent C# or Java or Ruby or Python?

Having said all that, here is my second claim:

The specific web framework you choose to use in language X is largely irrelevant. It is irrelevant for the majority of web application programmers because they are only average at best, mediocre or terrible at worst, and so cannot take advantage of the features that make one framework better than another. It is irrelevant for great programmers because they can make pretty much anything work well...and anyway there are bigger problems to solve in application creation.

I mean, let us be realistic here. In my entire web application writing career I do not remember a single project ever succeeding or failing because of the underlying technology. I really, really do not. I have had experience of classic ASP and CGI applications - truly ugly things - that reliably solved their respective problems, for example. And I have had lots and lots of exposure to web applications that failed even with the latest and greatest web frameworks and the best application servers. Do not get me wrong - I can think of more than a few projects that would have failed if lessons learned either in prototyping, or in proof of technology (POT) work, or in early stage coding, had not been acted upon quickly and decisively, and often enough some technologies were discarded and replaced. My point is that, given due diligence and proper research and preparation and project management, I cannot think of any project that failed because of the final, carefully chosen technology stack.

And carefully chosen frequently means nothing more than that it is reliable, your team is reasonably familiar with it, and there is good support for it. It does not have to mean that it is the best, not by a long shot. I still spend a fair chunk of time now maintaining applications that selected neither the best language, nor the best libraries, nor the best frameworks, nor the best servers...but those choices were (and are) good enough. The applications themselves sink or swim based on sound software engineering.

The common theme here is that web applications - software applications period - succeed or fail based on tried and true software engineering principles. The various stakeholders talk to each other, people understand what the problem is, and people understand the solution. Let us not forget that all that web frameworks do is help us stitch together functionality - if the functional components are crap, or the designers and developers do not thoroughly understand the stitching (workflow), then it does not matter how great your web framework is.

Keep in mind that most software application teams do not do a great job at analysis or design. It may often be an OK job, but is not usually a great job. A very good framework cannot save mediocre analysis and design. Not only that, if the analysis and design is above average, then almost any framework will do. To reiterate:

The choice of web framework in language X is largely irrelevant.

Do not get me wrong - most of us have our pet frameworks in any given language we use. But that is all those choices are: pet choices.

The next time you interview someone to help with implementation of a web application, ask about MVC or CDI, and various types of controllers, and lifecycles and scopes, and application security, and various types of dependency injection. Please do not nitpick about Struts 2 versus JSF 2 versus Spring MVC versus Wicket versus...well, you get the idea. Seriously, who cares?

Mind you, if someone makes a big deal out of how expert they are at JSF 2, say, it cannot hurt to ask them some really detailed questions. Just to see if they are full of it. But do not waste a lot of time on this.

The web framework you use is relevant to maybe 10 percent of your implementation effort, at most. If it is more you are skimping on something, or it is a toy application. So why is something that is fairly immaterial in the big scheme of things so often blown out of proportion? You get religious wars about Struts versus Spring versus JSF, and in the meantime half your developers do not know how to use JPA properly, do not understand concurrency at all, have never in their life written an actual servlet, their eyes glaze over when you mention coupling and cohesion, and many of them have never written an inner class in their lives. Even better, three quarters of your Java developers know only Java.

A final note: One of my first ColdFusion projects involved pumping HDML and WML out to cellphones, and HTML out to early PDAs (the first Palms and PocketPCs), with the application integrating with credit card payment. This was over a decade ago - nobody else was doing this at all. The application was reliable - totally rock-solid, maintainable, and performant. It was easy to extend and to modify. And I firmly believe that in the year 2011, with your pick of technology stack, that 8 or 9 out of 10 teams could still not do a better job in a comparable amount of time.

Thursday, September 8, 2011

Cars and Income

This is a non-IT post...mostly. I have been interested for a long time in how the cost of car ownership has managed, historically speaking, to consistently take a large share of earned income. In contrast with many other depreciable assets - consumer electronics like personal computers, cameras, sound systems and video systems, the huge majority of appliances, tools, to name just a few categories - the purchase and care of the personal automobile has somehow always managed to eat up roughly the same fraction of the median income.

One would almost think that the automobile industry has things planned out like this.

Before discussing reasons and factors, let us look at some rough numbers. I have simply pulled some data from the interesting website The People History. I will start with 1955 - it is pretty much the right timeframe for my Dad to have had his first car.

I have invented an average cost of new car / average yearly wages ratio, in percent, and call it CaPoI = Car as Percentage of Income.

1955: average cost of new car $1,900, new house $11K, gallon of gas 23 cents. Average yearly wages $4,130, average monthly rent $87. CaPoI = 46%

1965: average cost of new car $2,650, new house $13,600, gallon of gas 31 cents. Average yearly wages $6,450, average monthly rent $118. CaPoI = 41%

1975: average cost of new car $4,250, new house $39,300, gallon of gas 44 cents. Average yearly wages $14,100, average monthly rent $200. CaPoI = 30%

1985: average cost of new car $9K, new house $90K, gallon of gas $1.09. Average yearly wages $22,100, average monthly rent $375. CaPoI = 41%

1995: average cost of new car $15,500, new house $113K, gallon of gas $1.09 (yes, still). Average yearly wages $35,900, average monthly rent $550. CaPoI = 43%

2000: average cost of new car $24,750, new house $134K, gallon of gas $1.26. Average yearly wages $40,340, average monthly rent $675. CaPoI = 61%

For very recent stats I'll go to other sources. For a 2010 median US household income, $54K is about right. This is all households, any number of income earners. For the average cost of a new car right now, the National Automobile Dealers Association estimates this at $28,400 for 2010.

So the CaPoI is about 53% for 2010.
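The arithmetic behind these figures is trivial; a throwaway sketch (the class and method names are mine):

```java
public class CaPoI {

    // Car as Percentage of Income: average new-car price divided by
    // average yearly wages, expressed as a rounded percentage.
    public static long capoi(double carPrice, double yearlyWages) {
        return Math.round(100.0 * carPrice / yearlyWages);
    }

    public static void main(String[] args) {
        System.out.println(capoi(1900, 4130));    // 1955: 46
        System.out.println(capoi(28400, 54000));  // 2010: 53
    }
}
```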

To be more realistic, the People History site may have used genuine individual average income, not household income. Because of the increasing prevalence of two-income households, my 1985, 1995 and 2000 CaPoI figures should probably be somewhat lower on a household basis. Nevertheless, the 2010 household-based CaPoI is still 53 percent, so a counter-argument could be made.

Point being, with the exception of a (possibly spurious) ray of hope in 1975, the cost of ownership of a car has not come down in over half a century.

The usual explanation for all this is simply that we have got much better cars. Better engines, better electronics, better frames and bodies, and so forth. That is all well and good, but manufacturers in all areas are accomplishing similar things, and their prices are plummeting. Also, one thing that car manufacturers have not done is improve the longevity of their cars; despite popular conceptions to the contrary, this statistic may not have gotten worse, but it absolutely has not improved. These days, according to reliable sources, the average age of the US car is about 8 years, and vehicles tend not to last past 13 years.

Manufacturers and dealers tout not only engine performance and efficiency improvements, but safety features. Again, that is all well and good, but how much longer are they going to ride that particular gravy train? The numero uno safety feature is the good old safety belt, which was mandated for new cars just about the same time as our informal survey started, in 1955 or so. Airbags are another major safety value-add, but they have been around for a long time too. Good tires and proper inflation are significant for safety, but these are not directly affecting the price of a new car. Past all that, it is arguable as to whether we need to make an average passenger car survivable like a NASCAR hotrod. Let the consumer spend their money on proper maintenance and the occasional defensive driving course instead.

Although there is not incontrovertible proof in these numbers, and one would rather cynically have to believe that the car makers and dealers care about profit, the evidence does strongly suggest that the industry has identified the consumer pain point, and is using it to the maximum. Cars are now essentials, not luxuries. They support how we live. And the industry knows damned well that they have a captive market.

Best counter to this: maintain your car. Baby it. Make it last 20 years. Or longer. And watch the industry cry. Although they can always pressure politicians to outlaw really old cars. But for now, try and make your car last 20 years. Please.

Friday, July 29, 2011

Changing Business Needs

I just read What Happened to Software Engineering by Phil Japikse. It's a decent article as far as it goes, but on one point - a central point - it perpetuates an assumption that I believe is dead wrong. To quote:

"Yet a significant number of software projects were failing outright, and many more went significantly over budget and/or missed deadlines. This was due to several factors, but probably the most significant have been both the speed of change in software and hardware and the speed of change in business needs. These changes in the software industry would be similar to that of having brand new vehicles requiring a complete redesign of the roads they drive on about every 18 months."

Both emphasized assertions are incorrect. Speed of change in software and hardware? Who is forcing anyone to use the latest bleeding-edge hardware or software? Nobody - that's who. And in fact a large percentage of sizeable projects work with hardware and software that is changing slowly or not at all. Old or ancient servers, databases, browsers, languages and libraries have substantial or majority market share. It's actually frequently the case that problems - other types of problems - are caused by customers not adopting changes quickly enough.

Not once in 20+ years in the IT business have I, or any of my colleagues or professional acquaintances, encountered a situation where hardware and supporting software was a moving target during the course of a project - at least not to the degree that it caused a problem for planning and design. I'm not saying it never happens, and for a really long-duration project (5 or more years) I can see it being a factor, but usually this is a non-issue.

More importantly, business needs proper do not actually change very much at all over the course of a typical project that lasts less than five years. We just think they do. What really happens is that we start the average project with missing, incomplete or incorrect requirements. That's it in a nutshell.

If we then - a few months or a few years into the project - discover that a requirement was wrong, or poorly understood, or some business analyst had dropped the ball and never asked the right questions, please don't make out like a business need changed. It didn't - it was always there.

Again, in 20+ years of working in IT, not once - and this is intentionally a strong statement - have I ever finished a project, and been able to point to a significant requirement that would not have been known at the beginning...provided that requirements analysis had been properly done after professional business analysis. A tardy answer that is due to a tardy question is not a changing business need, it's a business need that you failed to discover at the appropriate time.

Most businesses have processes that change slowly, and many don't change at all. As much as we IT professionals might wish otherwise, a typical business IT project consists of re-implementing existing, established processes into newer (but not constantly changing) technology. I've worked on business applications where the core business processes and workflows are decades, sometimes half a century or more, old. The business needs often do not change at all.

Let's be honest with ourselves. The reason waterfall so often fails, and the reason we invented agile, is because we suck at business and requirements analysis. Agile methodologies exist because neither we nor the customers are willing to do hard, upfront work, and because all of us are pretty poor at communication.

Blaming rapidly changing hardware and software, or rapidly changing business needs, is a serious cop-out. Next time you hear these arguments, take them apart. Ask some hard questions. Look at your own experiences. Don't get dazzled by the pretty but imperfect physical analogies.

Sunday, June 19, 2011

Do You Get Encapsulation?

Nobody else is watching you read this blog, so for the next 15 or 30 minutes, do not Google, and do not reach for any CS books you've got handy. Use your memory and common sense, and let's see how much you really know about encapsulation.

First off, can you provide a decent explanation of the term? If you (somewhat dubiously) assert that it describes the bundling of data that describes the state of an object, along with methods (behaviour) that operate on that data, you get a grade of C. You are not wrong, but you have not completed the definition.

Let's feel our way towards a better definition of encapsulation. If I told you that accessors (setters and getters) sometimes break encapsulation, but not always, does that help? If I told you that default constructors are possibly a sign that encapsulation is broken, does that throw the switch?

Let's think about object state. We want the state of an object to always be valid and consistent. This means that a new object must start out that way (constructors), and stay that way over all operations (methods) done upon it. We have got a mechanism for ensuring that this is so, and you have likely now guessed what we call that mechanism.

Encapsulation is how we maintain valid and consistent object state.

Understanding this definition explains why a default constructor often breaks encapsulation. For starters, it may simply be available because the coder didn't bother to make it unavailable, or, having left it in, didn't have it set member values that satisfy the class invariants. A default constructor may sometimes be perfectly OK (e.g. there are no data members), but often it is not.

Understanding the definition of encapsulation also explains why sometimes setters and getters break encapsulation, and why sometimes they don't. There have been many blogs written where a pundit claims that any accessor breaks encapsulation. Well, no, that is incorrect. These same pundits opine that public accessors to a member are tantamount to making the member public. Well, no - that is also incorrect.
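A contrived illustration (the class is mine, not from any real codebase): both accessors below are public, yet no caller can put the object into an invalid state, so encapsulation survives intact:

```java
// Invariant: a temperature in kelvin is never negative.
public class Temperature {

    private double kelvin;

    public Temperature(double kelvin) {
        setKelvin(kelvin);  // the constructor enforces the invariant too
    }

    // A getter that does not break encapsulation: it hands out a copy of a
    // value, not a handle with which the internal state can be corrupted.
    public double getKelvin() {
        return kelvin;
    }

    // A setter that preserves the invariant rather than blindly assigning.
    public void setKelvin(double kelvin) {
        if (kelvin < 0) {
            throw new IllegalArgumentException("kelvin must be non-negative: " + kelvin);
        }
        this.kelvin = kelvin;
    }
}
```

Swap the setter's body for a bare assignment, or hand out a mutable internal collection from a getter, and then - and only then - the accessors are tantamount to public members.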

I should point out that I am using a restrictive definition of encapsulation. Many, if not most, definitions of the term bundle it together with the definition of information hiding. The two are frequently used interchangeably. Information hiding is usually defined as the separation of the contractual interface and its corresponding implementation. The purpose of information hiding is to permit implementation changes without affecting calling code. Because encapsulation is a usual technique for information hiding, the concepts get blurred.

One should also be aware, when reading about information hiding and encapsulation, that different folks have different definitions. For example, some folks take my definition of information hiding above, and refer to it as encapsulation. I do not like this confusion much, and will summarize my preferred definitions with the following observations:

Encapsulation: data, and the operations on that data, belong in the same class. It stands to reason that a sane design has determined what data belongs together, otherwise all bets are off. Determine the class invariants, and ensure that no use of constructors or methods can violate those invariants.

Information Hiding: deny direct access to data members; use accessors. Hide internal structure of a class, and do not expose implementation details. Expose an interface that represents a conceptual abstraction, and keep the implementation private.

Encapsulation and Information Hiding: a design consideration that is relevant to both concepts - when defining the operations on class data, decide what actions objects of that class must perform, and what information they are required to share.

Encapsulation makes information hiding easier to do. We want both, but they are different concepts. I have found that it makes my life easier when I consider the two ideas separately, and don't lump them in under the one or the other term. YMMV.

Saturday, June 4, 2011

Procedural Objects

Consider one of the classic problems of object-oriented languages. Or to be precise, class-oriented languages, like Java, C++, Objective-C, C#, Smalltalk and Eiffel. Namely, you've got a use case. A use case is an algorithm: it says, with these objects we shall do certain things. You might have two or five or ten different classes, and dozens or hundreds of actual object instances, participating in this particular use case.

What object, or objects, describe the use case?

Before we continue, let's address some terminology problems. People make reference to business objects, domain objects, entities, and value objects, among other types. Unfortunately these terms mean different things to different people. This is an observable fact, it won't go away, and so we may as well assume that we cannot use these terms and still maintain clarity of discussion.

So I'll simply refer to objects (or classes, or instances of classes, or prototypes).

Object-oriented languages - class-oriented languages specifically - early on arrived at a picture of a system where use cases were split up amongst the methods belonging to classes. This is still a primary hallmark of object-oriented design. Furthermore, it is assumed that if the partitioning is done correctly, overall system behaviour will somehow emerge from all the object interactions.

This thinking becomes problematic quickly: a use case may involve dozens of methods in dozens of classes. This is difficult to reason about, both for original implementation and also for maintenance.

Point being, there is a lot of logic in an application that is procedural. It describes use cases, or algorithms. In the sense of classic objects - objects that have state, and whose behaviour is meant to operate only on that state - there is a great deal of logic that doesn't belong to classic objects.

A simple use case that illustrates this problem is a conventional document approval workflow. There is a Document object - its state might include a name (title), contents (a reference), security information, versioning, and its lifecycle status. There are also Actors - the initial author(s), one or more reviewers, one or more approvers, and recipients. We'll keep it simple and assume that the document is not a record. We'll also assume that external processes decide which people belong to which groups, for any given document.

A flowchart for a typical document approval workflow is moderately dense; you could diagram it on a standard letter-sized piece of paper and be able to read it at arm's length, but the diagram would certainly be busy.

So what methods in what classes are responsible for all this logic? Document? Absolutely not - the workflow will vary enormously by organization and timeframe and project, to mention just a few factors. Actors? No, because actors are external to the system by definition. Let's say that there were actor "stub" objects - the actions that these stubs would take are extremely simple. For example, if an actor stub existed for an Approver, there are aspects of a single approval task, like enforcing non-repudiation, or dealing with an unmet deadline, that are not its job. But neither are they responsibilities of the Document instance.

So in a typical C++ or Java or C# application - for that matter Python or Ruby - we end up doing a number of undesirable things:

  1. we shoehorn a bunch of state and behaviour into Document that doesn't belong there;
  2. we jam a bunch of state and behaviour into other classes in the application; any class that has the slightest relationship with document approval becomes a possible candidate for housing some of the logic;
  3. we create one or more procedure objects to handle the use case, but hold our noses while doing it.

The first two activities happen in 99 percent of OO apps. There are also lots of ways of (dubiously) justifying all this. One design smell is when people start talking about rich domain classes; this usually means that they are justifying, often unconsciously, the placement of logic in the wrong spots. In other words, they are breaking up use case logic and don't know where to put it.

There is actually nothing wrong with the first part of #3. It's the second part that is the problem - the fact that we have been conditioned to think that it is bad to have great chunks of procedural, imperative code in our OO program. To add insult to injury, methods that are too "imperative looking" are often derisively and incorrectly referred to as God methods, or the objects that contain them as God objects. The problem in fact is having dozens of small methods in inappropriate classes, accompanied by excessive coupling.

OO design patterns largely skirt this issue. They serve a different purpose. If anything, logic which is badly fragmented due to mistaken beliefs about OO design simply makes it more difficult to properly identify opportunities for application of patterns.

Some readers probably know where I am going with this: Data, Context and Interaction, or DCI. To use Domain Driven Design (DDD) language, the Data in DCI consists of Entities: objects that have state, and behaviour pertaining to that state, but no use-case behaviour.

A Context is an object with one or more methods, all of which relate to a use case (or several related use cases). The Context knows about Roles, and its methods both bind objects to roles and enact the use case logic.

Finally, the Interactions are what the Roles do. As dictated by the Context, data objects assume Roles for varying periods to execute use case logic.

From a design perspective, with DCI, it now becomes much easier to reason about what the code is doing for a given use case. The business logic is obviously still not in one spot, lexically, but all the code that comprises a use case is much easier to locate.
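The shape of such a Context can be sketched in Java (all class and role names here are hypothetical, a bare-bones illustration of the Data/Context/Interaction split rather than a full workflow):

```java
// Data: a plain entity with state and state-related behaviour only.
// No approval logic lives here.
class Document {
    String title;
    String status = "DRAFT";

    Document(String title) {
        this.title = title;
    }
}

// Role: the behaviour an object plays for the duration of the use case.
interface Approver {
    void approve(Document doc);
}

// Context: binds objects to roles and enacts the use case logic.
// All of the approval use case is located here, in one place.
class ApprovalContext {
    private final Document doc;
    private final Approver approver;

    ApprovalContext(Document doc, Approver approver) {
        this.doc = doc;
        this.approver = approver;
    }

    void execute() {
        // Use-case rule: only drafts may enter approval.
        if (!"DRAFT".equals(doc.status)) {
            throw new IllegalStateException("only drafts can be approved");
        }
        approver.approve(doc);
    }
}
```

The point of the sketch is locality: a maintainer looking for the approval logic reads the Context, not a scattering of methods across Document and its neighbours.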

In subsequent posts we will examine implementations of a simple document approval workflow, using DCI, in C++, Scala and F#.

Wednesday, March 30, 2011

Your Friend and Mine - Mister Ted Dziuba

Just kidding. I only ever heard of the man a few days ago through CodeProject Daily News, so whatever Web 2.0 milieu this guy swims in, I'm not normally aware of it. Having read his one rant about how bad Mac OS X is, I was strangely fascinated, and read some more of Ted's blog posts. This force-feeding of Dziuba included diatribes about NoSQL, a somewhat incoherent critique of people who talk about scalability (food for thought: if you badmouth people who talk about scalability, are you not yourself talking about scalability?), and a potpourri of other posts from his blog archives.

Now, I don't mean to disrespect Ted. I don't know the man, and if I'm lucky I never will. But he is a poster child for a lot of things that are badly wrong with software development, and I mean to point those out. It may be illustrative. One thing I really don't care about is his profanity. For all I know it's shock value, and if it works it works - I doubt he swears at people like that face to face; he'd have fewer teeth and be in a wheelchair.

Point number one...and this is fairly significant...unless I miss my guess the fellow is on the youngish side of 30 years of age. Based on when he graduated school, maybe mid-20's. Gentle readers, that means that Master Ted is inexperienced. No ifs and buts. Furthermore, of the five different jobs he's had in as many years, he was briefly a webhead at Google (wonder why he left?), participated in the failed (and apparently not failed for the right reasons) experiment Persai/Pressflip, had the brass balls to think he knew enough about anything as a youngster to write tech columns (did he have the Register's Youth Perspectives editorial or what?), got in with another startup (Milo) so he could do the same thing that he blasts other people for (develop Web 2.0 basic CRUD-tech ad magnets), and is now a senior dude with the company that bought this startup (eBay), presumably so they have him on hand to fix the code.

Some of his blog posts crack me up. The implied seasoning of experience in dozens of turns of phrase, the purported vast knowledge of technology (after all, if you're going to comment on something then the reader supposes you've learned through trial and error), the venture into Joel Spolsky territory when he talks about how to interview, and the wise world-weariness evidenced by I Don't Code In My Free Time. Ted apparently has so much experience under his belt that he's already compiled a list of programming things he wishes he'd known earlier (like when? In high school?), he knows exactly why "engineers" (dude, you're not an engineer and neither are 95 percent of the script kiddies out there) hop jobs, and he presumes to know how to spot valuable engineers.

One wonderful quote from the esteemed Dziuba: "Any developer who has been around enough to accumulate valuable experience will have his personal collection of stories that have made him rage. I have been burned by bugs in programming language implementations, bugs I call 'coding slurs'. I have gotten the shaft more times than I can count from pathological character set issues that make me want to run for Congress on the platform of requiring licenses before people are allowed to use computers."

Character set issues that burned him more times than he can count? Doing what? Writing a small handful of twinkie LAMP web apps? "Accumulate valuable experience?" Give it a few more years, Ted...then we can talk.

It's not about Ted Dziuba. He himself is irrelevant, except insofar as he's a useful symbol of a lot of what's wrong with the software development profession. Which, to put it bluntly, is that there's way too many superficial Web x.y types out there that are not professional. In a sane world - especially one starting from the premise that the script kiddies insist on being called engineers - this guy would have maybe just transitioned from being an apprentice to being a journeyman, and perhaps a decade or so down the road he'd have his first shot at becoming a master software engineer. A decade or more after that his peers might even allow that he's sort of senior.

Ted, my man, thank you. I mean that. You're a mouthpiece for all the problem children in the industry, and the first step in cleaning up the mess is awareness of the problem. You're helping...albeit maybe not like you expect.

Oh, and Ted? In Java subList Gotcha, you mention "Well it turns out that subList didn't do what I thought it did. I assumed that I just got a new List that contained the elements in the given range of the original." And "read the documentation." Well, yes, Ted, you do read the documentation when using an API. Leastways professionals do. But if you got bit by something this basic, with a fundamental Java interface, then I guess that points to a pattern of either novice or sloppy behaviour...or both. Given that you've declaimed that there are way more important things to do than professional development outside work hours, it looks like both.
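For anyone who, unlike Ted, does read the documentation: `List.subList` returns a view backed by the original list, not a copy. Writes through the view are visible in the backing list, and structural changes through the view alter the backing list too. A short demonstration:

```java
import java.util.ArrayList;
import java.util.List;

class SubListDemo {
    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>(List.of(1, 2, 3, 4, 5));

        // A VIEW of indices 1..3 of nums, i.e. [2, 3, 4] - not a new list.
        List<Integer> slice = nums.subList(1, 4);

        // Writing through the view writes through to the backing list.
        slice.set(0, 99);
        System.out.println(nums.get(1)); // prints 99

        // Clearing the view removes that whole range from the backing list.
        slice.clear();
        System.out.println(nums); // prints [1, 5]
    }
}
```

If an independent copy is wanted, the idiom is `new ArrayList<>(list.subList(from, to))`.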

Saturday, March 5, 2011

Collaboration Is Missing The Point

MIT Technology Review has started a new monthly topic in Business, entitled Collaboration Tools. They promise to examine why some tools work, and why others don't. They will also be looking at when and how collaboration is valuable.

Think about that last statement. I didn't see it phrased quite that way on Technology Review, but at least a few articles urge caution in the deployment and use of collaboration tools. So that's what they mean: sometimes collaboration can be overrated. And that means it's not always necessary, and sometimes it's counterproductive.

A lot of technology companies and IT users are still mesmerized by the Kollaboration Kool-Aid. After all, how can it be a bad thing if as many people as possible are in touch, and always in touch? How can it be a bad thing if people are sharing knowledge 24/7?

Well, it can be a bad thing, and in my opinion often is. An IEEE Spectrum article entitled "Metcalfe's Law is Wrong" pretty much lays the realities on the line. As the article argues, a fundamental flaw behind Metcalfe's or Reed's laws is the assignment of equal value to all connections. The authors argue that Zipf's law makes more sense. And intuitively it does - in my work environment, for any given problem I am working on, I can rank the value of collaborating with various people. To a greater or lesser degree only a few connections have significant value.
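The contrast is easy to make concrete. Metcalfe's law values a network of n members at roughly the number of pairwise connections, on the order of n². A Zipf-weighted valuation instead says each member's k-th most valuable connection is worth about 1/k, which sums to roughly n·ln(n). A small sketch of the two models:

```java
// Sketch comparing network-value growth under the two models.
class NetworkValue {
    // Metcalfe: all n*(n-1)/2 pairwise connections count equally.
    static double metcalfe(int n) {
        return n * (n - 1) / 2.0;
    }

    // Zipf-weighted: for each member, the k-th ranked connection is
    // worth 1/k, so total value grows like n * ln(n) - far slower.
    static double zipf(int n) {
        double perMember = 0.0;
        for (int k = 1; k < n; k++) {
            perMember += 1.0 / k;
        }
        return n * perMember;
    }
}
```

Growing a network tenfold multiplies Metcalfe value by roughly a hundred, but Zipf value by well under twenty - which matches the intuition that piling more people into a collaboration channel adds mostly low-value connections.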

Let's go one further. It stands to reason that some connections actually have negative value - opening up a social, "collaborative" channel with a certain person may actually hamper my work. This may also depend very much on the nature of the work: everyone has had tasks from time to time where, quite frankly, everyone else is a nuisance and a hindrance to getting the job done.

This happens more often than the vendors of collaboration tools are willing to admit. To hear their user stories everyone benefits by intimately working together all the time. Not true: it's more likely the case that valuable collaboration is the exception, not the rule.

What knowledge workers really need is effective knowledge management (KM). The typical state of KM in enterprises is abysmal, and that's perhaps a subject for another blog. What I'd like to be able to do (and usually have to resort to Google for, in place of something truly effective) is to search for information inside the organization, and not have to disrupt other peoples' time to do it. Collaboration is disruptive, good KM is not.

Unrestricted, uncontrolled collaboration cannot work. If jamming a dozen people into a room without leadership, and letting them jabber at each other to try to figure out how best to solve a problem, had ever worked, then organizations would be doing that all the time. Oh wait...those are called meetings. Sorry.

Seriously, carefully controlled small doses of collaboration are fine. But here's a thought - we've had an excellent IT tool for that, for decades: it's called email. Switch off the email notification pop-ups, and simply ask that people on a team check their Inbox at least once an hour. Do you really think you need more than that?