Thursday, September 29, 2011

CDI and Enterprise Archives

I would be the first one to admit that I do not yet have total mastery of Contexts and Dependency Injection (CDI) in Java EE 6. I understand it fairly well, I believe. But there has not yet been a chance for me to use it on a real job, since no clients we deal with have moved up to Java EE 6. So far it has just been experimental projects on Glassfish 3.1 or JBoss AS 6/7.

Having said that, I was taken aback when, in an EAR project, I tried to inject (with @Inject) a session bean from an EJB JAR into a JSF managed bean. The managed bean used @Named, not @ManagedBean, and a CDI scope rather than a JSF one.
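To make the scenario concrete, here is roughly the shape of the code involved. The class names are hypothetical (GreetingService, GreetingBean) and the real beans were more involved, but the pattern is the same: a stateless session bean in the EJB JAR, and a CDI-scoped, @Named JSF managed bean in the WAR of the same EAR.

// In the EJB JAR module of the EAR:
import javax.ejb.Stateless;

@Stateless
public class GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// In the WAR module of the same EAR - a CDI-style JSF managed bean:
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@RequestScoped
public class GreetingBean {

    @Inject
    private GreetingService greetingService; // this is the injection that failed

    private String message;

    public String sayHello() {               // the JSF action that triggered the error
        message = greetingService.greet("world");
        return null;                         // stay on the current view
    }

    public String getMessage() {
        return message;
    }
}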

When I tried to invoke a JSF action on the managed bean, I kept getting the standard NPE you see when there is no beans.xml - the wording was identical - indicating that the JSF managed bean could not be located. Now, I certainly had a beans.xml - several, in fact - and some experimentation with annotations confirmed that CDI was in effect.

So why was it that CDI could not locate the JSF managed bean?

Turns out that it had everything to do with trying to inject an EJB from the EJB JAR into the JSF managed bean in the WAR. In all of my early experiments I had used one monolithic WAR that included the EJBs, so I never had this problem. And for the life of me I could not conceive of a situation where you could not inject EJBs from an EJB JAR in the same EAR as the WAR.

Well, evidently you cannot. At least not in Glassfish.

I finally had to use the approach described here: inject-a-stateless-ejb-with-inject-into-cdi-weld-managedbean-jsf-1-2-ejb-application

I should note that while this link references JSF 1.2, I also had to do this for JSF 2.0. I should also note that the server exception was very misleading - it was the @Inject that was failing, not the @Named on the JSF managed bean.
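In rough terms, the workaround boils down to adding a producer in the WAR that obtains the EJB with @EJB and re-exposes it to CDI. Something like the sketch below - the class name EjbBridge is mine, and GreetingService is the same hypothetical bean as above:

// In the WAR module: a producer field that bridges EJB lookup and CDI injection.
import javax.ejb.EJB;
import javax.enterprise.inject.Produces;

public class EjbBridge {

    @Produces
    @EJB
    private GreetingService greetingService; // after this, @Inject GreetingService works in the WAR
}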

Based on various problem reports:

WAS 8.0: FAILURES INJECTING EJBS WITH @INJECT AND BEANS FROM THE EAR's LIB DIRECTORY.
JBoss AS 7: CDI/EJB Injection in EAR Deployments
CDI Broken between EJB Module and JPA Utility JAR

it sure looks to me like none of this has been resolved. It could be that the specifications are clear on this, and I plan to do some intensive reading. But what good is that if the implementations are not getting it?

This article, JBoss 6 and Cluster-wide EJB Injection, sort of leaves the impression that we should not be having this kind of problem. Furthermore, the discussion in Chapter 8 of the Seam documentation, Producer Methods, leads me to believe that @Produces ought not to be required, since we already have an EJB. Needing a producer just to make an EJB from an EJB JAR available for injection into a WAR - in the same EAR - seems pretty clunky. I just wonder why I should not use @EJB instead.
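For comparison, the plain @EJB route I keep wondering about would look like this in the managed bean (again using the hypothetical GreetingService) - no producer needed, although the reference is then resolved by the EJB container rather than by CDI:

import javax.ejb.EJB;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
public class GreetingBean {

    @EJB                                      // container-resolved reference, bypassing CDI
    private GreetingService greetingService;

    // ... same action methods as before ...
}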

Anyway, back to the books, I guess. There are still decent reasons why one might want an EAR with separate EJB JARs, WARs, JPA utility JARs, and whatnot. And I refuse to believe that the specs mandate an awkward workaround for injecting EJBs that reside in a different module than the one containing the injection point.

Saturday, September 17, 2011

Professional Development Redux

It is worth reading Why I Go Home: A Developer Dad’s Manifesto. There is a lot of truth to the observations that developer hours can be unnecessarily long, or be haphazard and unpredictable. And any push towards moving the North American mindset away from "live to work" towards "work to live" has a lot of resonance with me.

I will say this though. I get the feeling that Adam Schepis is atypical. He says he loves his career, and he says he loves crafting great code. I believe him. He maintains a blog with lots of technical content. And - a key consideration for me - he wrote Drinking from a Firehose. In my opinion, anyone who cares enough about his software development career to be concerned about managing his professional reading, let alone what to read, does not have to defend his decision to try to keep his working hours sane, or to spend time with family. He is already doing what most career programmers do not do, which is pursuing professional development on his own time and his own dime.

So let us talk about the typical programmer. You know, the man or woman you have to work with. You yourself probably do not fit the bill, because you are reading this.

Let us proceed on the assumption that there is rarely any excuse for routine long working days, unless you are in your early twenties, working for a startup, and you have willingly chosen to spend your life that way. There is no excuse for long days even in the waning weeks and days of a project - that is just piss-poor team planning and execution if and when it happens. That it happens a lot in our profession simply indicates that a lot of us, myself included, are occasionally (or often) piss-poor planners and executors.

So let us assume that we have solved all of those problems and are not in fact working frequent long days. With all due respect to Adam, I suggest that if he is in such an environment, and he does not have the power to effect the necessary changes, he needs to find another place. Because otherwise his decision to keep sane hours will hurt him. It is a commendable goal, but it can only happen in an organized workplace. So find that organized workplace first.

In this M-F, 8-4 (or 7-3, or 9-5) environment, the question then becomes: what are your personal responsibilities for professional development?

I have worked in environments - government labour union environments - where some developers were able to safely refuse not only any personal professional development on their own time, but even personal professional development offered to them on taxpayer time. That is an aberration and perversion, and I will not discuss that further. What we are talking about is the typical situation, where your employer is paying you to be productive.

Most good employers, if possible, will pay for training and education. Most good employers will permit some of these activities to happen on paid time, particularly when the training and education involves employer-driven sharp changes in the technologies to be mastered. If a company directive comes down to re-orient on PostgreSQL rather than Oracle, or to make a big push into CMIS web services, it is not only unrealistic for the employer to expect all employees to ramp up on their own time, it is also not in the employer's best interests.

But what about foundation technologies? These are the "bread and butter" technologies: everything that a developer in a certain niche (domain sector, seniority, work position, etc.) would reasonably be expected to know no matter what. If someone mostly deals with enterprise Java applications, that set of foundation technologies includes Java SE and Java EE. If someone deals with enterprise .NET applications, it includes the .NET Framework, C#, ASP.NET MVC, WPF (maybe), WCF, and maybe WF (Workflow Foundation).

Is it reasonable to expect an employer to pay for developer training and education for moving to Java EE 6, when said same developer does nothing but Java EE 5 already? I have seen some programmers argue exactly this case, and while I believe that they are dead wrong, the argument does need to be refuted.

Before doing so, let us discuss informal on-the-job training (OJT). By informal I mean just that: as a developer you encounter something new during paid hours, and with no further ado you teach yourself about it. Not one of us knows all we need to know, so we continually encounter new things, like unfamiliar APIs. Some degree of informal OJT is expected, accepted, and even encouraged.

But at some point informal OJT can be abused. If there is too much of it, then it needs to be converted into formal OJT, or the developer should be learning on their own. The main reason too much informal OJT is problematic is that it skews and distorts project management: how can you ever hope to estimate effort, or trust your designs, when your developers evidently did not know all that much about the target technologies before they started?

As mentioned above, a good case for formal OJT is when the developers could not reasonably anticipate the need for the new technologies, but the employer requires the knowledge. After all, we cannot know everything all the time.

And what can a developer reasonably anticipate? This goes to the refutation I mentioned above, and it is not all that difficult to define. Basically, if a technology is foundational for your business sector then you can anticipate needing to know it. If a new hire is expected to know it, then you had best know it too, and not on the employer's dime either. Would it be acceptable for a job candidate seeking work at an enterprise Java shop in 2011 to say that they know nothing about Java EE 6, which has been out for almost two years (since December 2009)? No, it would not. So why is it OK for an established employee in the same shop to slide?

In fact it is not OK for the established employee to do that.

Ultimately it all boils down to common sense. Software development is a fast-moving profession where the majority of employers do try to meet us part-way on training and education issues. Note the part-way. This means that all of us - job candidates or established employees - have a responsibility to spend some of our own time keeping up. And it is not rocket science to figure out what you should be keeping up with on your own.

Please do not tell me that you have zero personal responsibilities in this regard. If you say that in a professional conversation, and you are my colleague or my employee, then at that point you are baggage in my eyes. I am sorry, but you are a self-declared liability.

There are some software developer jobs where it takes a lot of personal professional development to keep up. In fact it can occupy so much time that you cannot pursue some other activities at the same time. In extreme cases this may even include good parenting (although I have never seen a case where it had to be so; other factors always caused the real problem). Fact of life. If this is so, assess your priorities, like Adam has done. Make your choices, accept that there are consequences, and move on. If you have to change programming jobs, do it. If you have to change careers, do that. But please do not tell me, or anyone else, that you have no personal responsibility to self-educate and self-train at all. Please. If you genuinely believe that, you should go on welfare.

What are reasonable rules of thumb for own-time professional development? I am not talking about your under-25, caffeine-pounding, needs-4-hours-of-sleep, no-real-life coding phenom here; I am thinking about us regular people who have a passion for software but also have a passion for family, friends, riding a mountain bike, fishing, barbecuing, scuba, and playing golf. What is reasonable for us?

Here is my rule of thumb: fifty (50) hours per month. I actually exceed this by a lot, but I know why, and I have a reason for doing it. Even so, I do not skimp on my recreations and hobbies and relaxation; mainly it is that I am past parenting age. The fifty-hours-per-month rule is for all you 25-45 types, parents or no. Here is how I arrive at the figure: one hour per day for reading - blog articles, books, whatever - which works out to roughly thirty hours a month. The other twenty hours are for personal coding projects; this is where you experiment, and I happen to think this experimentation is essential.

This may seem like a lot of time, but it is very doable. We - all of us - can waste a lot of time each and every day. How much sleep do you need? Eight hours at the most, but usually seven will do. That leaves roughly seventeen waking hours a day, or about 520 hours per month. Fifty hours is less than 10 percent of that. You think you do not have 10 percent wastage in your time use? Please - think again.

We have the time, and we have the obligation. Enough said.

Friday, September 16, 2011

If You Think You Need A Better Web Framework...

...then you are identifying the wrong problem.

About two decades after my first exposure to programming (FORTRAN IV on punched cards) I started with the World Wide Web. I carefully crafted a simple HTML page with some <br /> and <i> and <h2> and <p> elements - you get the drift - and opened it as a file in NCSA Mosaic. I do not mind admitting that I was really chuffed. For the next few years after that I did not really program to the Web a whole bunch; when I did it was mostly C and Perl CGI.

Although PHP and ColdFusion emerged at about the same time, I did not use PHP in paid work until about 2006, and then only briefly. ColdFusion was actually my first reasonably high-powered web development language, and I will mention it again in a moment.

I started dabbling with Java servlets just about as soon as they became reasonably viable with the release of Servlet API 2.2 in 1999. Ever since then the portion of my work that has involved web applications has been about 75% Java EE, 20% ASP and ASP.NET, and 5% other (like Ruby on Rails and Scala Lift).

It is at this point that I will make a rather shit-disturbing claim, if you will pardon the language. Namely:

Decent programmers using Allaire ColdFusion (specifically the CFML markup) were as productive in writing useful web applications in the late 1990's as decent programmers are, using anything else, in the year 2011.

By decent I mean average or somewhat better than average, but not stellar.

I have a second assertion to make as well, but I will lead into it by referring you to David Pollak's comments about Scala: Scala use is less good than Java use for..., and Yes, Virginia, Scala is hard. I happen to totally and unreservedly agree with everything David says in these two articles. I will supplement his observations by saying that most web application programmers do not have the chops, time or passion to get the best out of any language, framework or platform. For example, I think Java EE 6 kicks ass, and I also believe that most enterprise Java programmers will never absorb enough of the necessary nuances, idioms and sheer facts about the various Java SE and EE APIs, and core Java as of JDK 1.6/1.7, to be particularly good in that environment.

In effect I think you can extend David's argument to almost anything out there. Writing good Java is hard, and writing good Java EE is harder. It is true that writing good Scala is even harder, but why worry about that when most coders out there cannot even write decent C# or Java or Ruby or Python?

Having said all that, here is my second claim:

The specific web framework you choose to use in language X is largely irrelevant. It is irrelevant for the majority of web application programmers because they are only average at best, mediocre or terrible at worst, and so cannot take advantage of the features that make one framework better than another. It is irrelevant for great programmers because they can make pretty much anything work well...and anyway there are bigger problems to solve in application creation.

I mean, let us be realistic here. In my entire web application writing career I do not remember a single project ever succeeding or failing because of the underlying technology. I really, really do not. I have seen classic ASP and CGI applications - truly ugly things - that reliably solved their respective problems, for example. And I have had lots and lots of exposure to web applications that failed even with the latest and greatest web frameworks and the best application servers. Do not get me wrong - I can think of more than a few projects that would have failed if lessons learned in prototyping, or in proof of technology (POT) work, or in early stage coding, had not been acted upon quickly and decisively; often enough some technologies were discarded and replaced. My point is that, given due diligence and proper research and preparation and project management, I cannot think of any project that failed because of the final, carefully chosen technology stack.

And carefully chosen frequently means nothing more than that it is reliable, your team is reasonably familiar with it, and there is good support for it. It does not have to mean that it is the best, not by a long shot. I still spend a fair chunk of time maintaining applications that selected neither the best language, nor the best libraries, nor the best frameworks, nor the best servers...but those choices were (and are) good enough. The applications themselves sink or swim based on sound software engineering.

The common theme here is that web applications - software applications period - succeed or fail based on tried and true software engineering principles. The various stakeholders talk to each other, people understand what the problem is, and people understand the solution. Let us not forget that all that web frameworks do is help us stitch together functionality - if the functional components are crap, or the designers and developers do not thoroughly understand the stitching (workflow), then it does not matter how great your web framework is.

Keep in mind that most software application teams do not do a great job at analysis or design. It may often be an OK job, but is not usually a great job. A very good framework cannot save mediocre analysis and design. Not only that, if the analysis and design is above average, then almost any framework will do. To reiterate:

The choice of web framework in language X is largely irrelevant.

Do not get me wrong - most of us have our pet frameworks in any given language we use. But that is all those choices are: pet choices.

The next time you interview someone to help with implementation of a web application, ask about MVC or CDI, and various types of controllers, and lifecycles and scopes, and application security, and various types of dependency injection. Please do not nitpick about Struts 2 versus JSF 2 versus Spring MVC versus Wicket versus...well, you get the idea. Seriously, who cares?

Mind you, if someone makes a big deal out of how expert they are at JSF 2, say, it cannot hurt to ask them some really detailed questions. Just to see if they are full of it. But do not waste a lot of time on this.

The web framework you use is relevant to maybe 10 percent of your implementation effort, at most. If it is more, you are skimping on something, or it is a toy application. So why is something fairly immaterial in the big scheme of things so often blown out of proportion? You get religious wars about Struts versus Spring versus JSF, and in the meantime half your developers do not know how to use JPA properly, do not understand concurrency at all, have never in their life written an actual servlet, their eyes glaze over when you mention coupling and cohesion, and many of them have never written an inner class in their lives. Even better, three quarters of your Java developers know only Java.

A final note: One of my first ColdFusion projects involved pumping HDML and WML out to cellphones, and HTML out to early PDAs (the first Palms and PocketPCs), with the application integrating with credit card payment. This was over a decade ago - nobody else was doing this at all. The application was reliable - totally rock-solid, maintainable, and performant. It was easy to extend and to modify. And I firmly believe that in the year 2011, with your pick of technology stack, that 8 or 9 out of 10 teams could still not do a better job in a comparable amount of time.

Thursday, September 8, 2011

Cars and Income

This is a non-IT post...mostly. I have been interested for a long time in how the cost of car ownership has, historically speaking, managed to consistently take a large share of earned income. In contrast with many other depreciable assets - consumer electronics like personal computers, cameras, sound systems and video systems, the huge majority of appliances, tools, to name just a few categories - the purchase and care of the personal automobile has somehow always managed to eat up roughly the same fraction of the median income.

One would almost think that the automobile industry has things planned out like this.

Before discussing reasons and factors, let us look at some rough numbers. I have simply pulled some data from the interesting website The People History. I will start with 1955 - it is pretty much the right timeframe for my Dad to have had his first car.

I have invented a ratio of average new-car cost to average yearly wages, expressed as a percentage, and I call it CaPoI = Car as Percentage of Income.

1955: average cost of new car $1,900, new house $11K, gallon of gas 23 cents. Average yearly wages $4,130, average monthly rent $87. CaPoI = 46%

1965: average cost of new car $2,650, new house $13,600, gallon of gas 31 cents. Average yearly wages $6,450, average monthly rent $118. CaPoI = 41%

1975: average cost of new car $4,250, new house $39,300, gallon of gas 44 cents. Average yearly wages $14,100, average monthly rent $200. CaPoI = 30%

1985: average cost of new car $9K, new house $90K, gallon of gas $1.09. Average yearly wages $22,100, average monthly rent $375. CaPoI = 40%

1995: average cost of new car $15,500, new house $113K, gallon of gas $1.09 (yes, still). Average yearly wages $35,900, average monthly rent $550. CaPoI = 43%

2000: average cost of new car $24,750, new house $134K, gallon of gas $1.26. Average yearly wages $40,340, average monthly rent $675. CaPoI = 60%

For very recent stats I will go to other sources. For a 2010 median US household income, $54K is about right. This is all households, any number of income earners. For the average cost of a new car right now, the National Automobile Dealers Association estimates this at $28,400 for 2010.

So the CaPoI is about 53% for 2010.

To be more realistic, the People History site may have used genuine individual average income, whereas my 2010 figure is household income. Because of the increasing prevalence of two-income households, household-based figures for 1985, 1995 and 2000 would probably bring those CaPoI values down somewhat. Nevertheless, the 2010 CaPoI is still 53 percent even on a household basis, so a counter-argument could be made.

Point being, with the exception of a (possibly spurious) ray of hope in 1975, the cost of a new car, relative to income, has not come down in over half a century.

The usual explanation for all this is simply that we have got much better cars. Better engines, better electronics, better frames and bodies, and so forth. That is all well and good, but manufacturers in other areas are accomplishing similar things, and their prices are plummeting. Also, one thing that car manufacturers have not done is improve the longevity of their cars; despite popular conceptions to the contrary, this statistic may not have gotten worse, but it absolutely has not improved. These days, according to reliable sources, the average age of the US car is about 8 years, and vehicles tend not to last past 13 years.

Manufacturers and dealers tout not only engine performance and efficiency improvements, but also safety features. Again, that is all well and good, but how much longer are they going to ride that particular gravy train? The numero uno safety feature is the good old seat belt, which has been around since just about the time our informal survey starts, 1955 or so. Airbags are another major safety value-add, but they have been around for a long time too. Good tires and proper inflation are significant for safety, but they do not directly affect the price of a new car. Past all that, it is arguable whether we need to make the average passenger car survivable like a NASCAR hotrod. Let the consumer spend their money on proper maintenance and the occasional defensive driving course instead.

Although there is not incontrovertible proof in these numbers, and one would rather cynically have to believe that the car makers and dealers care about profit, the evidence does strongly suggest that the industry has identified the consumer pain point, and is using it to the maximum. Cars are now essentials, not luxuries. They support how we live. And the industry knows damned well that they have a captive market.

Best counter to this: maintain your car. Baby it. Make it last 20 years. Or longer. And watch the industry cry. Although they can always pressure politicians to outlaw really old cars. But for now, try to make your car last 20 years. Please.