Almost every software project involves something new. Sometimes the newness is the programming language itself, but mostly I am thinking of APIs and libraries and frameworks and servers and applications. You've invested in a SOA offering like Oracle OSB, or an ECM system like IBM FileNet P8, or a web app framework like JSF, or an app framework like Spring, or some security implementation like Apache Shiro, or an app server like Glassfish, or bet the farm on writing your desktop apps with C# 4.x, or gone with NoSQL instead of SQL Server.
You made all those choices because you've done some research, you played with all of them on your own time, or you know people who use them.
But now you actually have to do X and Y and Z in a certain API furnished by a certain library that you chose, or configure security with that security implementation you picked along with that app server they picked, or customize certain features in the ECM system that everyone committed to (way too early), or build a certain integration with the ESB you picked out of a hat.
And you find out that Spring LDAP in conjunction with Apache CXF doesn't work as neatly as you thought for authorization. You find out that a lot of custom work is needed to implement those ECM features (not what the vendor docs suggested, is it?). You find out that everything is taking much longer with Oracle OSB than you thought: you are getting it, but your junior co-workers are struggling to make those few critical things work. You invest weeks in writing up most of your C# desktop app just to find out that it's going to be bloody difficult to code up a certain feature. You block on a certain set of methods in a library that turn out to be buggy - Defect #14156 in Jira.
Common theme: you assumed that things would work, or that they would be relatively easy to understand from documentation, or that since the library or application clearly supported feature X that there was no way it would not also do feature Y.
Bad mistake.
You usually commit to major software quite early in a project. Once committed it is very difficult to justify backing out of a choice. And quite frankly, you can almost always make something work eventually. The problem arises in not accounting for the unknowns.
Rule of thumb: if you have to do something with a library or framework or server or application, and you have never tried it before, estimate the initial potential effort at one-half work-week - 20 hours. This does not mean the finished implementation: this means the Proof of Technology (PoT).
This may seem extravagant. This translates roughly into 10 untried capabilities per person-month. I do not mean major capabilities either: it is not going to take you just 20 hours to figure out how to implement SAML for the first time. No, this twenty hours apiece is for unit tasks at a fine granularity...like already knowing Spring LDAP, and already being conversant with Apache CXF, and even knowing how to combine the two to implement LDAP authentication for a web service, but not knowing yet how to do authorization. If you have never implemented authorization with that combination, estimate the initial PoT at 20 hours.
Any given new fine-grained PoT task may take more than 20 hours - it might burn a week. That's balanced out by those that take 4 hours.
Any experienced developer has spent a week or two - a solid week or two - on one little thing that was supposed to work. But it had never been tried before. Who knew? Come to Google time, evidently nobody else had ever made it work either. Or even better, you find a project committer explaining shamefacedly in an obscure thread that yes, while the feature is documented, it doesn't work correctly...yet.
How to explain this extra time to the manager or to the customer? Well, here's the thing. Ideally this estimated PoT time is not more than 25% of your total implementation effort. If it is, then the entire nature of your project is different. You've now got an experiment. There is nothing wrong with this, but you need to communicate that information to your manager or customer as soon as possible.
How to estimate? How to locate these PoT points? This should not be that tough. Once you've got your detailed design, and you know what software you're using, work through the combinations and list off each fine-grained task. Make it as fine-grained as you can. Ask yourself the question: have I, or any team members, done exactly this before? If no, it's a PoT.
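To make the bookkeeping concrete, here is a minimal sketch of that pass in Java. The task names and hour figures are hypothetical; the two constants come straight from the rule of thumb above (20 hours per untried combination, and the 25% experiment threshold).

import java.util.LinkedHashMap;
import java.util.Map;

public class PoTEstimator {

    private static final double POT_HOURS = 20.0;            // rule of thumb: 20 h per untried combination
    private static final double EXPERIMENT_THRESHOLD = 0.25; // above this, the project is really an experiment

    public static void main(String[] args) {
        // Each fine-grained task, mapped to "has anyone on the team done exactly this before?"
        Map<String, Boolean> triedBefore = new LinkedHashMap<>();
        triedBefore.put("LDAP authentication with Spring LDAP + Apache CXF", true);
        triedBefore.put("LDAP authorization with Spring LDAP + Apache CXF", false);
        triedBefore.put("Custom metadata feature in the ECM system", false);

        long untried = triedBefore.values().stream().filter(done -> !done).count();
        double potHours = untried * POT_HOURS;

        double knownWorkHours = 300;   // hypothetical estimate for the work you have done before
        double totalHours = potHours + knownWorkHours;

        System.out.printf("PoT effort: %.0f h of %.0f h total (%.0f%%)%n",
                potHours, totalHours, 100 * potHours / totalHours);
        if (potHours / totalHours > EXPERIMENT_THRESHOLD) {
            System.out.println("More than a quarter of the effort is proof-of-technology: this is an experiment - say so now.");
        }
    }
}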
I hope this helps.
Tuesday, May 22, 2012
Sunday, March 18, 2012
More Notes on DCI
I have been really interested in Data Context Interaction (DCI) for some years now. Before I go any further, I will direct readers to a comment by Bill Venners in a discussion from 2009. Here are some key quotes from that:
"DCI ... challenges a basic assumption inherent in the notion of an anemic domain model. The assumption is that an object has a class."
and
"DCI challenges the assumption that each object has one and only one class. The observation is that objects need different behavior in different contexts."
This is really important, and it highlights a distinction that we need to draw between objects and class instances. We will talk about that more later.
If some of the above terminology is unfamiliar, Fowler has a decent explanation of anemic domain models. I do not completely agree with some of the discussion (including by extension some of what Eric Evans, who is quoted by Fowler, has to say) but the explanation of anemic domain models is good.
The idea that class-oriented OO has not been serving us well is not new. The DCI folks (Reenskaug, Coplien etc) certainly think so. What is class-oriented or class-based OO exactly?
For most programmers this question is most easily answered by explaining that there are other styles of OO programming that have no classes at all. I refer you to Wikipedia's article on prototype-based programming. These days it is a safe bet that many programmers have used at least one language that allows prototype-based OO: JavaScript. The fact that JavaScript tends to be used as small snippets in web pages obscures the more general utility of this capability in the language itself. After all, JavaScript is a full-fledged language; it is not just for browsers.
Most programmers use OO languages that are fully class-based: Java, C#, C++, among others. We get objects by creating new instances of classes. All objects that are instantiated from the same class "template" obey the same contract for behaviour, and have the same invariants. Only the instance state can vary.
Where the problem arises is in determining where to place behaviour. Some behaviour clearly belongs in one domain class, and no other. It is reasonable, for example, that a Shape object can report its area and perimeter and other geometric facts. It is reasonable that a Matrix can calculate its own determinant or eigenvalues.
But what about business logic that involves several classes? Fowler and Evans tell us that the domain layer contains business rules, and that the domain objects represent business state. Their picture of the application or service layer makes it clear that "business logic that involves several classes" belongs in the domain layer. OK, in which case which class do you put it in? Does class A get the method, accepting parameters of type class B and class C? Or does class B get a method that accepts parameters of type class A and class C? You get the idea.
No, what happens almost inexorably is that programmers write a method that is supplied with parameters of type class A, class B and class C, and that method ends up in a "service" class (often a session bean in J2EE/Java EE). And a method of that sort is usually quite procedural.
If you like you could consider these objects as being in the domain layer. They are unlikely to have state per se, but if they do you could call it business state. But nevertheless these classes are merely libraries of procedural code; the logic simply happens to act on objects, but it is still procedural code. The end result is still what Fowler and Evans are warning against.
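To make the pattern concrete, here is a hedged sketch of the kind of "service" method being described. The domain classes (Customer, Order, PricingPolicy, Invoice) are hypothetical; the point is simply that the use case logic ends up as procedural code parked in a stateless bean.

import java.math.BigDecimal;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Customer, Order, PricingPolicy and Invoice are assumed domain classes.
@Stateless
public class BillingService {

    @PersistenceContext
    private EntityManager em;

    public Invoice billCustomer(Customer customer, Order order, PricingPolicy policy) {
        BigDecimal amount = policy.priceFor(order);       // hypothetical domain method
        Invoice invoice = new Invoice(customer, amount);  // hypothetical constructor
        em.persist(invoice);
        return invoice;                                   // plain procedural logic, start to finish
    }
}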
But in the absence of something like DCI, writing this procedural code is probably the most acceptable thing for a real-world programmer to do. In fact, if done well it actually captures use cases much better than trying to assign all business logic into domain class methods.
For me, DCI is a way of codifying this existing practice, and rather than demonizing it, giving it formalism and structure. For largely the wrong reasons - ignorance if nothing else - OO programmers naturally arrive at the notion of barely smart data (domain classes with minimal behaviour), and things that look like contexts (procedural classes that embody use case logic). Exactly the kind of thing that Fowler warns against.
What has been lacking is system. The ad hoc procedural code that programmers stumble into is rarely cohesive or well-organized or purposeful. It is not easy to find code in one location that applies to one use case. DCI sets this on a firmer footing.
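For contrast, here is a rough sketch of what a DCI-style Context can look like in plain Java, reusing the same hypothetical domain types. It is only an approximation - Java gives us no clean way to attach roles to objects at runtime, which is part of why DCI is awkward in the language - but it shows the shape: one Context per use case, with the interaction logic readable in one place.

import java.math.BigDecimal;

// Roles: the behaviour each participant must supply for this one use case.
// The hypothetical Customer and Order domain classes would implement these.
interface BillableCustomer {
    void accept(Invoice invoice);
}

interface PricedOrder {
    BigDecimal total(PricingPolicy policy);
}

// Context: one use case, in one place.
public class BillCustomerContext {
    private final BillableCustomer customer;
    private final PricedOrder order;
    private final PricingPolicy policy;

    public BillCustomerContext(BillableCustomer customer, PricedOrder order, PricingPolicy policy) {
        this.customer = customer;
        this.order = order;
        this.policy = policy;
    }

    // Interaction: the use case algorithm, readable end to end.
    public Invoice execute() {
        Invoice invoice = new Invoice(order.total(policy));  // hypothetical constructor
        customer.accept(invoice);
        return invoice;
    }
}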
I think it will be interesting to see how this all falls out. There really has not been much uptake for DCI, and the ideas have been around for about three years. Part of the problem is that DCI is quite difficult to do in Java, and is cumbersome and inelegant in C# and C++. It is straightforward in Scala, but few people use Scala. Ruby is a natural fit, and a lot of the applications that Ruby is used for are conducive to DCI, but the adoption rate has been slow. Other dynamic languages also support DCI without too much fuss, like Perl or JavaScript, but the adoption rate is also slow.
One problem - and I am not being facetious - is that the canonical example advanced by the DCI camp, the accounts-and-money-transfer example, is atrocious. It is strained and awkward and Godawful. No wonder a lot of programmers have been turned off DCI. Unfortunately it is still by far the #1 example to be found on the Web. I hope this changes.
I hope that, if this is the first mention of Data Context Interaction you have run across, it interests you in further reading. I think it is a valuable technique to study. Think through your past OO experiences with rich domain objects, and your problems in placing behaviour that really seems to be algorithmic rather than belonging to an object, and see if DCI cannot help you further along the road to solid, reliable, readable code.
Tuesday, January 31, 2012
The Cloud
I have entitled this post "The Cloud", but it is really not just about the cloud. It is more about how we seem to spend so much of our time inventing new ways to design, create and field substandard applications. People who actually have to write real applications for real businesses - or, more often, maintain or interoperate with applications that other people wrote - realize that in 2012 we are actually no further ahead in writing quality software that does what the customer wants than we were decades ago, before the Internet even existed.
What is the point in enthusing about flavours of cloud, like IaaS or PaaS or SaaS, or pushing REST over SOAP, or using the latest and greatest SOA technology to redo attempts at EAI or B2B solutions, or jumping on the latest language or framework, when the core business logic - the code that actually really does something useful - is still inferior in quality?
It is not sexy or exciting to work on actual business logic. Create a project to work on NoSQL, or Big Data business analytics, and you have all sorts of people who want to work on the thing. If you throw in the latest languages and frameworks you will have no problems getting people to work on the application either...the technical trappings that is. But work on the actual business logic? Requirements, design, test plans, coding up the boilerplate, doing the hard work on the domain objects, designing the RDBMS schema? God forbid.
The IT industry needs people who spend years - many years - amassing expertise with dry-as-dust applications. The kinds of technologies that are niche and only meaningful to a handful of people. The industry needs software developers who spend so many years in an application domain that they know more about it than the majority of business practitioners. The industry needs software architects, analysts, designers, coders and testers who can go through exhausting but necessary rituals over and over again, for each new business problem, and deliver quality applications.
The key here is applications. The latest Big Buzz always seems to be about what languages will we write the applications in, what framework will we use, how will we deploy it, how will applications talk to each other...but much less of the conversation is about the actual applications. Sure, there is also always a buzz around the latest methodologies - iterative, agile, lean, unit testing, etc etc - but it is hard not to get the feeling that the application successes are because of high-quality teams that could have done a good job using waterfall if they had to, and not because of the methodologies.
Do not get me wrong. I love the new languages that appear every year. I love the attempts in existing languages to improve and stay relevant. Most of the new development methodologies have a lot to offer. There is much good stuff in SOA and the cloud technologies and NoSQL. The social space is important, and mobile is important. I could easily add to this list.
But what is still not receiving its due share of necessary attention is the hard work of writing solid business logic. And testing it. And designing the security for it. And maintaining it. And gathering the requirements for it. And writing good documentation for it. And spending hours upon hours of one's own time on professional development in learning, yes, dry-as-dust non-sexy non-buzzword technologies.
The fact is that all the hyped technologies are relevant to well under ten percent of all software developers. Ten percent is probably a very optimistic estimate; I would be surprised if more than 1 in 50 software developers is heavily involved in NoSQL or real cloud work or Big Data or the latest sexiest analytics work or in advanced mobile. The huge majority of us do the grunt work, and most of that is not done very well.
But I think we all know where we are headed. We used to have mediocre desktop software. We moved from there to mediocre web applications. Mobile has now provided us with thousands of mediocre portable apps we can put on our smartphones and tablets. And the cloud offers us the opportunity to host mediocre applications somewhere else other than on our own servers.
Spend some time down the road thinking about software engineering. Real honest-to-God Fred Brooks, Don Knuth, Steve McConnell, Trygve Reenskaug software engineering. Ask yourself when was the last time you participated in writing real applications that were solid and actually made a client happy. And ask yourself how you can make that happen more often.
Monday, January 23, 2012
Where Good Interview Questions Come From
This is a perennial topic. What are good questions to ask in a technical interview? How much, if any, should you rely on a practical test, perhaps a take-home, to substitute for whiteboard or verbal answers to questions? How detailed should questions be? How do you tailor your questions to the experience level required of the advertised position?
Regardless of how I, or you, conduct your technical interviews, we probably do ask some questions. And I have chosen a few topics that I think deserve interview questions.
Static vs Dynamic Typing, Strong Typing vs Weak Typing, Type Safety
This is important stuff. Your first objective, in asking questions related to these topics, is to detect any "deer in the headlights" expressions. It is absolutely OK for a candidate, unless they are interviewing for a senior position, to need some time to gather their thoughts about the subject of typing and type systems, and to visibly formulate answers. It is not OK for a candidate, at any level, to clearly not even know what you are talking about. Not with this stuff.
Short, key concepts that you hope the interviewee knows a bit about include:
Static typing: compile-time type checking;
Dynamic typing: run-time type checking; types are carried by values rather than being fixed on variables;
Strong typing, Weak typing, Type safety: it is fair to discuss these together. Type safety is about not permitting type conversions or operations that lead to undefined or erroneous program conditions. Strong typing means placing restrictions on operations that intermix operands of different data types; weak typing allows implicit conversions or casts.
A free-wheeling discussion should allow the interviewee, perhaps to their own surprise, to advance the notion that a weakly typed language could in fact be type safe.
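A few concrete Java talking points can anchor that discussion; the snippet below is just one possible prop for the interview, not a definition of any of the terms.

public class TypingExamples {
    public static void main(String[] args) {
        // Static typing: the error below is caught at compile time, so it stays commented out.
        // int n = "forty-two";             // does not compile

        // Implicit conversions: the mild form of weak typing that Java does allow.
        double d = 3 + 0.14;                // the int 3 is widened to double
        String s = "total: " + 42;          // the int 42 is converted to a String

        // Type safety: a bad downcast is not undefined behaviour - it fails with
        // a well-defined ClassCastException at run time.
        Object o = "hello";
        try {
            Integer i = (Integer) o;
        } catch (ClassCastException e) {
            System.out.println("checked at run time: " + e);
        }
        System.out.println(d + " / " + s);
    }
}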
Threads, Concurrency and Parallelism
How much we expect from the interviewee here depends on their seniority. Certainly even the most junior programmer should know this much: that a thread is the smallest independent unit of work that can be scheduled by an OS. Generally a thread will live inside a process, and share the resources of this process with other threads.
The candidate should be aware, specifically, that if threads are used to implement concurrency, the main thing to be concerned about is shared, mutable state. There is no need to synchronize threads if there is no shared, mutable state.
A more senior candidate should have something to say about the distinction between concurrent and parallel programs. Concurrency is about simultaneity - in other words, multiple and possibly related operations taking place at the same time. A web server that handles many different requests is an example of concurrency. Parallelism is about problem-splitting. Computing a histogram for an image by first computing the histograms for each quarter of the image (and computing those histograms by getting the histograms for their quarters, and so forth) is an example of parallelism. Concurrency is about handling many problems at the same time; a parallel program handles a single problem.
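If you want something tangible on the parallelism side, a sketch along the following lines (Java's fork/join framework, with hypothetical sizes and thresholds) captures the problem-splitting idea from the histogram example.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class HistogramTask extends RecursiveTask<int[]> {
    private static final int THRESHOLD = 10_000;  // below this, just compute sequentially
    private final int[] pixels;                   // values assumed to be in 0..255
    private final int from, to;

    public HistogramTask(int[] pixels, int from, int to) {
        this.pixels = pixels;
        this.from = from;
        this.to = to;
    }

    @Override
    protected int[] compute() {
        int[] histogram = new int[256];
        if (to - from <= THRESHOLD) {
            for (int i = from; i < to; i++) {
                histogram[pixels[i]]++;
            }
            return histogram;
        }
        int mid = (from + to) / 2;
        HistogramTask left = new HistogramTask(pixels, from, mid);
        HistogramTask right = new HistogramTask(pixels, mid, to);
        left.fork();                              // compute the left half in parallel...
        int[] rightResult = right.compute();      // ...while this thread does the right half
        int[] leftResult = left.join();
        for (int i = 0; i < 256; i++) {
            histogram[i] = leftResult[i] + rightResult[i];
        }
        return histogram;
    }

    public static void main(String[] args) {
        int[] pixels = new int[1_000_000];        // all zeros, just to show the plumbing
        int[] result = new ForkJoinPool().invoke(new HistogramTask(pixels, 0, pixels.length));
        System.out.println("count of value 0: " + result[0]);
    }
}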
Any candidate who professes some knowledge of concurrent programming in a given language should be asked to describe shared state in that language. For example, a Java programmer should know that local variables are not shared, but that instance and static fields can be.
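A small sketch makes the point concrete: the instance field below is shared by every thread holding a reference to the same object, while the local variable inside the task never is.

public class Counter {
    private int count = 0;                        // instance state: shared and mutable

    public synchronized void increment() {        // without synchronization, updates can be lost
        count++;
    }

    public synchronized int value() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter shared = new Counter();
        Runnable work = () -> {
            int local = 0;                        // local variable: private to each thread
            for (int i = 0; i < 100_000; i++) {
                local++;
                shared.increment();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(shared.value());       // reliably 200000 with the synchronization in place
    }
}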
A senior interviewee ought also to know something about various types of concurrency. These different forms vary in how multiple threads of execution communicate. Shared-state (e.g. Java), message-passing (e.g. Erlang), and declarative/dataflow concurrency are major types.
Here you are also looking for "deer in the headlights" expressions. It is OK for a junior, maybe even an intermediate, maybe even a senior, not to know every synchronization mechanism under the sun, especially depending upon their language exposure. But it is not OK for any of them to not know why we synchronize in the first place.
Data Structures and Algorithms
This subject can get pretty religious. I think I sort of land in the middle. I am the guy who believes it is quite unnecessary to know about the guts of a red-black tree, or to know the details of a dozen different sorting algorithms. On the other hand, I also believe that you should know that a red-black tree is a self-balancing binary search tree. On the subject of sorting I personally would be content if a candidate knew the names of half a dozen (or dozen) sorting algorithms, knew how to use the default sorting algorithm(s) in his working languages, and otherwise seemed to know how to do research.
I would expect pretty much any interviewee to be able to explain what a map (associative array, dictionary) is good for; I would not care if he did not know that efficient implementations often use hash tables or self-balancing binary search trees (see red-black tree above). You tell me how that knowledge is going to help a typical application programmer...I am sure I do not know the answer to that.
I do expect a programmer to know what their working languages supply. I have little tolerance for any programmer who cannot be bothered to investigate the library APIs for the languages that they use. This is a sin.
Pose questions that make an interviewee wrestle with the choice of what collection type to use to solve a problem.
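For example (a hypothetical prompt): ask the candidate to count word frequencies in a document and explain why a map beats a list of pairs. A sketch of the obvious answer:

import java.util.HashMap;
import java.util.Map;

public class WordCount {
    public static void main(String[] args) {
        String[] words = "the quick brown fox jumps over the lazy dog the end".split(" ");
        Map<String, Integer> counts = new HashMap<>();
        for (String word : words) {
            Integer current = counts.get(word);   // expected O(1) lookup per word
            counts.put(word, current == null ? 1 : current + 1);
        }
        System.out.println(counts.get("the"));    // 3
    }
}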
Summary
These are examples. I hope they help.
Thursday, September 29, 2011
CDI and Enterprise Archives
I would be the first one to admit that I do not yet have total mastery of Contexts and Dependency Injection (CDI) in Java EE 6. I understand it fairly well, I believe. But there has not yet been a chance for me to use it on a real job, since no clients we deal with have moved up to Java EE 6. So far it has just been experimental projects on Glassfish 3.1 or JBoss AS 6/7.
Having said that, it took me aback when I tried to inject (with @Inject) a session bean from an EJB JAR, in an EAR project, into a JSF managed bean. I used @Named, not @ManagedBean, and the scope annotation was the CDI one, not the JSF one.
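For the record, the setup looked roughly like the following sketch (all names are hypothetical; the real code was more involved).

// AccountService.java - a session bean packaged in the EJB JAR of the EAR
import javax.ejb.Stateless;

@Stateless
public class AccountService {
    public String summary() {
        return "account summary";
    }
}

// AccountBean.java - the JSF managed bean in the WAR, CDI-managed via @Named
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@RequestScoped
public class AccountBean {

    @Inject
    private AccountService accountService;   // the cross-module injection in question

    private String summary;

    public String load() {                    // the JSF action that triggered the error
        summary = accountService.summary();
        return null;                          // stay on the same view
    }

    public String getSummary() {
        return summary;
    }
}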
When I tried to invoke a JSF action on the managed bean, I kept on getting the standard NPE that we see if there is no beans.xml. In fact the wording was identical. It indicated that the JSF managed bean could not be located. Now, I certainly had a beans.xml - several in fact. And some experimentation with annotations revealed that CDI was in effect.
So why was it that CDI could not locate the JSF managed bean?
Turns out that it had everything to do with trying to inject an EJB from the EJB JAR into the JSF managed bean in the WAR. In all of my early experiments I had used one monolithic WAR that included the EJBs, so I never had this problem. And for the life of me I could not conceive of a situation where you could not inject EJBs from an EJB JAR in the same EAR as the WAR.
Well, evidently you cannot. At least not in Glassfish.
I finally had to use the approach described here: inject-a-stateless-ejb-with-inject-into-cdi-weld-managedbean-jsf-1-2-ejb-application
I should note that while this link references JSF 1.2, I also had to do this for JSF 2.0. I should also note that the server exception was very misleading - it was the @Inject that was failing, not the @Named on the JSF managed bean.
Based on various problem reports:
WAS 8.0: FAILURES INJECTING EJBS WITH @INJECT AND BEANS FROM THE EAR's LIB DIRECTORY.
JBoss AS 7: CDI/EJB Injection in EAR Deployments
CDI Broken between EJB Module and JPA Utility JAR
it sure looks to me like all of this has not been resolved. It could be that the specifications are clear on this, and I plan to do some intensive reading. But what good is that if the implementations are not getting it?
This article, JBoss 6 and Cluster-wide EJB Injection, sort of leaves the impression that we should not be having this kind of problem. Furthermore, the discussion in Chapter 8 of the Seam documentation, Producer Methods, leads me to believe that @Produces ought not to be required, since we already have an EJB. Needing a producer just to make an EJB from an EJB JAR available for injection into a WAR - in the same EAR - seems pretty clunky. I just wonder why I should not use @EJB instead.
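For completeness, the workaround I ended up with looks roughly like this: a resource producer field in the WAR that grabs the session bean with @EJB and re-exposes it to CDI, so that the @Inject in the managed bean can resolve. The bean name is hypothetical.

import javax.ejb.EJB;
import javax.enterprise.inject.Produces;

public class EjbProducers {

    @Produces
    @EJB(beanName = "AccountService")   // bridge: container EJB reference -> CDI-injectable bean
    private AccountService accountService;
}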
Anyway, back to the books, I guess. There are still decent reasons why one might want an EAR with separate EJB JARs, and WARs, and JPA utility JARs, and what not. And I refuse to believe that the specs mandate that an awkward workaround has to be applied for EJBs that are to be injected, if they reside in a different module than the one that contains the injection point.
Saturday, September 17, 2011
Professional Development Redux
It is worth reading Why I Go Home: A Developer Dad’s Manifesto. There is a lot of truth to the observations that developer hours can be unnecessarily long, or be haphazard and unpredictable. And any push towards moving the North American mindset away from "live to work" towards "work to live" has a lot of resonance with me.
I will say this though. I get the feeling that Adam Schepis is atypical. He says he loves his career, and he says he loves crafting great code. I believe him. He maintains a blog with lots of technical content. And what is a key consideration for me, he wrote Drinking from a Firehose. In my opinion, anyone who cares enough about his software development career to be concerned about managing his professional reading, let alone what to read, does not have to defend his decision to try to keep his working hours sane, or to spend time with family. Because he is already doing what most career programmers do not do, which is to pursue professional development on their own time and own dime.
So let us talk about the typical programmer. You know, the man or woman you have to work with. You yourself probably do not fit the bill, because you are reading this.
Let us proceed on the assumption that there is rarely any excuse for routine long working days, unless you are in your early twenties and working for a startup and you have willingly chosen to spend your life that way. There is no excuse for long days even in the waning weeks and days of a project - that is just piss-poor team planning and execution if and when it happens. That it happens a lot in our profession simply indicates that a lot of us, myself included, are occasionally (or often) piss-poor planners and executors.
So let us assume that we have solved all of those problems and are not in fact working frequent long days. With all due respect afforded to Adam, I suggest that if he is in such an environment, and he does not have the power to effect necessary changes, he needs to find another place. Because otherwise his decision to keep sane hours will hurt him. It is a commendable goal, but it can only happen in an organized workplace. So find that organized workplace first.
In this M-F 8-4 or (7-3 or 9-5) environment the question then becomes: what are your personal responsibilities for professional development?
I have worked in environments - government labour union environments - where some developers were able to safely refuse not only any personal professional development on their own time, but even personal professional development offered to them on taxpayer time. That is an aberration and perversion, and I will not discuss that further. What we are talking about is the typical situation, where your employer is paying you to be productive.
Most good employers, if possible, will pay for training and education. Most good employers will permit some of these activities to happen on paid time, particularly when the training and education involves employer-driven sharp changes in selected technologies to be mastered. If a company directive comes down to re-orient on PostgreSQL rather than Oracle, or to make a big push into CMIS web services, it is not only unrealistic for the employer to expect all employees to ramp up on their own time, but it is also not in the employer's best interests.
But what about foundation technologies? These are "bread and butter" technologies. They consist of all those technologies that a developer in a certain niche (domain sector, seniority, work position etc) would reasonably be expected to know no matter what. If someone mostly deals with enterprise Java applications, that set of foundation technologies includes Java SE and Java EE. If someone deals with enterprise .NET applications, that set of foundation technologies includes .NET Framework, C#, ASP.NET MVC, WPF (maybe), WCF, and maybe WF (Workflow Foundation).
Is it reasonable to expect an employer to pay for developer training and education for moving to Java EE 6, when said same developer does nothing but Java EE 5 already? I have seen some programmers argue exactly this case, and while I believe that they are dead wrong, the argument does need to be refuted.
Before doing so, let us discuss informal on-the-job training (OJT). By informal I mean just that: as a developer you encounter something new during paid hours, and with no further ado you teach yourself about it. Not one of us knows all we need to know, so we continually encounter unfamiliar things, like unfamiliar APIs. Some degree of informal OJT is expected, accepted and even encouraged.
But at some point informal OJT can be abused. If there is too much of it then it needs to be changed to either formal OJT, or the developer should be learning on their own. The main reason why too much informal OJT is problematic is because it skews and distorts project management: how can you ever hope to estimate effort, or trust your designs, when your developers evidently did not know all that much about the target technologies before they started?
As mentioned above, a good case for formal OJT is when the developers could not reasonably anticipate the need for the new technologies, but the employer requires the knowledge. After all, we cannot know everything all the time.
And what can a developer reasonably anticipate? This goes to the refutation I mentioned above. Well, this is not all that difficult to define. Basically, if a technology is foundational for your business sector then you can anticipate needing to know it. If a new hire is expected to know it, then you had best know it too, and not on the employer's dime either. Would it be acceptable for a job candidate seeking work at an enterprise Java shop in the year 2011 to say that they do not know anything about Java EE 6, which has been out for almost 2 years (since Dec 2009)? Well, no, it would not. So why is it OK for an established employee in the same shop to slide?
In fact it is not OK for the established employee to do that.
Ultimately it all boils down to common sense. Software development is a fast-moving profession where the majority of employers do try and meet us part-way on training and education issues. Note the part-way. This means that all of us - job candidates or established employees - have a responsibility to spend some of our own time keeping up. And it is not rocket science to figure out what you should be keeping up with on your own.
Please do not tell me that you have zero personal responsibilities in this regard. If you tell me that in a professional conversation, and you are my colleague or my employee, at that point you are baggage in my eyes. I am sorry, but you are a self-declared liability.
There are some software developer jobs where it takes a lot of personal professional development to keep up. In fact it can occupy so much time that you cannot pursue some other activities at the same time. This may include good parenting in extreme cases (although I have never seen any cases where this had to be so; other factors always caused the real problem). Fact of life. If this is so assess your priorities, like Adam has done. Make your choices, accept that there are consequences, and move on. If you have to change programming jobs do it. If you have to change careers, do that. But please do not tell me, or anyone else, that you have no personal responsibility to self-educate and self-train at all. Please. If you genuinely believe that, you should go on welfare.
What are reasonable rules of thumb for own time professional development? I am not talking about your under-25 caffeine-pounding needs-4-hours-of-sleep no-real-life coding phenom here, I am thinking about us regular people who have a passion for software but also have a passion for family, friends, riding a mountain bike, fishing, barbecuing, scuba, and playing golf. What is reasonable for us?
Here is my rule of thumb: fifty (50) hours per month. I actually exceed this by a lot, but I know why, and I have a reason for doing it. But I still do not skimp on my recreations and hobbies and relaxation; mainly it is that I am past parenting age. The fifty hours per month rule is for all you 25-45 types, parents or no. Here is how I arrive at the figure: one hour per day for reading - call it thirty hours a month. Read your blog articles or read your books, whatever. The other twenty hours is for personal coding projects - this is where you experiment. I happen to think this experimentation is essential.
This may seem like a lot of time, but it is very doable. We - all of us - can waste a lot of time each and every day. How much sleep do you need? Eight hours at the most, but usually seven will do. So we have got about 520 hours per month. Fifty hours is less than 10 percent of that. You think you do not have 10 percent wastage in your time use? Please - think again.
We have the time, and we have the obligation. Enough said.
I will say this though. I get the feeling that Adam Schepis is atypical. He says he loves his career, and he says he loves crafting great code. I believe him. He maintains a blog with lots of technical content. And what is a key consideration for me, he wrote Drinking from a Firehose. In my opinion, anyone who cares enough about his software development career to be concerned about managing his professional reading, let alone what to read, does not have to defend his decision to try to keep his working hours sane, or to spend time with family. Because he is already doing what most career programmers do not do, which is to pursue professional development on their own time and own dime.
So let us talk about the typical programmer. You know, the man or woman you have to work with. You yourself probably do not fit the bill, because you are reading this.
Let us proceed on the assumption that that there is rarely any excuse for routine long working days, unless you are in your early twenties and working for a startup and you have willingly chosen to spend your life that way. There is no excuse for long days even in the waning weeks and days of a project - that is just piss-poor team planning and execution if and when it happens. That it happens a lot in our profession simply indicates that a lot of us, myself included, are occasionally (or often) piss-poor planners and executors.
So let us assume that we have solved all of those problems and are not in fact working frequent long days. With all due respect afforded to Adam, I suggest that if he is in such an environment, and he does not have the power to effect necessary changes, he needs to find another place. Because otherwise his decision to keep sane hours will hurt him. It is a commendable goal, but it can only happen in an organized workplace. So find that organized workplace first.
In this M-F 8-4 or (7-3 or 9-5) environment the question then becomes: what are your personal responsibilities for professional development?
I have worked in environments - government labour union environments - where some developers were able to safely refuse not only any personal professional development on their own time, but even personal professional development offered to them on taxpayer time. That is an aberration and perversion, and I will not discuss that further. What we are talking about is the typical situation, where your employer is paying you to be productive.
Most good employers, if possible, will pay for training and education. Most good employers will permit some of these activities to happen on paid time, particularly when the training and education involves employer-driven sharp changes in selected technologies to be mastered. If a company directive comes down to re-orient on PostgreSQL rather than Oracle, or that a big push is to be made into CMIS web services, it is not only unrealistic for the employer to expect all employees to ramp up on their own time, but it is also not in the employer's best interests.
But what about foundation technologies? These are "bread and butter" technologies. They consist of all those technologies that a developer in a certain niche (domain sector, seniority, work position etc) would reasonably be expected to know no matter what. If someone mostly deals with enterprise Java applications, that set of foundation technologies includes Java SE and Java EE. If someone deals with enterprise .NET applications, that set of foundation technologies includes .NET Framework, C#, ASP.NET MVC, WPF (maybe), WCF, and maybe WF (Workflow Foundation).
Is it reasonable to expect an employer to pay for developer training and education for moving to Java EE 6, when said same developer does nothing but Java EE 5 already? I have seen some programmers argue exactly this case, and while I believe that they are dead wrong, the argument does need to be refuted.
Before doing so, let us discuss informal on-the-job-training (OJT). By informal I mean just that: as a developer you encounter something new during paid hours, and with no further ado you teach yourself about it. Not one of us knows all we need to know, so we continually encounter unfamiliar things, like unfamiliar APIs. Some degree of informal OJT is expected, accepted and even encouraged.
But at some point informal OJT can be abused. If there is too much of it then it needs to be changed to either formal OJT, or the developer should be learning on their own. The main reason why too much informal OJT is problematic is because it skews and distorts project management: how can you ever hope to estimate effort, or trust your designs, when your developers evidently did not know all that much about the target technologies before they started?
As mentioned above, a good case for formal OJT is when the developers could not reasonably anticipate the need for the new technologies, but the employer requires the knowledge. After all, we cannot know everything all the time.
And what can a developer reasonably anticipate? This goes to the refutation I mentioned above. Well, this is not all that difficult to define. Basically, if a technology is foundational for your business sector then you can anticipate needing to know it. If a new hire is expected to know it, then you had best know it too, and not on the employer's dime either. Would it be acceptable for a job candidate seeking work at an enterprise Java shop in the year 2011 to say that they do not know anything about Java EE 6, which has been out for almost 2 years (since Dec 2009)? Well, no, it would not. So why is it OK for an established employee in the same shop to slide?
In fact it is not OK for the established employee to do that.
Ultimately it all boils down to common sense. Software development is a fast-moving profession where the majority of employers do try and meet us part-way on training and education issues. Note the part-way. This means that all of us - job candidates or established employees - have a responsibility to spend some of our own time keeping up. And it is not rocket science to figure out what you should be keeping up with on your own.
Please do not tell me that you have zero personal responsibilities in this regard. If it is a professional conversation, and you are my colleague or my employee, at that point you are baggage in my eyes. I am sorry, but you are a self-declared liability.
There are some software developer jobs where it takes a lot of personal professional development to keep up. In fact it can occupy so much time that you cannot pursue some other activities at the same time. This may include good parenting in extreme cases (although I have never seen any cases where this had to be so; other factors always caused the real problem). Fact of life. If this is so assess your priorities, like Adam has done. Make your choices, accept that there are consequences, and move on. If you have to change programming jobs do it. If you have to change careers, do that. But please do not tell me, or anyone else, that you have no personal responsibility to self-educate and self-train at all. Please. If you genuinely believe that, you should go on welfare.
What are reasonable rules of thumb for own-time professional development? I am not talking about your under-25, caffeine-pounding, needs-4-hours-of-sleep, no-real-life coding phenom here; I am thinking about us regular people who have a passion for software but also have a passion for family, friends, riding a mountain bike, fishing, barbecuing, scuba, and playing golf. What is reasonable for us?
Here is my rule of thumb: fifty (50) hours per month. I actually exceed this by a lot, but I know why, and I have a reason for doing it. But I still do not skimp on my recreations and hobbies and relaxation; mainly it is that I am past parenting age. The fifty-hours-per-month rule is for all you 25-45 types, parents or not. Here is how I arrive at the figure: one hour per day for reading, which works out to roughly thirty hours a month. Read your blog articles or read your books, whatever. The other twenty hours are for personal coding projects - this is where you experiment. I happen to think this experimentation is essential.
This may seem like a lot of time, but it is very doable. We - all of us - can waste a lot of time each and every day. How much sleep do you need? Eight hours at the most, but usually seven will do. That leaves about seventeen waking hours a day, or roughly 520 hours per month. Fifty hours is less than 10 percent of that. You think you do not have 10 percent wastage in your time use? Please - think again.
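If you like to see the arithmetic spelled out, here is a throwaway sketch of the budget; the inputs are just the rough assumptions above (seven hours of sleep, a thirty-day month), nothing more:

    // Back-of-the-envelope check of the fifty-hours figure; the inputs are the
    // rough estimates from the paragraph above, not measurements.
    public class TimeBudget {
        public static void main(String[] args) {
            int wakingHoursPerDay = 24 - 7;                   // assume ~7 hours of sleep
            int wakingHoursPerMonth = wakingHoursPerDay * 30; // ~510 waking hours a month
            int readingHours = 1 * 30;                        // one hour of reading per day
            int codingHours = 20;                             // personal coding projects
            int total = readingHours + codingHours;           // 50 hours
            System.out.printf("%d of ~%d waking hours = %.1f%%%n",
                    total, wakingHoursPerMonth,
                    100.0 * total / wakingHoursPerMonth);     // prints roughly 9.8%
        }
    }

Call it ten percent of your waking hours, give or take.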
We have the time, and we have the obligation. Enough said.
Friday, September 16, 2011
If You Think You Need A Better Web Framework...
...then you are identifying the wrong problem.
About two decades after my first exposure to programming (FORTRAN IV on punched cards) I started with the World Wide Web. I carefully crafted a simple HTML page with some <br /> and <i> and <h2> and <p> elements - you get the drift - and opened it as a file in NCSA Mosaic. I do not mind admitting that I was really chuffed. For the next few years after that I did not really program to the Web a whole bunch; when I did it was mostly C and Perl CGI.
Although PHP and ColdFusion emerged at about the same time, I did not use PHP in paid work until about 2006, and then only briefly. ColdFusion was actually my first reasonably high-powered web development language, and I will mention it again in a moment.
I started dabbling with Java servlets just about as soon as they became reasonably viable with the release of Servlet API 2.2 in 1999. Ever since then the portion of my work that has involved web applications has been about 75% Java EE, 20% ASP and ASP.NET, and 5% other (like Ruby on Rails and Scala Lift).
It is at this point that I will make a rather shit-disturbing claim, if you will pardon the language. Namely:
Decent programmers using Allaire ColdFusion (specifically the CFML markup) were as productive in writing useful web applications in the late 1990s as decent programmers are, using anything else, in the year 2011.
By decent I mean average or somewhat better than average, but not stellar.
I have a second assertion to make also, but I will lead into that by referring you to David Pollak's comments about Scala: Scala use is less good than Java use for..., and Yes, Virginia, Scala is hard. I happen to totally and unreservedly agree with everything David says in these two articles. I will supplement his observations by saying that most web application programmers do not have the chops, time or passion to leverage the best out of any language, framework or platform. For example, I think Java EE 6 kicks ass, and I also believe that most enterprise Java programmers will never get enough of the necessary nuances, idioms and sheer facts about the various Java SE and EE APIs, and core Java as of JDK 1.6/1.7, to be particularly good in that environment.
In effect I think you can extend David's argument to almost anything out there. Writing good Java is hard, and writing good Java EE is harder. It is true that writing good Scala is even harder, but why worry about that when most coders out there cannot even write decent C# or Java or Ruby or Python?
Having said all that, here is my second claim:
The specific web framework you choose to use in language X is largely irrelevant. It is irrelevant for the majority of web application programmers because they are only average at best, mediocre or terrible at worst, and so cannot take advantage of the features that make one framework better than another. It is irrelevant for great programmers because they can make pretty much anything work well...and anyway there are bigger problems to solve in application creation.
I mean, let us be realistic here. In my entire web application writing career I do not remember a single project ever succeeding or failing because of the underlying technology. I really, really do not. I have had experience with classic ASP and CGI applications - truly ugly things - that reliably solved their respective problems, for example. And I have had lots and lots of exposure to web applications that failed even with the latest and greatest web frameworks and the best application servers. Do not get me wrong - I can think of more than a few projects that would have failed if lessons learned in prototyping, in proof of technology (PoT) work, or in early-stage coding had not been acted upon quickly and decisively; often enough some technologies were discarded and replaced. My point is that, given due diligence and proper research and preparation and project management, I cannot think of any project that failed because of the final, carefully chosen technology stack.
And carefully chosen frequently means nothing more than that it is reliable, your team is reasonably familiar with it, and there is good support for it. It does not have to mean that it is the best, not by a long shot. I still spend a fair chunk of time maintaining applications that selected neither the best language, nor the best libraries, nor the best frameworks, nor the best servers...but those choices were (and are) good enough. The applications themselves sink or swim based on sound software engineering.
The common theme here is that web applications - software applications period - succeed or fail based on tried and true software engineering principles. The various stakeholders talk to each other, people understand what the problem is, and people understand the solution. Let us not forget that all that web frameworks do is help us stitch together functionality - if the functional components are crap, or the designers and developers do not thoroughly understand the stitching (workflow), then it does not matter how great your web framework is.
Keep in mind that most software application teams do not do a great job at analysis or design. It may often be an OK job, but is not usually a great job. A very good framework cannot save mediocre analysis and design. Not only that, if the analysis and design is above average, then almost any framework will do. To reiterate:
The choice of web framework in language X is largely irrelevant.
Do not get me wrong - most of us have our pet frameworks in any given language we use. But that is all those choices are: pet choices.
The next time you interview someone to help with implementation of a web application, ask about MVC or CDI, and various types of controllers, and lifecycles and scopes, and application security, and various types of dependency injection. Please do not nitpick about Struts 2 versus JSF 2 versus Spring MVC versus Wicket versus...well, you get the idea. Seriously, who cares?
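To be concrete about what I mean by lifecycles, scopes and dependency injection, here is a minimal Java EE 6 CDI sketch. The names (OrderController, OrderService) are invented purely for illustration, the interface would normally live in its own file, and the archive needs a beans.xml so the container scans it:

    import javax.enterprise.context.RequestScoped;
    import javax.inject.Inject;
    import javax.inject.Named;

    // Hypothetical service contract, shown here only to keep the sketch self-contained.
    interface OrderService {
        void place(long orderId);
    }

    // A container-managed bean: @RequestScoped gives it a per-request lifecycle,
    // and @Inject asks the CDI container to supply an OrderService implementation.
    @Named
    @RequestScoped
    public class OrderController {

        @Inject
        private OrderService orderService;  // injected by the container, never new-ed up

        public String placeOrder(long orderId) {
            orderService.place(orderId);
            return "confirmation";          // an outcome a view layer can navigate on
        }
    }

A candidate who can explain why the scope matters and where that implementation comes from understands more than any amount of framework-specific trivia.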
Mind you, if someone makes a big deal out of how expert they are at JSF 2, say, it cannot hurt to ask them some really detailed questions. Just to see if they are full of it. But do not waste a lot of time on this.
The web framework you use is relevant to maybe 10 percent of your implementation effort, at most. If it is more, you are skimping on something, or it is a toy application. So why is something that is fairly immaterial in the big scheme of things so often blown out of proportion? You get religious wars about Struts versus Spring versus JSF, and in the meantime half your developers do not know how to use JPA properly, do not understand concurrency at all, have never in their lives written an actual servlet, their eyes glaze over when you mention coupling and cohesion, and many of them have never written an inner class. Even better, three quarters of your Java developers know only Java.
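And for the record, this is roughly all that "an actual servlet" amounts to under Servlet 3.0, which is part of Java EE 6; the class name and URL pattern here are made up for illustration:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // A bare servlet; under Servlet 3.0 the @WebServlet annotation replaces web.xml wiring.
    @WebServlet("/hello")
    public class HelloServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("Hello from a plain servlet, no framework in sight");
        }
    }

A developer who is comfortable at this level will cope with whatever framework you layer on top of it.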
A final note: One of my first ColdFusion projects involved pumping HDML and WML out to cellphones, and HTML out to early PDAs (the first Palms and PocketPCs), with the application integrating with credit card payment processing. This was over a decade ago - nobody else was doing this at all. The application was reliable - totally rock-solid, maintainable, and performant. It was easy to extend and to modify. And I firmly believe that in the year 2011, with your pick of technology stack, 8 or 9 out of 10 teams could still not do a better job in a comparable amount of time.