Tuesday, May 22, 2012

Have I Actually Tried That Yet?

Almost every software project involves something new. Sometimes the novelty is at the level of the programming language itself, but mostly I am thinking of APIs and libraries and frameworks and servers and applications. You've invested in a SOA offering like Oracle OSB, or an ECM system like IBM FileNet P8, or a web app framework like JSF, or an app framework like Spring, or some security implementation like Apache Shiro, or an app server like Glassfish, or bet the farm on writing your desktop apps with C# 4.x, or gone with NoSQL instead of SQL Server.

You made all those choices because you did some research, or you played with the candidates on your own time, or you know people who use them.

But now you actually have to do X and Y and Z in a certain API furnished by a certain library that you chose, or configure security with that security implementation you picked along with that app server they picked, or customize certain features in the ECM system that everyone committed to (way too early), or build a certain integration with the ESB you picked out of a hat.

And you find out that Spring LDAP in conjunction with Apache CXF doesn't work as neatly as you thought for authorization. You find out that a lot of custom work is needed to implement those ECM features (not what the vendor docs suggested, is it?). You find out that everything is taking much longer with Oracle OSB than you thought: you are getting it, but your junior co-workers are struggling to make those few critical things work. You invest weeks in writing up most of your C# desktop app just to find out that it's going to be bloody difficult to code up a certain feature. You block on a certain set of methods in a library that turn out to be buggy - Defect #14156 in Jira.

Common theme: you assumed that things would work, or that they would be relatively easy to understand from documentation, or that since the library or application clearly supported feature X, surely it would also do feature Y.

Bad mistake.

You usually commit to major software quite early in a project. Once committed it is very difficult to justify backing out of a choice. And quite frankly, you can almost always make something work eventually. The problem arises in not accounting for the unknowns.

Rule of thumb: if you have to do something with a library or framework or server or application, and you have never tried it before, estimate the initial potential effort at one-half work-week - 20 hours. This does not mean the finished implementation: this means the Proof of Technology (PoT).

This may seem extravagant. It translates into roughly eight untried capabilities per person-month (at 20 hours apiece against a 160-hour month). I do not mean major capabilities either: it is not going to take you just 20 hours to figure out how to implement SAML for the first time. No, this twenty hours apiece is for unit tasks at a fine granularity...like already knowing Spring LDAP, and already being conversant with Apache CXF, and even knowing how to combine the two to implement LDAP authentication for a web service, but not knowing yet how to do authorization. If you have never implemented authorization with that combination, estimate the initial PoT at 20 hours.

Any given new fine-grained PoT task may take more than 20 hours - it might burn a week. That's balanced out by those that take 4 hours.

Any experienced developer has spent a week or two - a solid week or two - on one little thing that was supposed to work. But it had never been tried before. Who knew? Come to Google time, evidently nobody else had ever made it work either. Or even better, you find a project committer explaining shamefacedly in an obscure thread that yes, while the feature is documented, it doesn't work correctly...yet.

How to explain this extra time to the manager or to the customer? Well, here's the thing. Ideally this estimated PoT time is not more than 25% of your total implementation effort. If it is, then the entire nature of your project is different. You've now got an experiment. There is nothing wrong with this, but you need to communicate that information to your manager or customer as soon as possible.
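
To put illustrative numbers on it (the figures are invented): suppose the detailed design turns up six fine-grained tasks that nobody on the team has done before. That is 6 x 20 = 120 hours of PoT work. Against a 600-hour implementation estimate the PoT share is 20%, and you still have a project. Against a 300-hour estimate it is 40%, and what you really have is an experiment - say so early.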

How to estimate? How to locate these PoT points? This should not be that tough. Once you've got your detailed design, and you know what software you're using, work through the combinations and list off each fine-grained task. Make it as fine-grained as you can. Ask yourself the question: have I, or any team members, done exactly this before? If no, it's a PoT.

I hope this helps.

Sunday, March 18, 2012

More Notes on DCI

I have been really interested in Data Context Interaction (DCI) for some years now. Before I go any further, I will direct readers to a comment by Bill Venners in a discussion from 2009. Here are some key quotes from that:
DCI ... challenges a basic assumption inherent in the notion of an anemic domain model. The assumption is that an object has a class.
and
DCI challenges the assumption that each object has one and only one class. The observation is that objects need different behavior in different contexts.
This is really important, and it highlights a distinction that we need to draw between objects and class instances. We will talk about that more later.

If some of the above terminology is unfamiliar, Fowler has a decent explanation of anemic domain models. I do not completely agree with some of the discussion (including by extension some of what Eric Evans, who is quoted by Fowler, has to say) but the explanation of anemic domain models is good.

The idea that class-oriented OO has not been serving us well is not new. The DCI folks (Reenskaug, Coplien, et al.) certainly think so. What is class-oriented or class-based OO exactly?

For most programmers this question is most easily answered by explaining that there are other styles of OO programming that have no classes at all. I refer you to Wikipedia's article on prototype-based programming. These days it is a safe bet that many programmers have used at least one language that allows prototype-based OO: JavaScript. The fact that JavaScript tends to be used as small snippets in web pages obscures the more general utility of this capability in the language itself. After all, JavaScript is a full-fledged language; it is not just for browsers.

Most programmers use OO languages that are fully class-based: Java, C#, C++, among others. We get objects by creating new instances of classes. All objects that are instantiated from the same class "template" obey the same contract for behaviour, and have the same invariants. Only the instance state can vary.

Where the problem arises is in determining where to place behaviour. Some behaviour clearly belongs in one domain class, and no other. It is reasonable, for example, that a Shape object can report its area and perimeter and other geometric facts. It is reasonable that a Matrix can calculate its own determinant or eigenvalues.

But what about business logic that involves several classes? Fowler and Evans tell us that the domain layer contains business rules, and that the domain objects represent business state. Their picture of the application or service layer is clearly such that "business logic that involves several classes" should be in the domain layer. OK, in which case which class do you put it in? Does class A get a method that accepts parameters of types B and C? Or does class B get a method that accepts parameters of types A and C? You get the idea.

No, what happens almost inexorably is that programmers write a method that is supplied with parameters of type class A, class B and class C, and that method ends up in a "service" class (often a session bean in J2EE/Java EE). And a method of that sort is usually quite procedural.
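
To make that concrete, here is a sketch of the familiar shape (all of the names are invented for illustration):

    // Anemic domain types: they hold state and expose a few accessors, nothing more.
    interface Inventory { boolean hasStockFor(Order o); void reserve(Order o); }
    interface Customer  { void addToHistory(Order o); }
    interface Order     { java.math.BigDecimal totalPrice(); }
    class Invoice {
        Invoice(java.math.BigDecimal amount) { /* just data */ }
    }

    // The use case ends up as a procedure in a "service" class.
    class OrderService {
        Invoice placeOrder(Customer customer, Order order, Inventory inventory) {
            if (!inventory.hasStockFor(order)) {
                throw new IllegalStateException("insufficient stock");
            }
            inventory.reserve(order);               // the logic acts on the objects...
            customer.addToHistory(order);           // ...but it lives outside all of them
            return new Invoice(order.totalPrice());
        }
    }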

If you like you could consider these objects as being in the domain layer. They are unlikely to have state per se, but if they do you could call it business state. But nevertheless these classes are merely libraries of procedural code; the logic simply happens to act on objects, but it is still procedural code. The end result is still what Fowler and Evans are warning against.

But in the absence of something like DCI, writing this procedural code is probably the most acceptable thing for a real-world programmer to do. In fact, if done well it actually captures use cases much better than trying to assign all business logic into domain class methods.

For me, DCI is a way of codifying this existing practice, and rather than demonizing it, giving it formalism and structure. For largely the wrong reasons - ignorance if nothing else - OO programmers naturally arrive at the notion of barely smart data (domain classes with minimal behaviour), and things that look like contexts (procedural classes that embody use case logic). Exactly the kind of thing that Fowler warns against.

What has been lacking is system. The ad hoc procedural code that programmers stumble into is rarely cohesive or well-organized or purposeful. It is not easy to find code in one location that applies to one use case. DCI sets this on a firmer footing.
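
To show what I mean by a firmer footing, here is a rough sketch of the DCI shape in plain Java, reusing the invented domain types from the sketch above. Java cannot mix role methods into an object the way Scala traits or Ruby modules can, so this settles for role interfaces plus a context object; take it as the flavour of the technique rather than the real thing:

    // Roles: named for the part an object plays in this one use case.
    interface StockSource     { boolean hasStockFor(Order o); void reserve(Order o); }
    interface PurchasingParty { void addToHistory(Order o); }

    // The context binds concrete objects to roles and holds the interaction -
    // the use case logic - in one findable place, readable in terms of roles.
    class PlaceOrderContext {
        private final PurchasingParty buyer;
        private final StockSource warehouse;
        private final Order order;

        PlaceOrderContext(PurchasingParty buyer, StockSource warehouse, Order order) {
            this.buyer = buyer;
            this.warehouse = warehouse;
            this.order = order;
        }

        Invoice execute() {
            if (!warehouse.hasStockFor(order)) {
                throw new IllegalStateException("insufficient stock");
            }
            warehouse.reserve(order);
            buyer.addToHistory(order);
            return new Invoice(order.totalPrice());
        }
    }

In Java the difference from the service class is admittedly modest - the gain is that the roles are named for the use case and the binding is explicit - which is precisely why the DCI people reach for Scala or Ruby.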


I think it will be interesting to see how this all falls out. There really has not been much uptake for DCI, and the ideas have been around for about three years. Part of the problem is that DCI is quite difficult to do in Java, and is cumbersome and inelegant in C# and C++. It is straightforward in Scala, but few people use Scala. Ruby is a natural fit, and a lot of the applications that Ruby is used for are conducive to DCI, but adoption has been slow. Other dynamic languages, like Perl or JavaScript, also support DCI without too much fuss, but adoption has been slow there too.

One problem - and I am not being facetious - is that the canonical example advanced by the DCI camp, which is the accounts-and-money-transfer example, is atrocious. It is strained and awkward and Godawful. No wonder a lot of programmers have been turned off DCI. Unfortunately it is still by far the #1 example that is to be found on the Web. I hope this changes.

I hope that, if this is the first mention of Data Context Interaction you have run across, it interests you in further reading. I think it is a valuable technique to study. Think through your past OO experiences with rich domain objects, and your problems in placing behaviour that really seems to be algorithmic rather than belonging to an object, and see if DCI cannot help you further along the road to solid, reliable, readable code.

Tuesday, January 31, 2012

The Cloud

I have entitled this post "The Cloud", but it is really not just about the cloud. It is more about how we seem to spend so much of our time inventing new ways to design, create and field substandard applications. People who actually have to write real applications for real businesses, or more often maintain applications that other people wrote, or interoperate with applications that other people wrote, realize that in 2012 we are actually no further ahead in writing quality software that does what the customer wants than we were decades ago, before the Internet even existed.

What is the point in enthusing about flavours of cloud, like IaaS or PaaS or SaaS, or pushing REST over SOAP, or using the latest and greatest SOA technology to redo attempts at EAI or B2B solutions, or jumping on the latest language or framework, when the core business logic - the code that actually really does something useful - is still inferior in quality?

It is not sexy or exciting to work on actual business logic. Create a project to work on NoSQL, or Big Data business analytics, and you have all sorts of people who want to work on the thing. If you throw in the latest languages and frameworks you will have no problems getting people to work on the application either...the technical trappings, that is. But work on the actual business logic? Requirements, design, test plans, coding up the boilerplate, doing the hard work on the domain objects, designing the RDBMS schema? God forbid.

The IT industry needs people who spend years - many years - amassing expertise with dry-as-dust applications. The kinds of technologies that are niche and only meaningful to a handful of people. The industry needs software developers who spend so many years in an application domain that they know more about it than the majority of business practitioners. The industry needs software architects, analysts, designers, coders and testers who can go through exhausting but necessary rituals over and over again, for each new business problem, and deliver quality applications.

The key here is applications. The latest Big Buzz always seems to be about what languages will we write the applications in, what framework will we use, how will we deploy it, how will applications talk to each other...but much less of the conversation is about the actual applications. Sure, there is also always a buzz around the latest methodologies - iterative, agile, lean, unit testing, etc etc - but it is hard not to get the feeling that the application successes are because of high-quality teams that could have done a good job using waterfall if they had to, and not because of the methodologies.

Do not get me wrong. I love the new languages that appear every year. I love the attempts in existing languages to improve and stay relevant. Most of the new development methodologies have a lot to offer. There is much good stuff in SOA and the cloud technologies and NoSQL. The social space is important, and mobile is important. I could easily add to this list.

But what is still not receiving its due share of necessary attention is the hard work of writing solid business logic. And testing it. And designing the security for it. And maintaining it. And gathering the requirements for it. And writing good documentation for it. And spending hours upon hours of one's own time on professional development in learning, yes, dry-as-dust non-sexy non-buzzword technologies.

The fact is that all the hyped technologies are relevant to well under ten percent of all software developers. Ten percent is probably a very optimistic estimate; I would be surprised if more than 1 in 50 software developers is heavily involved in NoSQL or real cloud work or Big Data or the latest sexiest analytics work or in advanced mobile. The huge majority of us do the grunt work, and most of that is not done very well.

But I think we all know where we are headed. We used to have mediocre desktop software. We moved from there to mediocre web applications. Mobile has now provided us with thousands of mediocre portable apps we can put on our smartphones and tablets. And the cloud offers us the opportunity to host mediocre applications somewhere else other than on our own servers.

Spend some time down the road thinking about software engineering. Real honest-to-God Fred Brooks, Don Knuth, Steve McConnell, Trygve Reenskaug software engineering. Ask yourself when was the last time you participated in writing real applications that were solid and actually made a client happy. And ask yourself how you can make that happen more often.

Monday, January 23, 2012

Where Good Interview Questions Come From

This is a perennial topic. What are good questions to ask in a technical interview? How much, if any, should you rely on a practical test, perhaps a take-home, to substitute for whiteboard or verbal answers to questions? How detailed should questions be? How do you tailor your questions to the experience level required of the advertised position?

Regardless of how you or I conduct technical interviews, we probably do ask some questions. And I have chosen a few topics that I think deserve interview questions.

Static vs Dynamic Typing, Strong Typing vs Weak Typing, Type Safety

This is important stuff. Your first objective, in asking questions related to these topics, is to detect any "deer in the headlights" expressions. It is absolutely OK for a candidate, unless they are interviewing for a senior position, to need some time to gather their thoughts about the subject of typing and type systems, and to visibly formulate answers. It is not OK for a candidate, at any level, to clearly not even know what you are talking about. Not with this stuff.

Briefly, key concepts that you hope the interviewee knows a bit about include:

Static typing: compile-time type checking;
Dynamic typing: run-time type checking; variables do not have types;
Strong typing, Weak typing, Type safety: it is fair to discuss these together. Type safety is about not permitting typed conversions or operations that lead to undefined or erroneous program conditions. Strong typing means placing restrictions on operations that intermix operands of different data types; weak typing allows implicit conversions or casts.

A free-wheeling discussion should allow the interviewee, perhaps to their own surprise, to advance the notion that a weakly typed language could in fact be type safe.
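
If the discussion moves to a whiteboard, a few lines of Java are enough to anchor the vocabulary. This is my own illustration, and the labels are a spectrum rather than a binary: Java is statically and (mostly) strongly typed, yet it still allows a few implicit conversions.

    public class TypingDemo {
        public static void main(String[] args) {
            // Static typing: the compiler checks types before the program runs.
            // int n = "forty-two";          // rejected at compile time

            // Type safety: a cast can defeat the static check, but the JVM still
            // refuses to perform an ill-typed operation - it fails, safely.
            Object o = "hello";
            try {
                Integer i = (Integer) o;     // compiles, but...
            } catch (ClassCastException e) {
                System.out.println("caught: " + e);   // ...no undefined behaviour
            }

            // A taste of weak typing: implicit conversions that intermix types.
            String s = "total: " + 42;       // int quietly becomes part of a String
            double d = 3 + 0.14;             // int quietly widened to double
            System.out.println(s + ", " + d);
        }
    }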

Threads, Concurrency and Parallelism

How much we expect from the interviewee here depends on their seniority. Certainly even the most junior programmer should know this much: that a thread is the smallest independent unit of work that can be scheduled by an OS. Generally a thread will live inside a process, and share the resources of this process with other threads.

The candidate should be aware, specifically, that if threads are used to implement concurrency, that the main thing to be concerned about is shared, mutable state. There is no need to synchronize threads if there is no shared, mutable state.

A more senior candidate should have something to say about the distinction between concurrent and parallel programs. Concurrency is about simultaneity - in other words, multiple and possibly related operations taking place at the same time. A web server that handles many different requests is an example of concurrency. Parallelism is about problem-splitting. Computing a histogram for an image by first computing the histograms for each quarter of the image (and computing those histograms by getting the histograms for their quarters, and so forth) is an example of parallelism. Concurrency is about handling many problems at the same time; a parallel program handles a single problem.
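
If the candidate claims Java, the problem-splitting idea can be made concrete with the fork/join framework that arrived in Java 7. This is a sketch of my own; the task name and threshold are invented:

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Parallelism as problem-splitting: each task computes the histogram of its
    // own slice of the image, and the partial results are merged on the way up.
    class HistogramTask extends RecursiveTask<int[]> {
        private static final int THRESHOLD = 10000;
        private final int[] pixels;   // grey levels 0-255
        private final int from, to;

        HistogramTask(int[] pixels, int from, int to) {
            this.pixels = pixels;
            this.from = from;
            this.to = to;
        }

        @Override
        protected int[] compute() {
            if (to - from <= THRESHOLD) {              // small enough: just do it
                int[] histo = new int[256];
                for (int i = from; i < to; i++) histo[pixels[i]]++;
                return histo;
            }
            int mid = (from + to) / 2;                 // otherwise split in two
            HistogramTask left = new HistogramTask(pixels, from, mid);
            HistogramTask right = new HistogramTask(pixels, mid, to);
            left.fork();                               // left half runs asynchronously
            int[] r = right.compute();                 // right half runs here
            int[] l = left.join();                     // wait for the left half
            for (int i = 0; i < 256; i++) l[i] += r[i];
            return l;
        }
        // Usage: new ForkJoinPool().invoke(new HistogramTask(pixels, 0, pixels.length));
    }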

Any candidate who professes some knowledge of concurrent programming in a given language should be asked to describe shared state in that language. For example, a Java programmer should know that local variables are not shared, but that instance variables are.
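
A minimal Java sketch of the distinction (the class is invented):

    // Instance (and static) fields are shared between threads; local variables
    // live on each thread's own stack and are not.
    class HitCounter {
        private long hits = 0;                 // shared, mutable state

        // hits++ is a read-modify-write, not an atomic step, so unsynchronized
        // concurrent calls can lose updates. The lock guards the shared field.
        synchronized void record() {
            hits++;
        }

        synchronized long total() {
            return hits;
        }

        void doWork() {
            int scratch = 42;                  // local: confined to this thread,
            scratch = scratch + 1;             // no synchronization needed
        }
    }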

A senior interviewee ought also to know something about various types of concurrency. These different forms vary in how multiple threads of execution communicate. Shared-state (e.g. Java), message-passing (e.g. Erlang), and declarative/dataflow concurrency are major types.
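
You do not have to leave Java to give a taste of the message-passing style, either: confine all communication to a queue of messages instead of shared fields. A sketch of my own:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // The two threads never touch shared mutable state directly; they hand
    // messages across a queue, in the spirit of an Erlang mailbox.
    public class MessagePassingDemo {
        public static void main(String[] args) throws InterruptedException {
            final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();

            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    try {
                        String msg;
                        while (!(msg = mailbox.take()).equals("STOP")) {
                            System.out.println("got: " + msg);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            consumer.start();

            mailbox.put("hello");              // the producer just posts messages
            mailbox.put("world");
            mailbox.put("STOP");               // sentinel telling the consumer to quit
            consumer.join();
        }
    }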

Here you are also looking for "deer in the headlights" expressions. It is OK for a junior, maybe even an intermediate, maybe even a senior, not to know every synchronization mechanism under the sun, especially depending upon their language exposure. But it is not OK for any of them to not know why we synchronize in the first place.

Data Structures and Algorithms

This subject can get pretty religious. I think I sort of land in the middle. I am the guy who believes it is quite unnecessary to know about the guts of a red-black tree, or to know the details of a dozen different sorting algorithms. On the other hand, I also believe that you should know that a red-black tree is a self-balancing binary search tree. On the subject of sorting I personally would be content if a candidate knew the names of half a dozen (or a dozen) sorting algorithms, knew how to use the default sorting algorithm(s) in their working languages, and otherwise seemed to know how to do research.

I would expect pretty much any interviewee to be able to explain what a map (associative array, dictionary) is good for; I would not care if they did not know that efficient implementations often use hash tables or self-balancing binary search trees (see red-black tree above). You tell me how that knowledge is going to help a typical application programmer...I am sure I do not know the answer to that.

I do expect a programmer to know what their working languages supply. I have little tolerance for any programmer who cannot be bothered to investigate the library APIs for the languages that they use. This is a sin.

Pose questions that make an interviewee wrestle with the choice of what collection type to use to solve a problem.
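
For example (the problem is invented, but it is the shape I have in mind): count word frequencies, then ask why a HashMap is the natural first choice, what changes if the keys must come back sorted (a TreeMap), and what changes again if insertion order matters (a LinkedHashMap).

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class WordCount {
        // HashMap: constant-time expected lookups, no ordering guarantees.
        // Swap in TreeMap for sorted keys, LinkedHashMap for insertion order.
        static Map<String, Integer> count(List<String> words) {
            Map<String, Integer> counts = new HashMap<String, Integer>();
            for (String w : words) {
                Integer c = counts.get(w);
                counts.put(w, c == null ? 1 : c + 1);
            }
            return counts;
        }
    }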

Summary

These are examples. I hope they help.