Tuesday, January 31, 2012

The Cloud

I have entitled this post "The Cloud", but it is really not just about the cloud. It is more about how we seem to spend so much of our time inventing new ways to design, create and field substandard applications. People who actually have to write real applications for real businesses - or, more often, maintain or interoperate with applications that other people wrote - realize that in 2012 we are no further ahead in writing quality software that does what the customer wants than we were decades ago, before the Internet even existed.

What is the point in enthusing about flavours of cloud, like IaaS or PaaS or SaaS, or pushing REST over SOAP, or using the latest and greatest SOA technology to redo attempts at EAI or B2B solutions, or jumping on the latest language or framework, when the core business logic - the code that actually does something useful - is still inferior in quality?

It is not sexy or exciting to work on actual business logic. Create a project to work on NoSQL, or Big Data business analytics, and you will have all sorts of people who want to work on it. Throw in the latest languages and frameworks and you will have no problem getting people to work on the application either...the technical trappings, that is. But work on the actual business logic? Requirements, design, test plans, coding up the boilerplate, doing the hard work on the domain objects, designing the RDBMS schema? God forbid.

The IT industry needs people who spend years - many years - amassing expertise with dry-as-dust applications. The kinds of technologies that are niche and only meaningful to a handful of people. The industry needs software developers who spend so many years in an application domain that they know more about it than the majority of business practitioners. The industry needs software architects, analysts, designers, coders and testers who can go through exhausting but necessary rituals over and over again, for each new business problem, and deliver quality applications.

The key here is applications. The latest Big Buzz always seems to be about which languages we will write the applications in, which frameworks we will use, how we will deploy them, how applications will talk to each other...but much less of the conversation is about the applications themselves. Sure, there is also always a buzz around the latest methodologies - iterative, agile, lean, unit testing, etc. - but it is hard not to get the feeling that the application successes are due to high-quality teams that could have done a good job using waterfall if they had to, and not to the methodologies.

Do not get me wrong. I love the new languages that appear every year. I love the attempts in existing languages to improve and stay relevant. Most of the new development methodologies have a lot to offer. There is much good stuff in SOA and the cloud technologies and NoSQL. The social space is important, and mobile is important. I could easily add to this list.

But what is still not receiving its due share of necessary attention is the hard work of writing solid business logic. And testing it. And designing the security for it. And maintaining it. And gathering the requirements for it. And writing good documentation for it. And spending hours upon hours of one's own time on professional development in learning, yes, dry-as-dust non-sexy non-buzzword technologies.

The fact is that all the hyped technologies are relevant to well under ten percent of all software developers. Ten percent is probably a very optimistic estimate; I would be surprised if more than 1 in 50 software developers is heavily involved in NoSQL or real cloud work or Big Data or the latest sexiest analytics work or in advanced mobile. The huge majority of us do the grunt work, and most of that is not done very well.

But I think we all know where we are headed. We used to have mediocre desktop software. We moved from there to mediocre web applications. Mobile has now provided us with thousands of mediocre portable apps we can put on our smartphones and tablets. And the cloud offers us the opportunity to host mediocre applications somewhere else other than on our own servers.

Spend some time down the road thinking about software engineering. Real honest-to-God Fred Brooks, Don Knuth, Steve McConnell, Trygve Reenskaug software engineering. Ask yourself when was the last time you participated in writing real applications that were solid and actually made a client happy. And ask yourself how you can make that happen more often.

Monday, January 23, 2012

Where Good Interview Questions Come From

This is a perennial topic. What are good questions to ask in a technical interview? How much, if any, should you rely on a practical test, perhaps a take-home, to substitute for whiteboard or verbal answers to questions? How detailed should questions be? How do you tailor your questions to the experience level required of the advertised position?

Regardless of how you or I conduct our technical interviews, we probably do ask some questions. And I have chosen a few topics that I think deserve interview questions.

Static vs Dynamic Typing, Strong Typing vs Weak Typing, Type Safety

This is important stuff. Your first objective, in asking questions related to these topics, is to detect any "deer in the headlights" expressions. It is absolutely OK for a candidate, unless they are interviewing for a senior position, to need some time to gather their thoughts about the subject of typing and type systems, and to visibly formulate answers. It is not OK for a candidate, at any level, to clearly not even know what you are talking about. Not with this stuff.

Short, key concepts that you hope the interviewee knows a bit about include:

Static typing: compile-time type checking;
Dynamic typing: run-time type checking; values carry type information, but variables themselves are not bound to a fixed type;
Strong typing, Weak typing, Type safety: it is fair to discuss these together. Type safety is about not permitting typed conversions or operations that lead to undefined or erroneous program conditions. Strong typing means placing restrictions on operations that intermix operands of different data types; weak typing allows implicit conversions or casts.

A free-wheeling discussion should allow the interviewee, perhaps to their own surprise, to advance the notion that a weakly typed language could in fact be type safe.
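
As a concrete illustration, here is a minimal Java sketch (the class name is hypothetical) of how these ideas show up in practice: the compiler rejects ill-typed code outright, and an unsafe cast that slips past it fails with a well-defined run-time error rather than producing undefined behaviour.

```java
import java.util.ArrayList;
import java.util.List;

public class TypingDemo {
    public static void main(String[] args) {
        // Static typing: the compiler rejects ill-typed code before it runs.
        // String s = 42;            // does not compile in Java

        // Type safety: even when a bad conversion is attempted at run time,
        // the JVM fails with a well-defined error instead of undefined behaviour.
        Object o = "hello";
        try {
            Integer n = (Integer) o;  // throws ClassCastException
            System.out.println(n);
        } catch (ClassCastException e) {
            System.out.println("Rejected unsafe cast: " + e);
        }

        // Generics push more of this checking back to compile time.
        List<String> names = new ArrayList<>();
        names.add("Ada");
        // names.add(42);            // does not compile
        System.out.println(names);
    }
}
```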

Threads, Concurrency and Parallelism

How much we expect from the interviewee here depends on their seniority. Certainly even the most junior programmer should know this much: that a thread is the smallest independent unit of execution that can be scheduled by an OS. Generally a thread will live inside a process, and share the resources of that process with the process's other threads.

The candidate should be aware, specifically, that if threads are used to implement concurrency, the main thing to be concerned about is shared, mutable state. There is no need to synchronize threads if there is no shared, mutable state.
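
To make that point concrete, here is a small, hypothetical Java sketch: an unsynchronized counter loses updates under contention, while an AtomicInteger protecting the same state does not.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCounter {
    private int unsafeCount = 0;                        // shared, mutable state
    private final AtomicInteger safeCount = new AtomicInteger();

    void increment() {
        unsafeCount++;                 // read-modify-write: lost updates possible
        safeCount.incrementAndGet();   // atomic: no lost updates
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter c = new SharedCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    c.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // unsafeCount is very likely less than 400000; safeCount is exactly 400000.
        System.out.println("unsafe: " + c.unsafeCount + ", safe: " + c.safeCount.get());
    }
}
```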

A more senior candidate should have something to say about the distinction between concurrent and parallel programs. Concurrency is about simultaneity - in other words, multiple and possibly related operations taking place at the same time. A web server that handles many different requests is an example of concurrency. Parallelism is about problem-splitting. Computing a histogram for an image by first computing the histograms for each quarter of the image (and computing those histograms by getting the histograms for their quarters, and so forth) is an example of parallelism. Concurrency is about handling many problems at the same time; parallelism is about splitting a single problem into pieces that can be worked on at the same time.
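
A rough Java sketch of that histogram example, using the standard fork/join framework (the class name and threshold are illustrative): one problem is split into halves, the halves are computed at the same time, and the partial results are merged.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class HistogramTask extends RecursiveTask<int[]> {
    private static final int THRESHOLD = 10_000;
    private final int[] pixels;   // pixel values assumed to be in 0..255
    private final int lo, hi;

    HistogramTask(int[] pixels, int lo, int hi) {
        this.pixels = pixels;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected int[] compute() {
        if (hi - lo <= THRESHOLD) {
            // Small enough: count directly.
            int[] counts = new int[256];
            for (int i = lo; i < hi; i++) {
                counts[pixels[i]]++;
            }
            return counts;
        }
        // Otherwise split the problem and solve the halves in parallel.
        int mid = (lo + hi) / 2;
        HistogramTask left = new HistogramTask(pixels, lo, mid);
        HistogramTask right = new HistogramTask(pixels, mid, hi);
        left.fork();                        // run the left half asynchronously
        int[] rightCounts = right.compute();
        int[] leftCounts = left.join();
        for (int i = 0; i < 256; i++) {     // merge the partial histograms
            leftCounts[i] += rightCounts[i];
        }
        return leftCounts;
    }

    public static void main(String[] args) {
        int[] pixels = new int[1_000_000];
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = i % 256;
        }
        int[] histogram = new ForkJoinPool().invoke(new HistogramTask(pixels, 0, pixels.length));
        System.out.println("count of value 0: " + histogram[0]);
    }
}
```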

Any candidate who professes some knowledge of concurrent programming in a given language should be asked to describe shared state in that language. For example, a Java programmer should know that local variables are not shared, but that instance variables are.
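
A tiny, illustrative sketch of exactly that point: the local counter lives on each thread's stack and always reaches its expected value, while the instance field belongs to the one shared object and is subject to races.

```java
public class SharingDemo implements Runnable {
    private int sharedField = 0;   // instance variable: one copy, visible to every thread using this object

    @Override
    public void run() {
        int localCount = 0;        // local variable: lives on this thread's stack, never shared
        for (int i = 0; i < 1000; i++) {
            localCount++;          // always ends at 1000
            sharedField++;         // racy: the two threads interleave on the same field
        }
        System.out.println(Thread.currentThread().getName()
                + " local=" + localCount + " shared=" + sharedField);
    }

    public static void main(String[] args) throws InterruptedException {
        SharingDemo demo = new SharingDemo();   // one object, shared by both threads
        Thread a = new Thread(demo, "a");
        Thread b = new Thread(demo, "b");
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```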

A senior interviewee ought also to know something about various types of concurrency. These different forms vary in how multiple threads of execution communicate. Shared-state (e.g. Java), message-passing (e.g. Erlang), and declarative/dataflow concurrency are major types.
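
One way to make the message-passing style concrete for a Java-oriented candidate (a rough approximation, not idiomatic Erlang) is a pair of threads that never touch each other's state and communicate only by handing messages over a queue:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        // The queue acts as a mailbox; no other state is shared.
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String msg = mailbox.take();   // blocks until a message arrives
                    if (msg.equals("stop")) {
                        return;
                    }
                    System.out.println("received: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        mailbox.put("hello");
        mailbox.put("world");
        mailbox.put("stop");
        consumer.join();
    }
}
```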

Here you are also looking for "deer in the headlights" expressions. It is OK for a junior, maybe even an intermediate, maybe even a senior, not to know every synchronization mechanism under the sun, especially depending upon their language exposure. But it is not OK for any of them to not know why we synchronize in the first place.

Data Structures and Algorithms

This subject can get pretty religious. I think I sort of land in the middle. I am the guy who believes it is quite unnecessary to know about the guts of a red-black tree, or to know the details of a dozen different sorting algorithms. On the other hand, I also believe that you should know that a red-black tree is a self-balancing binary search tree. On the subject of sorting I personally would be content if a candidate knew the names of half a dozen (or a dozen) sorting algorithms, knew how to use the default sorting algorithm(s) in their working languages, and otherwise seemed to know how to do research.

I would expect pretty much any interviewee to be able to explain what a map (associative array, dictionary) is good for; I would not care if they did not know that efficient implementations often use hash tables or self-balancing binary search trees (see red-black tree above). You tell me how that knowledge is going to help a typical application programmer...I am sure I do not know the answer to that.

I do expect a programmer to know what their working languages supply. I have little tolerance for any programmer who cannot be bothered to investigate the library APIs for the languages that they use. This is a sin.
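
By way of illustration, a small Java sketch (names hypothetical) of the kind of library fluency I mean: the standard sort, a hash-table-backed Map, and a TreeMap, which happens to be a red-black tree under the hood.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class LibraryDemo {
    public static void main(String[] args) {
        // Use the library's sort rather than hand-rolling one.
        List<String> names = new ArrayList<>(Arrays.asList("mainframe", "cobol", "ledger"));
        names.sort(Comparator.naturalOrder());
        System.out.println(names);

        // Map (associative array, dictionary): look things up by key.
        Map<String, Integer> wordCounts = new HashMap<>();   // hash table backed
        wordCounts.merge("invoice", 1, Integer::sum);
        wordCounts.merge("invoice", 1, Integer::sum);
        wordCounts.merge("ledger", 1, Integer::sum);
        System.out.println(wordCounts.get("invoice"));       // 2

        // TreeMap offers the same Map operations, but keeps keys sorted
        // and supports range-style queries.
        TreeMap<String, Integer> sorted = new TreeMap<>(wordCounts);
        System.out.println(sorted.firstKey());
    }
}
```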

Pose questions that make an interviewee wrestle with the choice of what collection type to use to solve a problem.

Summary

These are examples. I hope they help.