Wednesday, September 21, 2016

Delphi Features I Avoid Using And Believe Need to be Gradually Eliminated from Codebases

My guest post from L.V. didn't seem to have enough Delphi specifics for one commenter, so I thought about it and realized that what L.V. is talking about is Practices (stuff people do), not features.

But there are features in Delphi that I think are over-used, used inappropriately or indiscriminately, or that should almost never be used, since better alternatives almost always exist. Time for that list. No humorous guest-posting persona for this post, sorry; just my straight opinions.

1. WITH statement

It's hardly surprising that this one is on the list, as it's one of the most controversial features in the Delphi language. I believe it is almost always better to use a local variable with a short name, creating an unambiguous and readable statement, instead of using a WITH. A double WITH is much more confusing than a single WITH, but all WITH statements in application-layer code should be eliminated over time, as you perform other bug-fix and feature work on a codebase.
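A minimal sketch of the problem, using hypothetical TCustomer and TOrder classes (not from any real codebase):

```pascal
// With a double WITH, the reader cannot tell which object owns each
// property without checking both class declarations. If both classes
// declare a Name property, the compiler silently picks one.
with Customer, Order do
begin
  Name := 'Acme';    // Customer.Name? Order.Name? The compiler knows; you don't.
  Total := 100.0;
end;

// Short-named locals make every statement unambiguous:
var
  C: TCustomer;
  O: TOrder;
begin
  C := Customer;
  O := Order;
  C.Name := 'Acme';
  O.Total := 100.0;
end;
```

The local-variable version costs two declarations and reads instantly, with no scope ambiguity for the next maintainer.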

2. DFM Inheritance

I don't mind having TApplicationForm inherit non-visually from a TApplicationBaseForm that doesn't have a DFM associated with it, but I find that maintenance and ongoing development of forms making use of DFM inheritance is problematic. There can be crazy issues, and it's very difficult to make changes to an upstream form and understand all the potential problems downstream. This is especially true as a set of form inheritances grows larger. I have even forced non-visual inheritance using an interposer class, and found that IDE stability and the ease of working with the codebase improved.

3. Frames

The problems with frames overlap with those of DFM-inherited forms, but frames have the additional troubling property of being hard to make visually fit and look good. You can't really know whether any change to a control's original position in the base frame will be overridden downstream or not. Trying to move anything around in a frame is an exercise in frustration. I prefer to compose parts of complex UIs at runtime instead of at design time.

4. Visual Binding

I have had nothing but trouble with Visual Binding. It seems that putting complex webs of things into a design-time environment is not a net win for readability, clarity, and maintainability. I would rather read completely readable code and not deal with bindings. There are probably some small uses for visual binding, but I have not found them; my philosophy is to avoid it. It's a cool feature, when it works. But the end result is as much fun as a mega-form.

5. Untyped Parameters in User Procedures or Functions

The old way of handling "void *" parameters (if you know C) in Pascal is the untyped var syntax; modern Pascal should use PByte instead, which I consider a much more modern way of working. I believe the two are more or less equivalent in capabilities, and that Delphi still contains untyped var params for historical compatibility reasons. Unless I'm writing a TStream descendant and must override a method that already has this signature, I prefer not to introduce any more anachronisms like that into my code.
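To illustrate the two styles side by side (a sketch; the procedure names are made up):

```pascal
// Legacy style: untyped var parameter. Anything can be passed, and the
// compiler cannot check the call site at all.
procedure ZeroLegacy(var Buffer; Count: Integer);
begin
  FillChar(Buffer, Count, 0);
end;

// Preferred style: an explicit PByte parameter makes the "raw bytes"
// contract visible right in the signature.
procedure ZeroModern(Buffer: PByte; Count: Integer);
begin
  FillChar(Buffer^, Count, 0);
end;
```

The capabilities are equivalent, but the PByte version forces the caller to acknowledge (with an @ or a cast) that raw memory is being handed over, instead of letting any variable of any type slide through silently.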

6. Classic Pascal File IO Procedures

Streams should have replaced the use of AssignFile, Reset, Rewrite, and CloseFile long ago.
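For example (a sketch; the file name is invented), reading a text file the classic way versus with a stream-based class from System.Classes:

```pascal
// Classic Pascal I/O, which I would retire:
var
  F: TextFile;
  Line: string;
begin
  AssignFile(F, 'settings.txt');
  Reset(F);
  try
    while not Eof(F) do
      ReadLn(F, Line);
  finally
    CloseFile(F);
  end;
end;

// The stream-based replacement, with explicit encoding:
var
  Reader: TStreamReader;
  Line: string;
begin
  Reader := TStreamReader.Create('settings.txt', TEncoding.UTF8);
  try
    while not Reader.EndOfStream do
      Line := Reader.ReadLine;
  finally
    Reader.Free;
  end;
end;
```

The stream version composes with anything else that is a TStream (memory, network, compression wrappers), and it states its text encoding instead of leaving it to the runtime's defaults.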

7. Unnecessary Use of Pointer Types and Operations in Application Layer Code

In low-level component code, with unit tests, pointer types and operations may occasionally be justified, for example to implement your own linked list of value types that are not already implicitly by-reference. But in the application-layer (form, data module) code where most Delphi shops spend 90% of their time, introducing raw pointer operations is almost always going to make me require a change, if I'm doing a code review. Delphi is a compiled, "somewhat strongly typed" language, and I'm happiest with application-layer code that does not peel away the safety that the type system gives me.

8. Use of ShortString Types with Length Delimiters, in or out of Records

Perhaps in the 1980s, a Pascal typed file of a record type, with packed records, made sense. These days, it's a defect in your code. The problem is that once such a pattern is in your code, it's very difficult to remove. So while an existing legacy application may contain a lot of code like that, I believe a "no more" rule has to be set up, and module by module, the unsafe and unportable stuff has to be retired, replaced, or updated. The amount of pain this kind of thing causes in real codebases I have seen that used it is hard to overstate.
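The pattern I mean looks something like this (an invented example, not from any particular codebase):

```pascal
type
  // ShortString fields with length delimiters inside a packed record,
  // written byte-for-byte to a typed file. ANSI-only, and the on-disk
  // format is welded to the exact in-memory layout: unsafe, unportable,
  // and nearly impossible to evolve.
  TLegacyCustomer = packed record
    Name: string[30];    // ShortString: 1 length byte + 30 ANSI chars
    Balance: Double;
  end;

var
  F: file of TLegacyCustomer;
  Rec: TLegacyCustomer;
begin
  AssignFile(F, 'customers.dat');
  Reset(F);
  Read(F, Rec);   // the file format IS the record layout
  CloseFile(F);
end;
```

Change the record, grow a field, or port to Unicode, and every file ever written in this format becomes unreadable, which is exactly why the pattern is so hard to dig out of a legacy codebase.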

9. Use of Assignable (Non Constant) Consts

The {$J+} compiler directive in Delphi allows typed constants to be overwritten at runtime. It should never be used.
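Here is what the directive permits, and why it's dangerous:

```pascal
{$J+} // writeable typed constants enabled: don't do this
const
  RetryCount: Integer = 3;

procedure Sneaky;
begin
  // Compiles cleanly under {$J+}. A "constant" has quietly become a
  // hidden global variable, mutated far from its declaration, and
  // every other reader of RetryCount is now wrong about its value.
  RetryCount := 99;
end;
{$J-}
```

If you need a mutable value, declare a variable; if you need a constant, let the compiler actually enforce it.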




Tuesday, September 13, 2016

Delphi Worst Practices, The Path to the Dark Side

Guest Post from L.V.




If you want to do the worst possible job at being a Delphi developer, and go from merely weak, to positively devastating, and you want to give your employer the greatest chance of failing completely, making your users hate your product, and going out of business, while exacting the maximum amount of pain and suffering on all around you, if you wish everyone to fear your approaching footsteps, and to be powerless to cross you, here are some startlingly effective worst practices to consider.

Many require very little effort from you, other than occasionally putting your foot down and insisting that certain things are sacred and can't be changed, or that everything is bad and must be changed immediately, no matter what the cost.   It is important that the team never sense that they have the collective ability to go around you, and reinstate optimizations that undo your careful work to make things worse.  A strict hierarchical authoritarian power structure is key to maintaining steady progress towards pessimization.

No matter how bad things are, you can always find a way to make things a little worse.   I can't claim to have invented any of these, and I believe all of these are extremely popular techniques in Delphi shops around the world, and so it seems there is great interest in doing as bad a job as possible.  If I can contribute something to the art, it will be in synthesizing all the techniques of all the pessimization masters who have come before.

Now that you have considered whether you want to go there or not, I will share my secrets.
Here is the path that leads to the dark side...

1. Ignore Lots of Exception Types in the Delphi IDE

The more exceptions you ignore, the less aware of your actual runtime behavior you will be. Encourage other developers to ignore exceptions. Suppress the desire to know what is going on, and become as detached as possible from reality. The optimum practice is to ignore only EAbort and exceptions similar to it, like the Indy disconnect exception; so the pessimum practice is to disable break-on-exception forever, or to add a very large number of classes to the Delphi exception-ignore list. Also make very sure that you ignore access violations.

2. Raise lots of Exceptions, even for things which didn't need an exception.

This one is great, because you will annoy all developers and train them to ignore certain exception types. Old code that uses StrToInt where it could have used StrToIntDef will eventually make developers ignore all manner of exceptions.
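For contrast, the non-annoying alternatives, using a hypothetical Edit1 input control:

```pascal
// Pessimal: raises EConvertError for the perfectly ordinary case of a
// user typing something that isn't a number.
Value := StrToInt(Edit1.Text);

// Better, when a fallback value is acceptable:
Value := StrToIntDef(Edit1.Text, 0);

// Better still, when you need to know whether parsing succeeded:
if not TryStrToInt(Edit1.Text, Value) then
  ShowMessage('Please enter a whole number.');
```

Exceptions should mark genuinely exceptional conditions; a user typo is an expected input, not a catastrophe.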

3. Try...Ignore

This worst-practice (or anti-pattern) can cause you more grief than any other worst practice:

   try
      MaybeDoAllOrPartOfSomeThing;
   except
   end;

To be maximally evil, don't even write a comment. Make every reader guess why you felt that not even logging the exception, and not even restricting your handler to a specific, sane type of thing to catch and ignore (like EAbort), was acceptable. Make them wonder what kind of evil things lurk below, and how much memory corruption is being silently hidden. Dare them to remove this kludge of doom that you have imposed.
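If you absolutely must swallow something, the non-evil version restricts the handler and logs everything else. A sketch, where Log is a stand-in for whatever logging facility your codebase uses:

```pascal
try
  MaybeDoAllOrPartOfSomeThing;
except
  on E: EAbort do
    ; // deliberate: EAbort is Delphi's "silent" exception, safe to swallow
  on E: Exception do
  begin
    Log('MaybeDoAllOrPartOfSomeThing failed: ' + E.ClassName + ': ' + E.Message);
    raise; // anything unexpected still propagates
  end;
end;
```

One comment, one restricted catch, and one log line turn a kludge of doom into a documented, auditable decision.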

4. Make your debug builds unable to ever run with Range Checking or Overflow Checking on, even if a developer wants to use them for a while.


While it can be a best practice to ship your release builds with Range Checking and Overflow Checking off, because you can't predict or prevent the effects on your customer of some relatively benign thing blowing up in release, it can be a remarkably effective worst practice to build a giant codebase where you don't bother to explicitly turn OFF range checking, overflow checking, and I/O checking in the places where they are KNOWN to generate false positives. In codebases where I can turn on Range Checking and Overflow Checking, in particular in my developer-machine Debug builds, I often find my effectiveness in finding bugs is multiplied. Those who want to pessimize their entire team's work will want to put such powerful tools, which could be used for good, out of reach.

Note that turning on Range Checking and Overflow Checking in Release builds could itself be a form of pessimization, because it's hard to guarantee they won't have unknown effects. Most of all, changing these defaults to anything other than what you've always had them at injects a massive amount of chaos, and good developers will often state that this should be avoided in release builds. You might be able to inject this kind of random evil chaos without anyone noticing if, for example, you can arrange for builds to be done on your machine instead of on a build server.

5.  Permit Privileged Behavior By Developers with God-Like Egos

Unlike self-organized Agile teams, where rules apply the same to everybody, make at least one person on your team a God Like Developer, who can do things that other developers are not allowed to do.   Ugly pointer hackery, and evil kludges are okay, if you're this guy, and totally unacceptable if it's anybody else.   To really fully pessimize your team and your codebase, let this guy randomly refactor anything he wants to without asking anybody else's permission.  These God-Like developers can review other people's code, but don't need their code reviewed, because they never make mistakes.


6. Don't Document Anything

This is one of the easiest ways to pessimize; it requires basically no effort from you, and all things having to do with software teams and processes will generally tend to rot on their own. It is consequently one of the most popular forms of pessimization. Sometimes you will need to quote the Agile Manifesto, or people will accuse you of having evil motives. Quoting the Agile Manifesto will get these people to shut up.

7. Argue About Indentation

By now things are bad, and significant developer attention will be focused on improving things, undoing your careful work of Pessimization. Instead of letting the team focus on fixing core engineering mistakes and technical debt, redirect the team to consider more carefully the effects of one indentation style over another, and various formatting issues, or comment block styles.

8. Magical Unicorn Build Process, and the Voldemort Build Process

I call these special non-reproducible builds "Magical Unicorn Builds" because it is entirely possible that the one PC where the builds occur is the only place in the universe where the code, as it lives now in version control, will actually build. The secrets and accidents of the entire project's history live as non-annotated, non-recorded bits of state on that PC: contents of the Registry, and contents of various folders holding component source code that is not kept in source control and will naturally tend to differ slightly from machine to machine. There will be no way to assure that a known and controlled set of inputs created a traceable end product. Lists of the tools required for the product to build will not exist; we don't need no stinking documentation. For bonus pessimization points, the build should not be done via a build.cmd batch script or a CI tool like FinalBuilder, but should instead require a bunch of arcane and undocumented actions performed manually by the High Priest of the Dark Art of Building the Product.

In such a shop, we may in fact get all the way to the Voldemort Build. The Voldemort Build is a secret known only to one developer, whom we will call Voldemort. Voldemort knows arcane and terrible things that would make you weep, which must never be written down, or shared at all. Only Voldemort knows the ultimate price of his own power, and he is willing to take any action to protect his own interests.

If you do all of these things, you may be very near being as bad as it is possible to be, and may become a Dark Lord some day.  It will take some hard work, but I'm sure you can do it. Go get 'em, tiger.

Please share your own worst practices in the comment box.  Together, we can rule the Galaxy.




Tuesday, August 30, 2016

Nexus Quality Suite: Why Profiling and Checking Your Application for Leaks is Essential (Part 1 of a review of Nexus Quality Suite 1.60)

I've been using and experimenting with Nexus Quality Suite on and off for the past nine months, and I've been meaning to write up a blog post about it. The trouble with reviewing this software suite is that it contains so much stuff that I can only skim the surface, so I'll present it in small, task-oriented mini-reviews. Initially I ran the tools in this suite on an extremely large Delphi system. While the suite is definitely useful for very large systems, I found it difficult to explain that usefulness using that large application.

So I've decided to keep my real world focus in reviewing this tool, but I'm picking a bit of my own personal code to profile and test.  I'm going to run Nexus Quality Suite's tools against a little application I first wrote in about 1996, that is in my toolkit of "system admin and developer-operations" tools.   Here's what it looks like:


It can ping any number of hosts, from one to hundreds. When any of those hosts goes offline (does not respond to ICMP ping), or the DNS resolver stops resolving, this little tool can beep (for in-office monitoring) or send an email (which can alert me even when I'm out of the office). But this tool has always been slow, slow, slow. Since I add configurable extra sleep time between its runs, I've never worried about its performance, but I recently had a use for this tool again, so I dusted off the source code, added a few little things, and recompiled it in Delphi 10.1 Berlin. I even found a missed "Unicode port" bug where I had forced a cast to AnsiString over a UnicodeString in a way that actually resulted in sending Unicode bytes into an ANSI Windows API. Bad Warren! No cookie for you! My only excuse is that I wrote the code in question in 1996, in Delphi 2, and simply overlooked it when porting this code to Unicode Delphi. Now back to my review...

Anyway, back to the performance profiling tools. The latest version, Nexus Quality Suite 1.60, supports both 32-bit and 64-bit programs. I would recommend profiling your 32-bit builds where you can, as they are probably easier to profile, but for those cases where you really need to profile 64-bit code, now you can. The NQS installer installs a group of items in your Tools menu. Be aware that certain Delphi versions have a bug, which has a workaround available, and the installer for Nexus Quality Suite actually warns you about it. That is good customer service right there. Good job, Nexus, and thanks, Andreas Hausladen.

Here's the installer warning. I have XE4, XE8, and 10.1 Berlin on my computer right now, and this is what I saw:


After installation, here are the menu items. There are too many tools to cover them all in one review, so I'm going to quickly show one application run through two of the tools.


The first tool in this review is, I think, brand new. The Block Timer is a new profiler based on the other profiler tools, but with some new capabilities. I asked support and was told that more documentation is coming soon. The Block Timer joins its partner, the classic Method Timer, in providing some pretty great time-based profiling capabilities for your Delphi applications. Here is a summary of the features of the new Block Timer compared to the existing Method Timer and Line Timer profilers:


1. The block timer is thread aware and can break information down into thread-by-thread values, whereas the other profilers combine times across all threads.

2.  The block timer can accurately report information about time spent in recursive methods.

3. All that extra bookkeeping makes the overhead of running the profiler a bit higher.

4. There is no dynamic profiling in this one. You lose the trigger feature from the Line Timer (LT) profiler, which is an important feature; it's worth switching to LT when you need triggers.

So far it seems to me that in smaller applications, with fewer procedures selected for profiling, the most intensive technique (BT), despite its overhead, produces the most interesting results. The larger the application, and the larger the cross-section of its methods I want to test, the more useful the classic lower-overhead MT and LT profilers become.

Configuring your application to work with this or other profiler tools is pretty consistent; the same steps are necessary for this tool as for any other sampling profiler or runtime analyzer. Turn on TD32 debug symbols in Project Options (on the Linker tab in older versions, or under Debug Information in newer ones, according to the docs).

Run the tool from the Tools menu.  Note that it's a good idea on Delphi XE through XE6 to do a full rebuild before you click the tools menu item as Delphi doesn't rebuild the target for you on those versions.

You click one tool, and the first time you do, you will probably want to do a bit of configuration; each tool requires slightly different configuration. In my opinion it is NOT a good idea to profile ALL of any non-trivial application: first, because you're asking a lot of the NQS tool, and second, because even if the tool can successfully gather information on ten or twenty thousand methods, you probably can't do much with the results. I recommend doing a little searching and probing to find some routines that matter, and including those. The user interface is reminiscent of Outlook 2000 for most of the tools. In the case of the Block Timer and Method Timer, you use the Routines icon, which for the last few releases has included a nice Search feature, which I think I requested, and which I'm gratified to see in there. Because my app is all about the Ping, I'm looking for the Ping methods; I want to know what they're up to...





After searching, then selecting the routines, I right click and "Enable Tracking for Selected" methods. Then I click the green triangle "play" icon to make my application-under-test start execution.   In a small application you could perhaps select everything.  But as I have learned from much experimentation, it's really better to spend a bit of time searching for methods you suspect to be relevant and enable a dozen or two dozen of those. Then drill in, and enable further layers of the code, as necessary, to get a clear picture of your system behavior.

After my program has executed long enough to get a reasonable sample, in my case, just over 5 minutes, I shut it down, and then the timing analysis results are shown:


You can also see a bit of a trend of CPU usage by your program, in total, which can be really interesting, because you might want to know "what is the program doing during these bursts of CPU activity?".



A nice feature built in is that if you have configured your source search path in the NQS project options, you can just double click on a line of interest and see the code:


If the NQS tools don't show things in the font you wish, you can change the fonts they use; the fonts are individually selectable. I change ALL of them to Consolas, because it's the one true code-editor font. If you like the Raize font and have it around, you could pick that. Courier New is more to some other people's taste. If you happen to want Comic Sans, well, you're drunk; go home.



So now I want to jump from Tool to Insight. The reason tools like this are great is the moment when the insight clicks in your head. Today I just saw this line and realized: ResolveAddress is a function, and because parentheses are not mandatory in Pascal method invocation, the code here looks like a simple variable or property check, but it's actually a very expensive call. Do I really need to repeat the resolve on each ping, or could my tool just periodically check that DNS resolution is still working, cache the resolved value, and do multiple ICMP pings to the IP address? In my case, I think I'm wasting a lot of cycles, loading down my company or customer site's DNS service unnecessarily, and generating a bit of wasteful network traffic. In my next version, beyond making my tool say 10% more CPU-efficient and 10% more network-efficient, I might also make it a bit more configurable: say, let the user configure how often to check that DNS resolution for my important host is working.


I also think I should rewrite the code above so that it's clear it is not just a value check but an actual function invocation. I really think I need to rewrite a lot of the internals of TICMP.
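The readability difference is easy to show; the renamed function below is hypothetical, purely to illustrate the point:

```pascal
// Reads like a cheap boolean field or property check, but is actually
// a DNS round trip on every ping:
if ResolveAddress then
  SendPing;

// Explicit parentheses (and a name that admits what it does) make the
// cost visible at the call site:
if ResolveAddressViaDnsLookup() then
  SendPing;
```

When a call is expensive, the call site should look expensive.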

But what else could be wrong with my code besides being wasteful? How about memory leaks? So I am now going to switch to Code Watch. It took only a few minutes to try it out, and I found that although my background worker thread terminates, it is never freed: I have a memory leak. The tool finds the problem and reports the source line. It also found some API failures that I may or may not have been aware of, and a Win32 resource (a thread handle) that was leaked. This is awesome.



I'm going to wrap up now. I hope that all the above impresses you, because it sure impresses me.

Before I wrap up, I'll briefly compare this option to your only other real option for this kind of tool. SmartBear's AQTime suite can do many of the same things that Nexus Quality Suite can do, but Nexus Quality Suite can actually do lots of things that AQTime can't. AQTime is more expensive, at $599, with a very restrictive named-single-user license and a nasty, intrusive anti-piracy copy-protection system that I very much dislike, because it won't let me run with a single-user license inside a VM. The copy protection actually runs a background Windows service, which detects all kinds of things, including virtual machine use, and disallows program operation inside a VM. And the IDE integration of AQTime just crashed on me the last couple of times I used it. I reported these crashes, and over several releases they never got fixed. Sayonara, AQTime.

So what's the price for NQS? At the promotional sale price of $226 USD ($300 AUD), and with no intrusive copy protection that treats me as a thief, I have no problem recommending that every Delphi developer and Delphi-using company buy this suite. There are lots of tools, and they work really well. If I had to complain about something, it's that the documentation needs further work, but they are working on that. The product works, and when I find a problem or have a question, the technical support team is great. The price is going up soon, so I recommend grabbing this while it's on sale.

I am planning to write further review articles to cover this suite; in particular, I believe the automated GUI testing features in NQS deserve their own separate review. I also think there are many more profiling techniques that can tease out very complex runtime problems in your system, not JUST to gather the data that helps make your program faster or stop leaking memory, but also to understand complex behaviors by gathering runtime data that lets you see your program running.

In the past year, the amount of new stuff that has been delivered in NQS is truly astounding. 64-bit support is new. I think this whole extra set of profiling tools is new. I tested NQS on an extremely large application where I work; the product is over 5 million lines of Delphi code, including all the in-house and third-party component libraries, all the main forms and data modules, and other code. In an earlier version of the tool, I found a crash inside one of the NQS tools; I sent information to reproduce it to Nexus, and in the next release the problem was fixed. That's good customer service.


NQS is a tool that deserves a spot in your toolbelt too.

Full Disclosure: I received a complimentary review copy of this product, but the opinion above is 100% my own, and I don't write good reviews for every product I receive a license for; in fact, quite the opposite: if I see something I dislike or can't use, I'll say so. I'm a working coder, and I have no time for weak tools. I have recommended that my boss buy multiple copies of this tool suite at work, where I believe it would be extremely useful.





Thursday, July 7, 2016

How to Hire the Right People? I have NO IDEA!

I have seen a lot of articles on the interwebs from frustrated job-seekers who say over and over that hiring is broken.

Where I work, I am interviewing candidates who have recently graduated from university for a Junior Software Developer position with a focus on Web/JavaScript/HTML5. Consequently, I have been thinking a lot about how we in the software industry interview and hire people, because through interviewing I think I have moved past the need to haze candidates.

I was not subjected to hazing rituals when I was hired for my current gig. I did not write any written technical exam; the interview was verbal, though the company had an exam it would use when it felt there was some question about a candidate's abilities. I did bring in some code running on a laptop that did some interesting stuff, which was as close to "proving" I can code as I could think of. I think that, ideally, a personal project you have spent two or three weeks on should be enough to demonstrate that. But there have to be alternatives, and I will get into those below. If we're going to get rid of subjectivity, we need to replace it with something objective.

Hiring, like most management decisions, is in the end always going to be fairly subjective, and it's an area of subjective business decision-making that I think is very widely done poorly. I consider myself very poor at it, but I believe I'm getting better. I hope to improve by being both broader in my search for evidence, and more focused on objective, hard-to-fake data.

The short version of this blog post works out to this:

I am in favor of two to four hour take-home coding exercises, and I am against two-week trial projects.  


Peppering Candidates with Random Technical Questions Is Not Working

I agree with the critics of our modern whiteboard and non-whiteboard technical hazing rituals.  

By treating all candidates the same and asking the same barrage of questions, we hope to map a candidate's knowledge, and some will even claim that this approach is "rational" or "scientific" or "impartial". It's not, because people are not bots, and technology is not as complex as you think it is: it's far more complex than you think it is.

Here's the problem with technical knowledge: it's not linear but fractal in complexity, like the Koch curve; the closer you look, the more detail is generated, and there is actually no end to the complexity. If you don't know what I mean by that, watch this awesome talk by K Lars Lohn and then come back. If that talk doesn't give you a reason to go to technical conferences, I don't know what will convince you. There, now, I'm a thought leader.

Now back to interviews. If an interviewer is sufficiently intelligent, I think the interviewer should start by determining, from the resume and any phone screens, the areas where the candidate expresses some interest, experience, and ability, and then talk as openly, and with as much goodwill and personal charm, as possible. In recent weeks, I have watched people as their anxiety goes down, and I notice that what you can learn from someone who believes you are not a jerk is much greater than what you can learn from someone who has their fences up. This is a poker game where we lose if we keep our poker faces; this interview game is one where the best move is to fold and show your cards. This is what I'm looking for. I saw some of what I'm looking for in your resume. I see you mention here that you have tried Scrum and Kanban; what did you find worked and didn't work on your teams when you did those things? Let's talk about how teams work. Let's talk about how compilers compile, how the JVM runs your code, how a statically typed language helps teams ship. How a unit test can help you not break things, and is doubly important in a language like JavaScript, where there is no compiler and where, consequently, some useful forms of static analysis may be impossible. Let's talk about the recent trend towards languages which can be verified to be correct in some respect, like D or Rust. Let's talk about functional programming. Of the junior programmers I'm interviewing, very few have ever played with Rust or D, or F#, or Scala. Very few can tell me about interrupt handling inside the Linux kernel, or about safe concurrency models for web-scale transaction processing, or about the differences between two transaction settings in MS SQL.

So fine.  Let's find SOMETHING you love.  Animation? Awesome.  Games? Awesome.     Now we will dig into your own interests, and find out what you've done that we can see evidence of.

Don't I just sound so avant-garde? Trust me, I'm not. I'm probably going to ask Juniors and Intermediates whether a Stack is LIFO or FIFO. Then I ask them: when you walk into McDonald's and wait in line to order a Big Mac, is that line of customers a Stack or a Queue? This question might be a bit too easy in England, where a line-up is actually called a Queue, but in Canada I find that people who crammed the LIFO/FIFO part of it can't reason about it, and thus some conceptual wiring is missing in their heads, wiring that I can't quite account for. My mental picture of a Stack is something you might remember from restaurants, if you, like me, are of a certain age:


I ask about stacks and queues not because you need that knowledge every day working on my team, but because I have a distressing feeling that candidates can graduate simply by cramming and collaborating on coding projects, and manage to retain very little of the knowledge platform their degree could have given them. Which data structure would help me easily reverse the order of items in a list: a stack, or a queue? The important thing about my question isn't whether you could google it; it's how adept you are at thinking about systems built of large amounts of software and hardware.
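The answer, by the way, is a stack: a last-in-first-out structure reverses order for free. A quick sketch in Delphi:

```pascal
uses
  System.Generics.Collections;

var
  S: TStack<Integer>;
begin
  S := TStack<Integer>.Create;
  try
    // Push 1, 2, 3 in order...
    S.Push(1);
    S.Push(2);
    S.Push(3);
    // ...and they pop back out reversed: 3 2 1
    while S.Count > 0 do
      Write(S.Pop, ' ');
  finally
    S.Free;
  end;
end;
```

A candidate who can reason their way to that answer, rather than recite LIFO/FIFO, is the one whose conceptual wiring I'm probing for.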

I believe that a working model of a smaller domain contributes to, and correlates well with, the reasoning skill you possess in the larger domain. The human brain, confronted with systems composed of parts it does not understand, tends to ascribe to others the agency for fixing and changing those systems. When an engineer who knows how a system works understands the fundamentals, she will, I hope, be able to begin picking complex problems apart, a process I call bisecting, until she finds individual smaller problems that can be solved. It is these bisectors of complexity that I search for when I interview. I am looking for the developer who doesn't yet know how to do this, but who believes she can, and who will keep trying until she does. Possessed of reasoning skills and a strong set of engineering fundamentals, she is apt to succeed.

Even candidates who absorbed everything their school offered them will still need a lot of additional skills and need to learn a lot of tools.   But if you are not a learner, a sponge for knowledge in university, an organizer of systems and ideas, a bisector of problems, what rational evidence do I have that things will be different in your work life?  If you can't tell me how to troubleshoot your mom's internet connection, I'm not going to believe you can understand a Healthcare Information Systems environment.  

I recently interviewed a candidate with a Master's degree in Computer Engineering, who I hope was simply having trouble because English was a second language.   Several days after the interview, I am wondering if I simply made the candidate so anxious and flustered that I actually caused the interview's dismal result. Whether or not that happened in this case, it's critical that we interviewers turn our dreadful critical gaze upon ourselves, find the sub-par elements of our practices, and fix them.

A good interviewer needs to set candidates at ease.  When I see candidates smiling and laughing, and joking in an interview, I am happy.  I know that I'm talking with the real person, and that we can figure out what will and will not work with this candidate within this team.

I am not going to stop asking semi-random factual questions, but I am going to give candidates fair notice. I happen to like the little thing on Reddit where people ask you to "ELI5": explain it like I'm five.  When you know something cold, you can explain it to a five-year-old.   This is a new knowledge-sharing phenomenon that originates with millennials.  If you're 21 right now, I'm old enough to be your dad, and then some.  Unlike some people, I think the world is going to be fine when the millennials take over and we're all retired.   I'm cool.

So why do I ask what DNS and DHCP are, when you could google that, and when those seem more like questions for an IT/Network-admin than for a Developer role? The argument that you can google what you don't know falls down at the point where you don't google because you're facing unknown unknowns.   Design decision mistakes are a common after-effect of unknown unknowns.  I make design decision mistakes all the time. We all do.  We do not understand the domain in which we are engineering well enough, and we do not even know what it is that we do not know. This is the unknown unknown I speak of.  I am looking for engineers who are wary, meta-cognitive, who build themselves and others up.  So let's get to my hire/no-hire criteria, and see if you agree or disagree with them.

Cardinal "Hire" Qualities (with profuse thanks to Joel Spolsky)

I want to hire someone who is SMART and CURIOUS, who GUARDS the team that GETS THINGS DONE, and WHO IS NOT A JERK.  I have grouped and expanded things in a way that makes sense to me but I freely admit that I stole almost all of this from Joel Spolsky. Thanks, man.

SMART + CURIOUS:  I am looking for evidence that you are a passionate, intelligent geek who likes to write code.  You have a deep and abiding interest in some (but usually not all) areas of computers, software development, and technology.  If I ask you how a CPU's level one and level two cache works, and you don't know that, that's OK, as long as you can answer the question "tell me about something that you built recently on your own time that you didn't have to build", or "tell me about some language or operating system or tool that you're experimenting with".

GUARDS + GETS THINGS DONE:   You're not just a member of a team that shipped, but a member of teams that would not have shipped without you.   Your team didn't know about version control? You taught them.  Your team didn't know about continuous integration? You added it to their practices. Your team didn't understand the zen of decoupling or the zen of test? You taught it. You modeled the practices that made your team get stuff done.  When you saw things that were bullshit, that would sap the motivation of the team to GET THINGS DONE, you faced the boss and spoke up. You, my friend, are the guardian of the customer's happiness, the guardian of the product's marketplace success, and the keeper of the flame.  Sometimes being that guardian means NOT GETTING (the wrong) THINGS DONE, especially if it means doing them "wrong" just so they can be done "fast".  Long-term trends that slip under the radar and are under-valued in agile/scrum teams are things you like to bring up at retrospectives.

NOT A JERK:  You defuse tense situations. You don't add gasoline to open flame.  You call people out privately, and you praise people publicly.  You absorb blame. You deflect praise.   You admit when you have failed to do any of the above, and resolve to do better when you don't live up to your own internal high moral standards.   You believe you can be a great engineer while valuing people who have different communication styles, cultures, and languages; you think a team's differences can become sources of strength, and when difficulty and division are spreading, you find ways to unify the team and give it a focus: a technical engineering focus, with a strong shared ethical principle.  You are a curator of good company culture.

But let's be honest about the above. The above is the person I'm trying very hard to be.  I'm trying to hire people who are trying to do some of the things I try to do. 

My questions for you guys:

  • How do you find out real stuff about candidates when you are conducting an interview?
  • What do you want to know when you hire or when you are seeking a job?  
    • As a candidate, do you ask who you would report to?    What do you hope to learn?
    • How do you feel about the number of people in the room? Do you think it's a better sign when you are interviewed by one person, or do you think it's better when you're interviewed by three or four people?
    • Are there any "shibboleth" questions you have as a candidate?  What do you want to find out with them?  Even if you don't want to state your question directly, what are you trying to figure out?  I don't have a specific question, but if I see signs of aggression, arrogance, or naked exercise of rank or privilege, I quietly note it to myself, and decline further interactions with a company.  One thing you certainly can't fix in a company is the culture of its leaders.
  • When you are being interviewed, how should people approach you to find out the most accurate picture of your strengths and weaknesses?

I'd like to open the floor to a discussion now, let's keep it civil. Thanks.









Wednesday, June 1, 2016

Survey Results for the First Annual Delphi Code Monkey Survey



There were 373 respondents but the statistics shown here only reflect a portion of that, because SurveyMonkey wants $25 US from me to give me fully detailed results. Given that the final numbers are unlikely to be much different from these, I'm going to leave these as they are.  30% of respondents left the "other tools" question blank, which is interesting.














Some interesting responses from the "other" category:

* Some people were in professional categories other than the ones described, such as "retired".

* Other commonly stated worries included financial/pay level concerns, and concerns about whether they can afford to keep up with buying new versions of Delphi, or if they can compete with cheaper contractors, perhaps offshore.


I was pleased to see that although the experience meter tips towards "oldtimers", there are some inexperienced and only slightly experienced readers.  If anyone wants to suggest beginner topics they would like to see me cover, please fire them my way in the comment box.

The next time I run a survey it won't be on survey monkey, it will be my own homebrew PHP script survey.   Thanks everyone!


Saturday, May 21, 2016

Delphi Programmer Thinks about the Go Programming Language and Mandatory Source Code Organization

If you follow one of the usual tutorials for Go programming, it will start by dumping on you a load of things you have to do.  This is perhaps something that you, as a long-time Delphi geek, have become inured to in your own environment.  Let us imagine a developer who has been given access to a source code repository or server, perhaps a big Subversion server, and has no familiarity with the Delphi codebase at CompanyX, where CompanyX is basically every Delphi-using company ever.  Let's make a quick list of the first tasks our developer would face:

  •  Setting up a working copy of source code so it builds, and so the forms open up without errors due to missing components.

  • Associating package set X required to build version Y of product Z.

  • Setting up library paths that might be completely undocumented.

  • Individual things done at CompanyX, like mapping a fake drive letter with SUBST, or setting up environment variable COMPANYX to point at some shared folder location.

  • At some companies they will just look at you blankly if you ask "can you build this entire product and every part of it from the command line on every developer's PC?"  Other companies have exactly ONE computer (MysticalUnicorn.yourcompany.com) on which this feat is frequently possible.    Still others (the sane ones) have made the process so unspectacular and merely reliable that they think the ones who gave you the blank look just haven't realized how insane they are yet.
  • At some companies it might be considered acceptable if the build scripts and projects and sources ASSUME you will always check your code out to C:\COMPANYX. When you want to have a second branch, you simply clone and copy a tiny little 120-gigabyte VM and fire that up.

Has any of that ever seemed insane to you? It does to me.   And so when I look at new languages one of the things that I look for is if the problems above have been thought about and resolved in that language and its related tools, including its build system, if it has one, and its module system.

Go has been known to me for some years as a famously opinionated language, characterized by the removal of features that its designers felt were problematic in C++:

  • There are no exceptions in Go, only error returns and panics.

  • There are no generics in Go.

  • There is no classic object-oriented programming with inheritance; there is only composition, and there are only interfaces.  There are no base classes (because there is no inheritance).

  • The module structure is pretty much mandatory.  Here's me starting a brand-new Go project from a command line; what is happening should be pretty clear to most geeks:

~/work> export GOPATH=~/work
~/work> export PATH=$PATH:$GOPATH/bin
~/work> mkdir -p $GOPATH/src/github.com/wpostma/hello
~/work> cd $GOPATH/src/github.com/wpostma/hello
~/work/src/github.com/wpostma/hello> vi hello.go 



package main

import "fmt"

func main() {
        fmt.Printf("Hello, world.\n")
}

~/work/src/github.com/wpostma/hello> go install github.com/wpostma/hello
~/work/src/github.com/wpostma/hello> hello
Hello, world.
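Before we get to the module layout, the first and third bullets above (error returns instead of exceptions, and composition plus interfaces instead of inheritance) can be sketched in a few lines of Go.  The type and function names here are my own, purely for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// Go has no exceptions: a function returns an error value which
// the caller must check explicitly.
func divide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

// An interface is satisfied implicitly by any type that has the
// right methods; there is no "implements" declaration.
type Describer interface {
	Describe() string
}

type Engine struct{ HP int }

func (e Engine) Describe() string { return fmt.Sprintf("%d HP engine", e.HP) }

// Code reuse comes from composition (embedding), not base classes:
// Car embeds Engine and so gains Describe() without subclassing.
type Car struct {
	Engine
}

func main() {
	if _, err := divide(1, 0); err != nil {
		fmt.Println("error:", err) // error: division by zero
	}
	var d Describer = Car{Engine{HP: 150}}
	fmt.Println(d.Describe()) // 150 HP engine
}
```

Note that Car never declares that it implements Describer; it satisfies the interface simply by having the embedded Engine's method, which is the whole of Go's answer to inheritance.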
 
What is the thinking process that goes into designing the module system, with the following structure:

~/work
├── bin
│   └── hello            (binary produced by "go install")
└── src
    └── github.com
        └── wpostma
            └── hello
                └── hello.go

I think the above has the benefit of being about as nice a structure as I could imagine.  The folder names above tell me even where on Github I might find this project.   Source code is now globally unique and mapped by these conventions so that I know where to find any public code I want.  If I want to use gitlab.com or bitbucket, or if it was stored on a private server inside my company named gitlab.mycompany.com, I would move my code into different folders to make that choice clear.  For a language which is intended to be used in large systems, it's an appropriate design choice.  Let's contrast this with Perl or Python where the intended use starts with one to ten line scripts that are basically used for any kind of little task you can imagine, and where this kind of ceremony would be stupid.

I have worked in enough large Delphi codebases to know that where any and every form of organization is accepted, the result inevitably becomes horribly complicated.

Let's briefly discuss the forms of folder/directory organization one might try in Delphi:

* Ball Of Mud: Everything for one application is in one directory, or some small number of directories.  It is extremely common that a single directory contains 90% of the files that are not third-party components, and that directories are only used to hold files that weren't written in-house.  No sensible use of directories is made; the attitude is that the source should all be in one directory, even with 10K files in it.  Usually ball-of-mud folder structure goes nicely with ball-of-mud source code organization inside your code files. That form with 5K lines of untestable business logic mashed into it? That "controller" object that is more of a "god" object that directly and tightly references everything else via USES statements? Ball of mud.

* MVC or MVVM: Views in their own folder. Models in their own folder. Controllers in their own folder.   Additional folders for shared areas, and per application area.  I've heard that this is possible, but I've never seen a Delphi codebase coded according to MVC.  Ideally, if you're going to do this, you also don't have your Views reference your controllers or even have access to the source folder where the controllers live.  Your models ideally don't reference anything, they're pure model classes and don't take dependencies on anything.

* Aspirational:    This is the most common condition of Delphi codebases.  There is some desire to be organized and some effort has been made, but it is fighting an uphill battle that may be unwinnable, because the barn door was already opened, and that cow of accidental complexity is already out and munching happily in your oat field.    You have a desired modular approach and it's expressed in your code, but every unit USES 100 other units, and your dependency graph looks like something a cat coughed up.

So given that I have seen large systems get like the above, I have a lot of sympathy for languages like C#, where at least you can get your IDE and tools to complain when you break the rules, and even more sympathy for Java, where namespaces are required on classes, and where the classes must live in directories which are named and hierarchically ordered.   In Go we have named modules, and the modules contain functions and can define interfaces; they're not really Classes like in Java, but the idea of order and organization has been preserved as important.  In Delphi we have the ability to use unit prefixes, which are weaker than true Namespaces but still potentially useful, yet most code I have seen does not attempt to adopt them.  It seems to me that having a codebase that uses unit prefixes, and that has source organized into matching folders, is a worthy future goal for a Clean Delphi Codebase, but existing legacy codebases are all we have, and so getting there is not something I'm going to hold my breath over.   One has to have practical, achievable plans, and not tilt at windmills.

My first reaction to Go's requirement to use a fixed structure was predictably the same reaction I had to the horrors of the Forced Organization that Java imposed on me when I first tried it in 1996.  Now it's 20 years later, and I think we can say that Java's designers were right.   Java has proven to be especially useful in constructing extremely large systems.

The Go package dependency fetching system (go get X) works precisely because of this forced organization, and it's all very well thought out.  There's NO reason that a clever Delphi programmer couldn't learn the lessons of how GOPATH, go get, and go install work, and use them to fashion a guaranteed well-organized, maintainable, and clean Delphi codebase, incrementally, by a phased approach.

You don't gain much if you close the barn door after the cow's got out, and you can't stop everything and rewrite, but if you can build some tools to help you tame accidental complexity gradually, you can restore order, over time, while you work on a system.

What goals might you start with?  I'm not going to tell you.    All I'm going to do is say that if your brain lives in a box that has a begin at the beginning, and an end at the end, and you can't read and think outside that box, you're sadly limiting yourself as a developer.  Becoming a better developer (in any language) requires what the old-timers even older than me called a "Systems Approach": a view of what you build, and of your project and its goals, that is larger than your daily work, longer in scope than whatever form of agile "sprints" you're doing, and which has a sustainable, high-quality engineering methodology behind it.

You can't build that kind of mentality in at a language level (in Go or Java or Pascal), but I think it does help to have the bar set progressively higher as you can, so that once code becomes cleaner and more maintainable, there is at least the potential to detect when someone has made things worse.

Thus far we have seen many programmers throw up their hands at the 5 million line big-ball-of-mud projects and consider rewriting it from the beginning.  My feeling is that the bad patterns in your brain are still there, and if you rewrite it all in the same language or a different one, you're going to make all those mistakes again, and some new ones, unless you start learning some ways to approach system design that promotes clean decoupled programming.   Studying and research phases are required. Do not race to reimplement anything, either in Pascal or any other language.  Spend time and sharpen your sword. And remember Don Quixote.















Saturday, May 14, 2016

Completely Anonymous Delphi Code Monkey Reader Survey 2016

(update: Poll closed! Link removed)

I have put a completely optional, completely anonymous survey on survey monkey. The questions are completely general and will help me to get an idea of the readers who visit and thus, what you might enjoy hearing about, and may of course be fun for you to see the answers to as well.

I will share all results here on the blog after the survey closes, at the end of this month.
Here is a preview of all six questions on the survey. Please do not enter any email addresses or personal information into the comment boxes where "other" answer boxes exist, so I don't have to spend time deleting it all before sharing the results here on the blog. The poll is completely optional, the categories/answers are fairly general and vague, and everybody will see the total answers.  If any answers to questions 5 and 6 are repeated frequently, I'll publish those as well; otherwise, the published results will only specify the number of people who answered "other" to 5 and 6.  I really don't know what all the possible "#1 worries" might be, and I have a feeling that you all might enjoy seeing these answers. Please be as general and brief as possible; you only get 40 characters.