
Interview Jeopardy

I’ll take obscure references for 500, Alex.

I’m of the age where I remember when an interview was actually that: what you had done, how you solved the problems, and why you chose that solution over others.

Today, interviews have turned into the equivalent of game shows, with the interviewer attempting to pick unusual or even obscure questions to ask the candidate.

For example, I was interviewing for an iOS position and was asked about a particular class (it was designed to allow you to change the look of objects globally). I confessed I had never heard of or used it, to which the interviewer said, “Yeah, I hadn’t heard of it until last year.”

So what exactly did he learn? Nothing. My non-usage of a class that even he hadn’t heard of merely pointed out that I had no need to make gross changes to the look-and-feel of iOS. Yes, we call that maintaining the User Experience, something that is very important.

The problem with iOS and OS X is that they have a lot of classes. Many of those classes have a lot of methods. There are many you will use a lot. There are some you may never use. This doesn’t mean that you don’t have the skills; it means you might be one of those rare people who actually reads and uses the documentation!

Interviewed By Those Ignorant on the Subject

In one interview I was asked to walk through the steps of creating a driver for Windows CE. I explained it in detail, accidentally omitting the step of allocating the buffer for the driver. Sadly, the person interviewing me only knew Windows and jumped on this with an “Aha!” and asked, “What would happen if you didn’t allocate the buffer?”

“Nothing,” I said, “you would get a black screen.”

“No,” he said obviously full of himself, “it would blue-screen.”

The problem is he was wrong. CE doesn’t always blue-screen. More often it black-screens (i.e. nothing) and you end up playing 20 questions trying to figure out why. The real problem is that the interviewer had no familiarity with Windows CE (a pretty spiffy RTOS that they slowed to a crawl by slapping much of the junk of Windows on top of it) and didn’t know that it wasn’t Windows.

No surprise, I didn’t get the job. Sadly, the person who was supposed to interview me was out that day. Also sadly, this company boasted that they hired “the best of the best,” which apparently meant they didn’t hire people who knew more than they did.

Don’t get me wrong, hiring is very difficult. However, if someone has a degree and a verifiable employment history, then what is important, to me, is whether they can get along with the group.

I have had to take everything from timed internet tests where one had to solve puzzles (I loathe puzzles – software design is not puzzle solving) to answering obscure questions.

Rather than try to determine whether I know something sans documentation (why memorize it if I can pull it up on the computer? Seems kind of dumb to me), ask me how I solved problems over my decades of working in C++. That seems far more realistic than silly questions. I not only have a degree, I have more decades of doing this than you have. Insulting me is not making you look good.

And that is what this really boils down to – it insults the people they are interviewing. Whether they believe it or not, what it says is, “I do not trust what you put down on your resume, so prove to me, in 60 seconds, that you are as good as you say you are.”

So I asked the person who had asked the first question, “Tell me, what are you going to learn from this answer?” He was taken aback. I was just supposed to do as he asked, not point out that there was no methodology here, merely random questions people made up in order to somehow judge the candidate.

This is, of course, where asking these random questions fails. The interviewers were asked to come up with something to quiz me with, with no guidelines as to what the answers would mean. A candidate could have fifty patents and twenty books on C++ and fail all the questions. Does he get a pass because people know his name, compared to the one who gets only 75% of them correct?

Failure Is An Option

Of the many interviews I have had, only a few were truly interviews. The others were Interview Jeopardy or, in some cases, Interview Trivial Pursuit. They weren’t fun, and all they told me was that the person conducting them was less qualified to run an interview than Alex Trebek.

Apparently not only is failure an option, these companies are willing to let valuable employees slip through their hands.

The question people should be asking, which they do not, is “Can this person learn our way of doing things?” Even more important, “Can this person continue to learn?” Knowledge is not a one-way street, and just because you have patterns that you have relied upon for five years doesn’t mean there aren’t other, perhaps more important, ones that you ignored because the “Gang of Four” didn’t write them down.

This assumption of ignorance is currently killing our market. Everyone knows this. I have talked to recruiter after recruiter who has said it is embarrassing, has put off potential candidates and insulted others.

There is no shortcut in the interview process. And yet people keep trying.

When I am asked by friends who are looking, I do tell them about Interview Jeopardy and which companies I feel are the worst about it. Interview Jeopardy does a lot to harm the company brand and, worse, the company doesn’t seem to be aware of this. Why would I want to work for a company that is that self-unaware?

If the point of my interview is not about my skill set, then what is the point? (See? I phrased it as a question!)

Stupid Hash Functions

I’m getting very tired of reading about people implementing Hashable using the following (this is in Swift):

var hashValue : Int {
    get {
        return x ^ y   // x and y are Int
    }
}

Okay, first, let’s examine why this is wrong. The point of a hash function is to “create random data from the nonrandom data” (Knuth, Sorting and Searching), and since x ^ y is equal to y ^ x, it can hardly be considered a method of creating random data. Let us use some real-world numbers (represented as hex):

  1. 0x2A ^ 0x40 = 0x6A
  2. 0x40 ^ 0x2A = 0x6A
  3. 0x00 ^ 0x6A = 0x6A
  4. 0x6A ^ 0x00 = 0x6A

A decent hash function? Hardly.

“Oh, but that is what Equatable is for,” people have murmured. No, it’s because you slept through the class about hashing. Hashing isn’t fun, and yes, it is hard to create a good hash function, but you didn’t even bother trying.

I don’t like to write hash functions either, but I have at least a basic fundamental understanding of the problems inherent in hash functions. It didn’t take me but a few seconds to come up with four test cases that produced exactly the same hash. Worse, x and y can be swapped and result in the same hash. Position should influence the result of a hash function.

But let us extend the problem to a buffer of three items, x, y, z: if you exclusive-or’ed them, you would get the same result had you done it in the order z, y, x (go ahead, try it). The problem is that XOR is a commutative operation, exactly like addition. In fact, in hardware, XOR used to be called “half-adding” because the carry never influenced the next position.

  1. 0x2B ^ 0x5A ^ 0x11 = 0x60
  2. 0x2B ^ 0x11 ^ 0x5A = 0x60
  3. 0x11 ^ 0x5A ^ 0x2B = 0x60
  4. 0x11 ^ 0x2B ^ 0x5A = 0x60
  5. 0x5A ^ 0x11 ^ 0x2B = 0x60
  6. 0x5A ^ 0x2B ^ 0x11 = 0x60

As you can see, the same three bytes in any order produce exactly the same value. Exclusive-or is obviously a poor way to create a hash value.
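To make the consequence concrete, here is a minimal sketch (the BadPoint type and the sample values are mine, purely for illustration) using the same hashValue style as the snippet above:

struct BadPoint: Hashable {
    let x: Int
    let y: Int

    // The XOR hash under discussion: commutative, so swapping x and y gives the same value.
    var hashValue: Int {
        return x ^ y
    }
}

// Hashable requires Equatable, so == is needed as well.
func ==(lhs: BadPoint, rhs: BadPoint) -> Bool {
    return lhs.x == rhs.x && lhs.y == rhs.y
}

let a = BadPoint(x: 0x2A, y: 0x40)
let b = BadPoint(x: 0x40, y: 0x2A)
print(a.hashValue == b.hashValue)   // true – two different points, one hash value

Put a pile of these into a Set or Dictionary and every swapped pair lands in the same bucket, which is exactly the clustering a hash function is supposed to avoid.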

So this time, instead of pulling something out of our nether regions, let’s try something that makes a little more sense:

var hashValue : Int {
    get {
        return (x * 0x21) ^ y   // x and y are Int
    }
}

This is better. Why? Because we have now made sure that the values are based on their position. This becomes really important when you have a large buffer that needs to be hashed. So, using our previous values we have:

  1. (0x2A * 0x21) ^ 0x40 = 0x52A
  2. (0x40 * 0x21) ^ 0x2A = 0x86A
  3. (0x00 * 0x21) ^ 0x6A = 0x06A
  4. (0x6A * 0x21) ^ 0x00 = 0xDAA

This is a lot closer to creating random data from nonrandom data. Possibly the only minor irritant is the third case, where an x of zero results in the y value being the hash. We can correct this by using a simplified version of Daniel J. Bernstein’s hashing method:

var hashValue : Int {
    get {
        var hash = 5381        
        hash = (hash * 33) + x
        hash = (hash * 33) + y
        return hash
    }
}

Again, we have assured that x and y are positional and that a zero value now affects the resultant hash.
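Putting the pieces together, a conforming type might look something like this sketch (the Point name is mine; I also use Swift’s overflow operators &* and &+, since repeated multiplies on a plain Int will eventually trap):

struct Point: Hashable {
    let x: Int
    let y: Int

    // djb2-style mix: each field is folded in sequentially, so position matters
    // and a zero field still changes the result.
    var hashValue: Int {
        var hash = 5381
        hash = (hash &* 33) &+ x   // &* and &+ wrap on overflow instead of trapping
        hash = (hash &* 33) &+ y
        return hash
    }
}

func ==(lhs: Point, rhs: Point) -> Bool {
    return lhs.x == rhs.x && lhs.y == rhs.y
}

let p = Point(x: 0x2A, y: 0x40)
let q = Point(x: 0x40, y: 0x2A)
print(p.hashValue == q.hashValue)   // false – swapping the fields changes the hash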

I hope I have shown why hashing is not a simple science and why just exclusive-or’ing will result in a horrible hash.

Why I promised, NO REALLY, never to write a GUI API and why I’m breaking that promise

I have a love/hate relationship with Linux. I love many aspects of it. In the same way, I hate many aspects of it.

Then came along Swift.

Swift is a gorgeous language. I think I could spend 20 years learning it and still find surprises. It is like the tiny Christmas present that keeps giving and giving and giving.

The perfect lover.

Okay, maybe not that far, but if it was human it would be the perfect lover.

Many years ago I wrote a GUI API, back when there were no GUI APIs. Now, of course, you can’t toss a rock without hitting one (sorry!). But here is the thing – they are all going to do it wrong. Not mildly wrong, but wrong wrong.

So, I’ve decided, along with a friend, that if there is going to be so much wrongness, we might as well toss in a little write… er, I mean, right.

We are building, “from the ground up”, an API for Linux, which will also run on Mac OS and Windows (just so we can yell “trifecta!”… you don’t get to yell that often, and it is a fun word to say… come on, say it. SAY IT!)

We’re calling it Mobu which is the name of a dead, extinct bird. Instead of NSclass or UIclass it will be MMclass. Unless we decide on something else. I suggested emoticons, but my partner said that :)View would just annoy people.

I think it’s cute.

UPDATE: Partner did research and he says, “NO! Not Mobu. It will be Iken.”

Okay, then.

Swift on Linux

I was one of many who downloaded and ran Swift on Linux.

It’s spiffy.

But…. (you knew that was coming)

I like Swift. I’ve written what I call a “stupid parser” in Swift that performs quite admirably. But there isn’t an editor for Linux that has all the nice little bits that exist in Xcode.

That’s problem 1.

Then there is the fact that all you basically have is the language running in a command shell.

That’s problem 2.

I have no doubt that there are people working tirelessly (or tiredly) to get a GUI up and running that is Swift compatible. Unfortunately, those people were not Apple.

What Apple Did Correctly

If you have ever dealt with the GNUstep project (and I have, on many occasions) you will discover that the applications it generates look like directories. That’s because they try to emulate what was done on NeXTSTEP – except it doesn’t quite work. Applications look like directories. Ick. The only way to make them look like applications is to run something that is (from a Linux user’s view) nonstandard.

More ick.

Instead, Apple makes applications look like applications. Well, for the moment, they look like applications you run from the command line. That will change.

What to Expect in 2016

By the end of the year I expect there will be projects that allow you to build GUI programs. That will be spiffy.

There will be editors, probably even a playground.

What to Expect RIGHT NOW

You can build programs in Linux for x86 that run in the console. Yes, I ran a “Hello World”. Woo hoo.

And, of course, they did delegates correctly.

Microsoft has already said that they plan on supporting Swift in Windows (hopefully it won’t be a mangled version).

Swift is a nice, clean language. If you are used to C# or Objective C, then I would seriously suggest you look into it. I can easily see it surpassing both of them.

This could be the one language that rules them all.

 

Apple’s new language (oh, update, it’s new again), Swift

Apple has a new language called Swift.

Oh, it’s been updated, it’s new again and now all my old files don’t compile. (sigh)

One thing I really, really like about Swift is that it fixes a lot of problems that were inherent in Objective C. Don’t get me wrong, I’m a bare-metal kind of person. I really do not like interpreted languages. I can give you many reasons (speed, speed, speed), but primarily it would be speed.

Swift is the first language where I have not worried about speed. It feels like bare metal (even though I know it isn’t – a parser that I rewrote in Swift ran HORRIBLY slowly until I reworked it and made it more Apple-y).

It is a language in progress. Yes, that progress can be annoying. One day your code works, then an update arrives and now it is broken. Googling can get you answers that are completely wrong. Apple’s documentation at times is a wee bit too terse. No, what is the word that is more terse than terse? Oh, right, “missing”.

But it’s not a bad language.

So why are people actively hating on it? I suspect they have very little knowledge of other languages. For example, PERL (or PERIL) is, to me, a write-only language. I was once required to modify a small section. It took me ages (probably a week). But even though I KNEW what it did, it wasn’t obvious – not because I used the secret handshake (there are at least five in PERL), but because the language is just so obtuse. This is why you see so few articles about bad PERL scripts – how would you know?

(Yes, some of you are good PERL writers, please do not inundate me with “PERL is great because…” – yes, I know the reasons, I just don’t agree)

Then we have the one language that I NEVER had a good understanding of: SNOBOL. SNOBOL is an incredibly powerful language. People have written compilers in several pages of code (I think I saw a C compiler written in five pages – sans code generator). However, it is so dense and hard to understand that it makes PERL look positively chatty.

Here is an introduction: http://drofmij.awardspace.com/snobol/

It starts off simple enough and looks procedural. Until you get to patterns. You have two parts that kinda-sorta work together. Here is a page laughingly titled “A Quick Look at SNOBOL”:

http://langexplr.blogspot.com/2007/12/quick-look-at-snobol.html

Okay, Hello World looks easy… what are you… holy guacamole what is THAT?!?! That would be the part where people who dream in regular expressions live.

We could delve into other languages like Lisp – a language designed for those who do not believe we have enough parentheses in the world and who love counting them.

http://www.cs.sfu.ca/CourseCentral/310/pwfong/Lisp/1/tutorial1.html

It’s not that I dislike these languages (I actually like Lisp); it’s that when deciding what is a good or a bad language, many of these people compare Swift only to C, C++ or C#. They think that “modern” (who came up with that moniker? PR? Shoot them) languages are good languages.

What I consider a good language is one that allows me to get something done and allows me to go back three months later and understand what the hell I did (I may operate in C, C++, C#, BASH, Javascript or others depending on the situation). Something that is powerful but initially opaque is not a good language. Something that is terse to the point that I have to look things up (like BASH) is not a good language.

Swift has the possibility of being a very good language. It certainly understands the concept of delegates better than C# does (if you think C# “delegates” are delegates, you might want to read my previous post).

Since it appears that Apple is moving all of its development to Swift, I have also moved my development to Swift.

Now if they will only open-source it…  =^.^=

I Hate Patch Files

I do a lot of coding in everything BUT the latest and trendiest languages for my job. Typically it is C or C++ for either small (as in 16K of RAM) microprocessors or, if I’m lucky, Linux.

Today I want to talk a little bit about patch files. You either love patch files or you hate them. Frankly, I think a lot of the people who profess to love them actually hate them, but that is their issue, not mine.

If you have never had to deal with a patch file, the concept is really simple. The program, patch, looks for lines in the source code that match the context given in the patch file, then adds or removes lines between those matching lines. So if the lines were “abcdefg” and the patch file said “abCDEfg” (where CDE were the lines to be deleted), the result would be “abfg”.
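For anyone who has never actually opened one, here is a made-up example of what a unified-diff hunk looks like (the file and function names are invented): the unprefixed context lines are what patch hunts for in your copy of the file, and the lines marked - and + are what it removes and inserts between them.

--- a/widget.c
+++ b/widget.c
@@ -10,8 +10,6 @@
 int init_widget(void)
 {
     int rc;
-    rc = legacy_setup();
-    rc = extra_setup();
     rc = setup();
     return rc;
 }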

Conceptually simple, elegant and, for the most part, foolproof.

Except for one little tiny thing. It is based on the assumption that code rarely ever changes. This is, of course, a bad assumption. Most software products have a 5-10 year lifespan (except for Windows – a whole different topic). In other words, if it is code you didn’t write and don’t have control of, there is NO guarantee that it won’t substantially change tomorrow. Or next week. Or the day before you have to make that “minor tweak that shouldn’t take but a day”.

Which is why I hate patch files.

But I need the modifications, which is why I use patch files.

This is a case of there is no “right way” or “elegant way” without reinventing the wheel and maintaining it yourself.

So what is the solution? Beyond forking the project and maintaining it yourself there are only a few options:

  1. Document the order the patches need to be applied
  2. Document what each patch does (do not rely on code as documentation – it isn’t and it doesn’t)
  3. Explain how you derived the patch file. That may seem obvious now, but in three years you are going to go “wow… I must have been seriously brilliant back then”
  4. DO NOT ASSUME other people’s code will remain constant. If third party code is mission critical, keep a copy of the old code as a back up. Older code is better than no code or code that is wrongly patched.
  5. Never, ever, assume that you can just “patch and go” and slap that on a timeline projection. If the third party code has changed significantly (it will), then you will be doing yourself a disservice.

Patches are a good way to maintain tweaks to software, but they are fraught with assumptions. The best method of assuring that you don’t end up in a “redevelopment cycle” is to maintain clear and concise documentation as to what each patch does and why it does so.

You Don’t Understand Delegation

Okay, for reasons best understood by myself, I attempted to create delegates under C++ (no templates because, frankly, they just make an ugly language look uglier). I managed to hack something together that worked, but had grave concerns that it was incorrect.

I was right.

In his article The Gang Of Four Is Wrong And You Don’t Understand Delegation, Jim Gay goes through what we THINK delegation is and what delegation REALLY is.

And he is right.

I spend a lot of time in Objective C. I love Objective C. It makes OOP fun. In Objective C you can do true delegation. It isn’t a hack, it is part of the language.

C++ is a lot like Darth Vader. It is the dark side. You don’t program in C++; C++ programs you (as they say in Russia). C++ is the language of “no”. NO, you can’t have delegates. NO, you can’t do that (because I arbitrarily said so). The list of no’s goes on and on. Why have people flocked to C#? It is less “no”-ish.

Other than that, it’s not bad. Perhaps one day I’ll write a book titled “C++ – The Good Parts” on how to write readable, usable and, most important, fun C++.

Rather than have me summarize a summary of a long article, read Gay’s article. It won’t take long. You will learn something important.
