What if we just reused code?

The software world is full of ways to avoid duplication of effort. We automate many tasks for our users, for each other, or for ourselves. Among ourselves, one solution we focus heavily upon is code reuse.

We have invented many ways to reuse code, but in all but the simplest cases we do so by bundling it up. We tend to put it into a library, or occasionally into a programming language. This solution falls short in particular for cross-cutting concerns, and to solve those we often resort to frameworks. But those bring their own set of limitations, some apparent at the beginning of a project, while others only surface in long-lived projects.

In recent years, other attempts at solving this have arisen, such as substituting libraries and boilerplate for frameworks, and then creating tools to generate the boilerplate. These generators can be great to get you into the current recommended state for your chosen libraries. They are especially helpful for starting new projects, but once started they are rarely helpful again. Use them once, and then you’re on your own.

From Onboarding to Production

Libraries have their own set of problems. To gain users, support, and contributors, you have to be welcoming. And to be welcoming, the library is left in a permissive, sometimes chatty state. So that you feel like you are doing well. So that you don’t get stuck and give up.

The problem is that there is barely a library around (including security libraries) that doesn’t take a ton of work to prepare it for production environments. And for some strange reason, it is often the case that third parties – rather than the project maintainers – are the experts on how to accomplish this. And often enough, one third party is not enough. You have to combine advice from several third parties as well as the maintainers to achieve your goal.

And just when you think you’re done, here come the operations folks. They inform you that you’ve missed a whole bunch of other things and please go read up about CORS headers, correlation IDs, log aggregation, graphite, and… hey! Why aren’t you writing this down?

And every single team all over the world that is using the same set of libraries has to do the exact same thing.

Once More, with Feeling!

Not long after you get all of this working, you’re expected to do it all again, because monoliths are passé. So now you have to go figure out if you wrote all the steps down properly, or if you can just cut and paste code from the old project, and after a couple rounds of this they start to expect you to get it right on the first try. No pressure.

A Way Out?

Something that’s changed recently is distributed version control. Linux developers share their entire source code history across multiple locations, and multiple teams. Large companies use monorepos to share source between divisions or the entire enterprise.

Forking of repositories is another tool, commonly used in open source. On GitHub there are two reasons to fork a project. The most common is in order to have write access to the code so you can make changes and share them back with the project. Less common but almost as important is forking a project to _stop_ sharing code. Through abandonment or a difference of opinion, you want to take the code in some other direction.

But what’s to stop you from taking a single project in several directions at once? Not that much. These days, it’s not uncommon for an organization to have a number of base Docker Images that have just the right mix of tools and versions to play nice in the data center. You’ll have a conservative image that has things almost everybody needs, a few others with things many people need, and then yours on top of it, with the stuff your team needs right this second (like the current version of your applications).

I know you can do a similar thing with git, at least on a small scale. I am sure, in fact, that many companies are doing this internally, just like people were refactoring before someone gave it a name, and another guy wrote a book about it. The question I’d like answered is: how far can you take this? Which is why I’ve started working (again) on Hello, Enterprise World! over on GitHub.


A special thanks goes out to Kenny Bastani, who said of himself in a presentation:

“I make highly scalable Hello World Apps for a Living”

It was at that moment that I could finally express the problem I was trying to solve.


Posted in Tools | Comments Off on What if we just reused code?

Sympathy for the Reader

There’s an old aphorism about writing good code:

Code as if the next guy to maintain your code is a homicidal maniac who knows where you live.

Very humorous, worth a good chuckle and maybe a knowing nod.  But is this a helpful model for motivating programmers?  I doubt it.

Writing good code won’t come from a call to action based on fear of the next guy.  It might come from a sense of empathy.  Today I want to lay out a more concrete version of the above quote that I think might be more likely to change your behavior.

Nobody Reads the Code Just Because

At the beginning of a project, it’s reasonable to expect that someone, or even everyone senior, knows everything that’s going on in the code.  As features accumulate, as more hands become involved, as coordination becomes something you have to actively work for, this begins to fade.  Pretty soon it’s just the leads.  It’s not long before people become unfamiliar with bits of the code.  Now you start witnessing conversations where a lead says “This is how the code works,” and someone gets an uncomfortable look and says, “Well, no, that’s not how it works anymore…”.

People Read the Code Because There’s a Problem

The backlog is full and there are deadlines to make.  Nobody is spelunking in parts of the application code for their own amusement, and only rarely for their own edification.  Most of the time, if they’re in a bit of the code they are there for a reason: There’s a problem and they are hunting for it.  We don’t just have problems when we’re picking bugs off the backlog.  We have problems every time the code doesn’t do what we expected the first time we try something.  In fact it’s quite likely that most of your week is spent looking into problems.

Faced with a problem, the person may have only a rough idea of which part of the code is involved, so they may have hundreds or thousands of lines to go through.  They are searching for something that ‘looks funny’: when something goes wrong, it’s because something unexpected happened, so they look for unexpected behavior in the code.

Debugging is a process of eliminating false positives until you arrive at the root cause. This is the biggest reason that Code Smells are a big deal, and not just some bit of aesthetic frippery or hokey religion.  Every bit of smelly code they look at becomes suspect.  They have to eliminate it as the cause.

If an important data flow is full of ‘clever’ code and old tech debt, it has the potential to ‘nerd jack’ every developer who comes near it, every time there’s a problem.  But as the author of this insult you probably won’t ever hear about it. By the time the developer hunts down the root cause, they’re so tired and so relieved the pain is over, that they just fix the problem, claim victory, and go off somewhere to recuperate.  In my case I will throw down a ‘git blame’ and watch out for patterns.  If I suddenly become bristly toward you for no reason you know of, this is probably why; your code has caused me (or someone I’ve been helping) enough grief that I find it difficult to trust you now.

Why Write Clean Code? As a Kindness

With this in mind, I’d like to offer a different guideline.

Code as if the next person who reads it is having a bad day.

If they aren’t having a bad day already, it might turn bad once they look at your code.

In another post I’ll go over some of the Code Smells that I believe contribute to this class of problem.





Posted in Development, Uncategorized | Comments Off on Sympathy for the Reader

Promises in Angular

Last year, I gave a small presentation on Promises in AngularJS, with an interactive element.  It’s a little bit dated, and a million people have now written about Promises, so I’ll leave the basics and the crash course to Kris Kowal, and jump right to the part I thought was interesting: my examples.  Very few people put their examples on JSFiddle for the audience to experiment with, to see what breaks – or doesn’t – when you make  alterations.

Working on my resume and portfolio recently, I stumbled across this presentation I did about 18 months ago.  I had completely forgotten about it or I would have done something with it much sooner.  It was an AngularJS project, and the template was set up by someone who was, like most of us, new to Angular.  Unfortunately he left us a legacy of some bad decisions.  One was an odd directory structure (which I couldn’t fault him for, since there are half a dozen options and nobody can agree on which one to use), but the worst was that he left us with the Deferred Antipattern, and everyone new just copied it without a second thought.
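That antipattern can be sketched with native Promises (AngularJS used `$q.defer()`, but the shape is identical); `fetchUser` and the other names here are hypothetical stand-ins for any promise-returning call, such as `$http.get`:

```javascript
// A stand-in for any promise-returning call, e.g. $http.get.
function fetchUser(id) {
  return Promise.resolve({ id: id, name: 'Ada' });
}

// Antipattern: manufacture a deferred just to relay a promise you
// already have. Worse, if fetchUser rejects, nothing ever rejects `p`,
// so errors are silently swallowed.
function getUserNameBad(id) {
  let resolveFn;
  const p = new Promise(function (resolve) { resolveFn = resolve; });
  fetchUser(id).then(function (user) { resolveFn(user.name); });
  return p;
}

// Better: .then() already returns a promise for the transformed value,
// and rejections propagate for free.
function getUserNameGood(id) {
  return fetchUser(id).then(function (user) { return user.name; });
}
```

The shorter version isn’t just less typing; it keeps the rejection path intact, which is exactly what the copy-pasted deferred version loses.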

And it’s not like I wasn’t making mistakes too.  Like a lot of people with a JQuery 1.8-ish background, I came with expectations, not all of which were correct.  The presentation was supposed to be about some anti-patterns and the things I’d learned, but in the process of prepping the examples, I discovered that two of my assumptions were incorrect.  With that new information I realized that a few of my unit tests were failing silently because of them.

JQuery Promises Match No Spec

With all of the new Single Page frameworks, and the Node community being its vibrant self, a lot of proposals have been floated to provide a standard Promises implementation in Javascript.  It’s telling that not one of them had the same semantics (and certainly not the same naming convention) as JQuery.  JQuery has since changed to implement the Promises/A+ spec, but unfortunately that still means a lot of us have things to unlearn.

Where AngularJS and JQuery Agree

As in almost all Promises implementations, you can register callbacks for success, for failure, for both, and for errors.  You can compose promises, and you can manufacture your own promise to do asynchronous work (maybe processing or collating several REST responses).  You  can register a late callback, that is, a callback for a Promise that has already been fulfilled, and you can be assured that the handlers will be called in the order they were registered.

Where the Similarities End

In JQuery, a callback on a resolved promise is fired synchronously.  This means that on a late binding (e.g., in the case of caching), any code after the line registering the callback will run after the handler has already been called.  This can create some subtle bugs, and in the case of composing promises can even result in race conditions that only show up under server load, or on a slower or faster machine (or network connection).

In Angular, all callbacks are asynchronous.  Even with a resolved promise, the callback will fire in the same order it would if you had to make a network request.  That whole class of bugs and the attendant unit tests evaporates (note: this comes at the price of more boilerplate in the rest of your unit tests).
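The ordering difference can be demonstrated in a sketch.  The toy `syncThen` below is a hypothetical stand-in for the old jQuery-style synchronous firing (not jQuery’s actual API); native Promises behave the A+ way, like Angular’s `$q`:

```javascript
// Old-jQuery-style: a late callback on an already-resolved value
// fires synchronously, before the next line of your code runs.
const syncOrder = [];
function syncThen(cachedValue, callback) {
  callback(cachedValue); // already resolved, so fire right now
}
syncThen('cached', function () { syncOrder.push('handler'); });
syncOrder.push('line after registration');
// syncOrder: ['handler', 'line after registration'] -- handler ran first

// A+-style promises always fire asynchronously, even when the
// promise is already resolved, so source order matches run order.
const asyncOrder = [];
const settled = Promise.resolve('cached');
const done = settled.then(function () { asyncOrder.push('handler'); });
asyncOrder.push('line after registration');
// once `done` settles: ['line after registration', 'handler']
```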

Angular promises can chain in a way that JQuery won’t, but this can confuse people who don’t expect it.  If the result of a network request conditionally requires a second network request to honor, you can resolve a Promise with another Promise, and the Angular internals will handle the callbacks.
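Here is a sketch of that chaining with native Promises, which flatten the same way; `fetchAccount` and `fetchArchive` are made-up stand-ins for two dependent REST calls:

```javascript
// Hypothetical stand-ins for two REST endpoints.
function fetchAccount(id) {
  return Promise.resolve({ id: id, archived: id < 0 });
}
function fetchArchive(id) {
  return Promise.resolve({ id: id, source: 'archive' });
}

// Returning a promise from a .then() handler makes the outer chain
// wait for the inner promise; callers never know a second request
// was conditionally issued.
const record = fetchAccount(-7).then(function (account) {
  if (account.archived) {
    return fetchArchive(account.id); // resolve with another promise
  }
  return account;
});
// record resolves with { id: -7, source: 'archive' }
```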

The Gotcha here is that the Angular Promises API looks like a fluent interface, but in fact it is not.  Every time you call a function on a Promise, you get a new Promise.  While this allows for some pretty cool use patterns, and prevents you from clobbering some things, it does have implications around what happens when two callbacks are registered.
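A sketch of that distinction with native Promises, which behave the same way here:

```javascript
// .then() does not mutate the promise it is called on; it returns a
// NEW promise. Sibling registrations on the same promise each see the
// original value; a chained registration sees the previous handler's
// return value.
const base = Promise.resolve(2);

const sibling1 = base.then(function (v) { return v + 1; });     // sees 2 -> 3
const sibling2 = base.then(function (v) { return v * 10; });    // also sees 2 -> 20
const chained  = sibling1.then(function (v) { return v * 10; }); // sees 3 -> 30
```

If you expected a fluent interface, `sibling2` looks like it should see the result of `sibling1`; it doesn’t, and that is the source of the surprise.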

What Doesn’t Work?

In Angular, one of the best uses of Promises is in Services that make REST calls, wiring them up to resolve() in a view.  Within resolve, these work pretty well, but they won’t work in a watch, or in an ng-bind variable.  Instead you’ll have to register your own callback and assign the answer to a scope variable if you want the page to update automatically.
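A minimal sketch of that callback-and-assign pattern, using native Promises; `getItems` is hypothetical and a plain object stands in for Angular’s `$scope`:

```javascript
// Hypothetical service call returning a promise of data for the view.
function getItems() {
  return Promise.resolve(['first', 'second']);
}

// The template can't bind to the promise itself, so copy the value
// onto the model when it arrives; in real Angular the digest cycle
// then re-renders the page.
const $scope = { items: [] };
const loaded = getItems().then(function (items) {
  $scope.items = items;
});
```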

Further Reading

My source material came predominantly from these links, and a lot of time spent experimenting in JSFiddle:

All of my examples are on my JSFiddle:  http://jsfiddle.net/user/jdmarshall/fiddles/

Posted in Development | Comments Off on Promises in Angular

A Brief History of Jason

More importantly, I thought all of the smart people were developing software this way.

Chapter 1:  Starting at NCSA

My first job in software was in college, working support for NCSA Mosaic.  To this day it’s still one of the biggest breaks I’ve ever caught, and it set a tempo of my career for at least the next ten years.

My first two summers I went back home and worked at the best bicycle shop in town, doing assembly and repairs.  I learned a lot about craftsmanship, good tools, and how to take care of your workspace, though it took a few more years for it to sink in fully.  But by junior year I felt like I needed to be more independent and took a year sublet on campus with some friends of friends.  I was a young man in a terrible hurry, and I felt that Junior Year was the time to start a job as a student programmer, so I went looking.

Things were looking pretty bleak.  My first interview was terrible.  I loved concurrency and networking and there was a research project trying to build concurrent extensions to C++.  I had almost no experience and was hoping that enthusiasm would suffice, but when the day came I blew it.  I slouched.  I wore shorts (It WAS summer in Illinois, and they were my nicest shorts, but they were still shorts), and I’m pretty sure they were hoping for a grad student.

So it was about 5 weeks into the summer, and I had no full time job.  Due to seniority and my inability to be anywhere by 8 am, I only had one shift a week working in the computer centers, plus whatever I could pick up covering absences.  I was living off of savings – savings that were supposed to be tuition money – and subsisting predominantly on vermicelli noodles, bulk olive oil, rosemary and parmesan from Sam’s Club.  My father was understandably concerned.  He started saying things that sounded a bit like ultimatums.

I started showing up at the Student Employment office, and one day someone had posted a laser-printed sheet of paper on the job board, at eye level, right in front of the elevators.  The opening was for a student, doing support for NCSA Mosaic for Windows.  I only vaguely knew what it was (not for lack of trying by my friends).  It wasn’t a programming job, but I knew it was a big deal and thought I could hack it.  I also thought if I could get my foot in the door I could become a programmer.  I knew Windows tech support, and email, and I could write, so I came in and did my best tap dancing and got the job.  If memory serves, Terry had me start either that day or the following morning.

Now if you’re keeping score at home, this is only a few months after Marc Andreessen famously convinced (some would say poached) a group of NCSA employees to quit their jobs and move across the country to work with him on a commercial web browser.  There were a lot of new faces working on the project, and the user base was still growing.  Rapidly.  My friend Ryan Grant had transferred in from another project and had been a developer for a couple months.  For fun I didn’t tell him I got the job, I just showed up for work and loved the double-take when he finally noticed me sitting around the corner.

I learned a bunch really fast.  For one, I had to be able to hand-parse HTML like nobody’s business, because most questions were about why pages weren’t rendering (sometimes it was our bug, mostly it was bad HTML). For another, I learned I had one of the highest paid student jobs on campus. Thanks in part to Marc, his compatriots, and an effort by NCSA to keep other people from following their lead, I was making $.50 an hour more than you on day one, unless your job was legitimately dangerous. And it only went up from there. I had my tuition money and a bit of spare time to boot, because I didn’t have to work 40+ hours a week if I didn’t want to. Among other things, I coded as a hobby.

My boss had hired me because he needed to spend more time doing managerial things and less time answering email.  They were behind on email, and it only got worse as time went on.  Almost immediately I discovered that I do not enjoy solving the same problem twice.  When I noticed that a lot of people were asking roughly the same questions, I put extra effort into a response and then saved a copy off to a directory.  Because we were all using the same email account, this turned into a library of canned responses pretty quickly.

Every week I would suggest something new.  Pre-screening.  Categorizing messages.  A vacation message with links to self-help pages. Looking for replies from people saying they’d solved their own problems.  Bankruptcy (Starting with the latest, and throwing out all messages older than a certain time).  Participating on Usenet so that we could troubleshoot problems faster, and so others could read the archive.  Terry hired a couple more students and left me in charge of them.  We would get close but never actually catch up.

It turned into a Red Queen problem.  Every time we brainstormed a new way to answer email questions faster, it would get cancelled out by the larger and larger user base.  Eventually this led to the first piece of production software I wrote.  Mosaic had its own email client, but if you didn’t follow directions it was easy to send us email with an invalid or nonexistent return address.  I wrote a csh script that would look at our bounce folder for permanent failure messages and create a blacklist of email addresses that we would filter into a separate mailbox and scan through once a day (I was always worried that a customer we couldn’t reply to would identify a really critical bug).  Across all modes of failure, almost 10% of our replies would bounce, so I cut our workload by 10% with half a week’s work.

At some point, fairly early on in this process, it occurred to me that the only way we could stem the flow of email was if the ‘support’ team got into the business of preventing emails in the first place.  Many of our users had the same problems, and a lot of them could have been detected before we ever shipped software.  I convinced Terry that we really were QA in addition to Support, and that we should put more energy into the Assurance part of that job.  So we started signing off on builds, then I started showing up to design and planning meetings.

I found a deep respect for the testing process that most developers have to learn slowly (and some not at all). Thinking of all the ways a piece of code could misfire is really hard, but essential to making a quality piece of code. And it’s better to find a problem early than in production, or after you’ve switched to something else.

Then Netscape version 1.0 was released, and everything went to 11.  For me, this was the First Browser War.  Marc A. and Eric B. might argue (and if memory serves, Eric does) that Mosaic unseating Cello was the first Browser War, but that happened before people started calling them Browser Wars.  The Spyglass founder tells people they were involved, but they were just a blip on our radar until much later.  Most people when asked will think of it as IE vs Netscape, but if anything that was the Third War.

Both teams were releasing new versions every 4-8 weeks.  They would release a feature, and if we liked it we’d copy it.  If we had a feature they liked, they would copy it.  Because the pace was so breakneck, it got to the point where people on Usenet couldn’t keep straight who was borrowing from whom.  We got credit for features they introduced, and they got a lot of credit for things we did.  We were mining our email list, Usenet, and the competitor’s applications for feature requests.

I joke that I introduced the stop button to browsers, but it’s sort of true.  The animated Mosaic Logo would stop a page load, but nobody understood you could click on it.  We got so many questions asking how to stop a page load that I realized we’d failed on UX.  So I suggested we add a second button with a big stop sign on it to make it easier.  Netscape liked the idea so much they also put in the stop button, and then changed their logo to go to the Netscape home page instead.  We stubbornly kept the old behavior (In fact we doubled down and Carl added a much fancier animation of the logo).

We were a team about a third of the size of Netscape, working for a University, and almost – almost – keeping up with Netscape. Because we were free, supported by licensing fees and grants from the National Science Foundation, they had decided to be free as well. For better or worse, I believe that decision as much as anything set a tone for the Dot Com Bubble.

Things were moving fast and we couldn’t keep track of everything.  There was a white board in the hall on the outside of Terry’s office.  It was right at the ‘L’ between the room the devs sat in and the rest of us.  Effectively, if you left your desk you had to walk past this board. You saw it all day every day. We started writing up all the tasks and bugs. If it changed you’d eventually notice even if someone forgot to tell you. Developers would put their initials next to the one they were working on. When they were done they’d draw a box next to it.  When we had tested it we’d put a checkmark or X in the box and it was good to ship.

Essentially, this became our Kanban Board, of a sort. Of course nobody I knew had any clue what Kanban was then.  We had never heard of Toyota Quality System, and David Anderson, who I would actually interview years later to be my new boss, was still off in Asia Pacific working with Jeff de Luca and trying to dream up TQS for programmers.

We were doing an Agile Development project in 1995 because I thought it sounded like a good idea and was scarily good at asking questions that lead people to interesting places. More importantly, I thought all of the smart people were developing software this way. It never occurred to me that we were onto anything special. I suffered for it, greatly, and at length. Two jobs afterward, I ran into another person who totally understood what I valued. For the next ten years I went around looking for people who didn’t think I was crazy, and in that time I only found two other people I was prepared to call ‘mentor’.

Eventually all good things must end. I had some health problems in the following fall, and did a slow fade for a couple months. I never got to thank Terry for not firing me. I was supposed to be working half time during school, but some weeks I was only doing 10-15 hours or working at very odd times, like 9pm to 2am, and I was too proud to talk to him about it. He knew something was wrong, and he gave me time to sort it out. But external factors soon eclipsed personal issues.

For one, the National Science Foundation was having trouble justifying supplying grants to compete with a commercial venture. We had to talk very fast a few times and pitch new avenues of development. Things got worse when Netscape began working on addons, Javascript became a thing, and we didn’t have anyone ready to tackle that sort of time investment. We had given sub-licensing to Spyglass, Microsoft had bought a copy, and IE 1.0 was on its way. Everyone on the core team behaved as if they believed it wasn’t over, but a lot of the support and management were having their doubts. Lots of meetings were held about The Next Mosaic for NCSA. People kept encouraging me to become a student developer (one kid had already jumped the queue on me), and I had started doing perf analysis and finding allocation problems in the code, but I just wasn’t sure there was much future left in it.

But despite that, the Mosaic experience put a fire in my belly about The Next Big Thing, just as the whole World Wide Web experience put the phrase The Next Big Thing in people’s heads.

Just a few months into 1994, my then-girlfriend sat next to me while I finished homework in the computer lab, and had for weeks been working on her ‘home page’, whatever the hell that was.  My friend Ryan, upon transferring into the Mosaic group, had tried to show it to me shortly after they added image support.  He seemed pretty disappointed by my “is that it?” response. After I started working there I was a little embarrassed by the whole thing, and I hate being embarrassed.

At the end of my time on the project I had been teaching myself Windows MFC and multithreading by writing an Othello game that kept calculating its next move while you took your turn. The API was so frustrating that I was beginning to despair of what my future was going to look like. So much so that I dreamed out loud a few times about finding some people and writing a cross-platform UI that didn’t suck. Someone overheard me and asked if I’d heard of this brand new language called Java. Oh, and by the way, there was a student programmer opening upstairs if I was interested in getting paid to learn it.

I applied. I didn’t think my interview went very well but I got the job anyway.

Posted in Development, Life | Comments Off on A Brief History of Jason

No Time but Now

Yesterday my dog turned 3 1/2 years old. This morning I took him for what is probably his last walk.  He’s slipping away and there may be none tomorrow.

For three years our Saturday walks were a cornerstone of my physical and mental fitness routine. Every Saturday morning he would bug me to get out of bed and take him on our two mile walk. When he thought he had figured out the concept of a weekend it became every day when I was home and the sun was out. He is a tremendous creature of habit and the family joke is that he’s my dog on the weekends. We would walk somewhere to a cafe, eat cookies and meet new people.  People who instantly fell in love with him.

Somehow we convinced this dog that the world is exactly how we wish it could be.  That everyone you see is just a friend you haven’t met yet, that people and other dogs are basically good and if you give them half a chance they will open up like a flower. That the worst thing anyone can do to you is just ignore you.

A lot of my personal growth over the last few years has been in trying to be present in the moment. Coming to grips with the notion that all we have is Now, and that in most situations it’s the most important thing to worry about. There is nothing like an easy going pet to remind you of this fact, and I got a sometimes not so gentle reminder that whatever I thought I was doing couldn’t be nearly as important as going outside, right now, with my dog.

We had plans, he and I. At three you have a clear notion of what your pet is capable of, and what they enjoy. Now that I’m in better shape, we were going to go hiking this summer. I was going to finally teach him to swim, even if I froze to death in the process. Now that’s all gone. What’s left is the things we did accomplish together, and the time we are spending with him now. And that’s okay.

We are of an age where loss becomes more common. In just the last two years our family has lost three pets. My two cats passed from old age, and most recently our dog’s older brother died of a rare reaction to a medication. In all of those cases there were feelings of guilt, where due to circumstance they did not always have the life we wished for them. I have never felt that way with him. Everything he ever needed he got. Except for a long life.

This is all just a long winded way to get back to the subject matter, which is this: You don’t know what time you have. If there is something you should be doing, do it. Or at least do something. What can you do today that might get you there? Do it. One day at a time is really all you have. Please use it.


Best dog

Good night, sweet prince.

Posted in Life | Comments Off on No Time but Now

The Cobbler’s Children Have No Shoes

I don’t hear this phrase in professional settings very often, unless it is coming out of my own mouth, but up until recently it was true that the gulf between the software we were expected to create and the tools we had available to create it seemed insurmountable.  Only in the last five years has this changed very much, and there’s still a lot we’re left wanting for, like the Cobbler’s proverbial children.

If you’re surrounded by ugly software all day, with little or no UX considerations, and told that you’re not a Real Programmer if that’s not appealing to you, that you should just suck it up and white knuckle it like everybody else, it takes tremendous reserves and resolve to rise above that.  I think for a rare few of us it’s galvanizing, but for many of the rest it seems to grind them down.  Death by a thousand cuts.  I see people all the time who merely aspire to create things that are adequate to the task.  In fairness they are simply emulating the world around them, and I’m sure that explains why they sometimes get uncomfortable or defensive when it’s apparent that someone expects more from them.

I suppose it shouldn’t really be that surprising to me that our tools lag behind our abilities.  In the physical world, given a good enough tool a proper craftsman is expected to use it to make something even more amazing than the tool itself.  Otherwise how would you have made the tool in the first place?

But at the same time I think these sorts of thoughts, I also know full well that I personally own tools that are often far prettier than anything I actually make with them.  I reconcile this by acknowledging that the things I do are a hobby.  That I am in fact an amateur when it comes to food prep, carpentry, landscaping, computer repair.

However, the first job I was ever proud of was as a bike mechanic.  I still have a drawer full of obsolete 20 year old Park wrenches in a tool chest in the back of my closet.  When people compare physical tools with software tools we often think of it as an analogy, but I have seen how the other half lives and I don’t consider it an analogy at all.  All the rules that apply to a high carbon steel wrench with surfaces ground to tight tolerances have corollaries in the software world.

A poorly thought out tool can create an expensive mess that the craftsman bears the responsibility to clean up.  “A good craftsman doesn’t blame the tool.”  We have all heard that a million times.  But we tend to omit the important part: the craftsman will seek out a new, better tool and chuck the bad one in the trash without a second thought.  That is what it means not to blame the tool.  A tool has no judgement; I do.  It means it’s my fault for keeping the blasted thing around when I knew the damage it was capable of.

Every craftsman I’ve known about has at least one tool they’ve made for themselves.  In some cases it’s something simple, like modifying an existing tool, but in many cases they will pour some time and care into making it, so that they can use it for years to come.  If you’re really nice they might make you one, or at least teach you how to make it.  I’m starting to see that sort of behavior more reliably in the software world.  I don’t know why it’s happening now.  I have some theories, but they’re only theories.  I am certain though that I’m grateful it’s happening and hope it continues forever.

There is still so much more to do.

Posted in Uncategorized | Comments Off on The Cobbler’s Children Have No Shoes

It’s Time We Rethought ‘Responsive’

We know that these days a lot of our users are trying to access our websites from their phones. They may need some information, or they may just be bored and stuck somewhere without a larger screen. We also know that a lot of the rest of our users are on tablets or laptops, and that for some of us, how our designs look on projectors matters (projectors are limited in contrast in addition to real estate).

We’ve been designing our little hearts out trying to make our websites “responsive”, and today a lot of the Web looks pretty good at 640 and 1024 pixels wide. It’s been a lot of work, and everyone who has pulled it off deserves to be a little proud of themselves. I say “a little proud” because we’re completely dropping the ball in other areas of responsiveness.

What we don’t seem to know at all is that people also have desktop computers with really big displays. Really, truly, mind-bogglingly big displays. So here’s a pretty uncomfortable situation: my secondary monitor is 1920 pixels wide. The one that isn’t good enough to be in front of my face anymore. The hand-me-down. Today you can get giant panels that are 4000 or even 5000 pixels wide. Even if you use scaling, that’s a 2000-2500 pixel wide screen for high end users. I’m slumming at a mere 2500 pixels myself, with no scaling. I have expensive hobbies, and I thought The Nature Conservancy needed the money more than I needed a bigger monitor. And besides, it wouldn’t fit on my desk anyway.

Sadly, with very few exceptions, the websites I visit don’t seem to make any use at all of a screen wider than about 1000 pixels. A few appreciate a little elbow room and top out at about 1100 or 1280 pixels, but most stop around 1000, and some don’t even get that far. This is the Daring Fireball website:

[Screenshot: Daring Fireball, maximized on my screen]

That’s what it looks like maximized on my screen. Let me be clear: I’m not picking on DF. Their layout is what inspired me to finally commit some ideas to paper, but this has been bugging me for a while. The project tool I worked on back in ’12 looked just terrible above 1400 pixels, and we couldn’t really come up with anything to do with that space. And that was knowing at the time that Nielsen was suggesting you tune your site for 1440 pixels.

And of course, in all fairness, if you look at my blog on my monitor, it looks like this, which I confess is only marginally better than DF:

[Screenshot: this blog, maximized on my screen]

The management and editor views make a little more use of the space, but only a little:

[Screenshot: the management and editor views]

What happened? We were in a Dark Age of 1000-pixel-wide site design when Mobile First and Responsive Design became common practice, and here we are several years later and the entire web is 1000 pixels wide – even parts that worked at 1200+ pixels previously. Google has gotten narrower than ever. I was a little shocked when I actually bothered to check. Are you sitting down? Even though the page is supposedly 980 pixels wide, the actual content only fills 512 pixels. Google search results are five hundred and twelve pixels wide, today, in 2015. If these examples are not the very literal definition of regressing, I don’t know what is.

I can fit two websites side by side on my desktop without losing anything from my web browsing experience. In fact, there are plugins for Firefox and Chrome that let you show two tabs side by side in a single window. I can do this, but with the exception of some chat windows at work, I really shouldn’t have to. Certainly we can do better than this, can’t we? At least, I think we can.
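For what it’s worth, spending the extra width doesn’t require anything exotic: the same media queries we already use for small screens work in the other direction too. A minimal sketch (the class names and the 1600-pixel breakpoint are my own illustration, not a recommendation):

```css
/* The familiar default: a single centered column, capped around 1000px */
.content {
  max-width: 960px;
  margin: 0 auto;
}

/* On wide desktops, use the room instead of wasting it: widen the main
   column and pull supporting material up alongside it */
@media (min-width: 1600px) {
  .content {
    max-width: 1440px;
  }
  .content .sidebar {
    float: right;
    width: 30%;
  }
}
```

The hard part, as the project tool taught me, isn’t the query; it’s deciding what’s actually worth putting in that space.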

Posted in UX | Tagged | Comments Off on It’s Time We Rethought ‘Responsive’

QA and Dev: One Team

This is part 4 of my Lean and Agile, in Reverse series

To get an idea of how this Agile QA process is working, let’s first contrast it with our old friend Waterfall. This is not in order to bag on Waterfall per se. I hope that by making this comparison, I’ll illustrate for you just how much leverage you can get from preventative measures, like sending frequent, working, installable builds to the QA team. I’ve used this technique inside several ‘Waterfall’ teams to great effect.

Starting on the Wrong Foot

In the average case of a waterfall project, there is a long anxious period every release where QA has ‘nothing to do’.  There’s no new code to test yet, or very little. In fact if you’re doing any substantial rework to support new features, the app may not run at all for several weeks – or months.   Ultimately it’s a big game of Hurry Up and Wait.  Any reasonably conscientious human being in this situation will create work for themselves if management can’t provide direction.

Some will attend Requirements meetings. The right QA person in those meetings can save you months of rework at the end of a release cycle, so this is very helpful. But Requirements is a slow process; it can’t be a full-time job for everybody. It’s lots of short meetings intentionally spread out for logistical and psychological reasons. Often your worst decision is the last one you make in a long meeting, when you’re exhausted and not thinking straight.

Some others will look at the toolchain, upgrade things, or try new techniques and tools. But here too there are limits. There’s a lot of risk in completely swapping out your test automation from one release to the next, so you have to be conservative. Chaos and uncertainty are major stress points for many people.

The bulk of QA effort during this awkward in-between phase will go toward looking for bugs in old code, because finding bugs is what the boss is paying them to do, isn’t it? And of course they will find them, because the waterfall process puts tremendous pressure on the QA team to sign off on a release that they haven’t had an opportunity to test thoroughly. It doesn’t start out that way, but once the schedule slips, all eyes are on them to greenlight a build the moment the developers declare Feature Complete.

So now that the previous release is out the door, they can go back and test the things they wanted to test but couldn’t finish under such intense scrutiny. The problem for the Dev teams is that now a flood of bug reports is coming in for things they haven’t been thinking about, possibly for a long time. Even categorizing and confirming a bug requires a lot of attention, potentially reinstalling older versions of the software and tools.

If the news is bad enough, this can be demoralizing just at the moment when the team is trying to build momentum. The huge spike in open tickets makes the dev team look like they are not good at their job, when the truth is simply that QA is being more productive. This tension, added to any anxiety the QA team has about appearing unproductive, can foster a feeling of antagonism between Dev and QA, and a sense that there are two teams instead of One Team – the people trying to build and deliver a product. It’s easy for this to turn into a problem that management has to deal with.

The Beginning of the End

Now we fast forward a month or two, to when Dev tries to deliver the first working feature to QA so they can start exercising their muscles. QA may already be grumbling about delays and risk to the schedule, so it’s Dev’s turn to feel the scrutiny. They may expend a great deal of energy pushing to get a working build to QA, and odds are that the first release to QA won’t work. Some dependency doesn’t update or is missing. The documentation for the system configuration is missing some critical piece. Dev thought they had explained the workflow changes, but the explanation didn’t quite reach everyone. And it’s typical for at least one QA member not to follow the new instructions religiously, resulting in some hands-on tech support and a bunch of high-profile tickets being marked Not a Bug or User Error.

So we have a burst of energy expenditure from the Dev team to build a binary, then a whole lot more when the binary doesn’t quite work right for everyone in QA. This push to QA often lands at the worst possible time: the mystical, highly productive period of the release cycle that some call the ‘steep part of the S-curve’. Just when the developers are firing on all cylinders, here come the interruptions. It can be very frustrating for them, and this sometimes comes out in unhealthy interpersonal patterns.

The Way Out

When you iterate quickly with the QA team, a healthy rhythm gets established. Each pass through the delivery lifecycle looks much like the last, allowing a sense of ‘routine’ to settle in. People can more easily predict when it’s appropriate to interrupt you, and generally have some capacity to ‘look out for each other’. This allows space for a sense of camaraderie to develop, and cooperation leads to a happier, more productive, and perhaps most importantly, more resilient team.

Posted in Uncategorized | Comments Off on QA and Dev: One Team

See Also

During my absence, I did a bit of writing for my last employer: one piece about running Jenkins entirely in Docker (including automated tests), and another on Jasmine Matchers.

Running Jenkins headless is a pretty cool trick, especially for a young team, but I had a surprising amount to say about what goes into writing a good Matcher for a unit test framework. There are so many ways to get it almost right and cause your teammates a lot of grief as they try to figure out why their tests actually failed.
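To give a taste of the “almost right” problem: the biggest difference between a sloppy matcher and a good one is the failure message. A minimal sketch of a Jasmine 2.x-style custom matcher (the matcher name and range logic here are my own illustration, not taken from the article):

```javascript
// A custom matcher is a factory returning an object with a compare()
// function; compare() returns { pass, message }.
const customMatchers = {
  toBeWithinRange: function () {
    return {
      compare: function (actual, floor, ceiling) {
        const pass = actual >= floor && actual <= ceiling;
        // The part people get "almost right": when pass is true and the
        // matcher was negated with .not, Jasmine shows this message too,
        // so it has to read correctly for that case as well. And a message
        // that only says "expected true to be false" sends your teammates
        // digging through the suite to find out what actually failed.
        const message = pass
          ? 'Expected ' + actual + ' not to be within [' + floor + ', ' + ceiling + ']'
          : 'Expected ' + actual + ' to be within [' + floor + ', ' + ceiling + ']';
        return { pass: pass, message: message };
      },
    };
  },
};

// The compare function is plain data-in, data-out, so it can be exercised
// outside the framework as well:
const result = customMatchers.toBeWithinRange().compare(5, 1, 10);
// result.pass === true
```

Inside a suite you would register it with `jasmine.addMatchers(customMatchers)` in a `beforeEach` and then write `expect(5).toBeWithinRange(1, 10)`.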

If you have a moment, check them out. They’re worth a read.

Posted in Development | Tagged , , | Comments Off on See Also

The Testing Lifecycle

A continuation of my Lean and Agile, in Reverse series

We’ve now covered most of what I have to say about Production and how it benefits from fast iteration.  Next I’d normally talk about Staging, except that Staging servers can be expensive, and you’re more likely to own Staging servers if you believe in Continuous Delivery, which pretty much presupposes you believe in Agile. Which means I don’t need to sell you on any of this because you’ve already bought in.

Since Staging is the last phase of the QA process, we’ll talk about QA instead.

A Well Oiled QA Machine

QA knows what to test because the Dev Team knows what they’re working on. In fact because the cycle is so short, they can write it all down for the QA Manager. Since they know what they’re working on, they know what they’ve touched recently, which means they have useful opinions on what builds are worth testing, and what other functionality they might have hypothetically broken without meaning to.

Since the Developers have already run their own tests, QA almost never gets Dead On Arrival builds. The building and packaging has happened often enough that at least some of the developers are pretty comfortable with the process, and it only breaks when there’s been a recent process change, if at all. Since everybody knows the drill with build, package and (re-)deploy, the QA team is almost always testing the right build, so few false positives occur. And no DOA builds means less downtime for the QA team.

With a roadmap in hand, backed by a suite of automated tests and by the assurances and honor of a Dev Team that’s actually trustworthy, the QA Manager can allocate most of the available resources to testing the riskiest and most obvious things immediately, assigning only a handful to regression testing. Since the cost of regression testing grows very slowly between releases, the project can keep pumping out Quality builds for an extremely long time.

A short test cycle gets timely feedback into the hands of the Developers. The Developers get an opportunity to fix bugs before they’ve layered very much code onto a bad foundation, improving code quality and reducing turnaround time down the road (less Tech Debt). And because the focus is on recently changed code, the Developers have a reasonable expectation that new bug reports are related to the work at hand. This makes them more receptive to being preempted by a Late Breaking Bug Report, and reduces the calendar hours or days between first report and fix. And since any bug found in testing has to be re-tested, the short testing cycle pays off twice.

With all this maturity in place, the QA Manager has a pretty good idea of whether the build is fit for Release. In fact, the Development Team may actually cede responsibility for signing off on releases entirely to the QA team, since they have other things to worry about.

Posted in Development | Tagged , , | Comments Off on The Testing Lifecycle