More thoughts on "My skepticism towards current developer meta-productivity tools"

Re: My skepticism towards current developer meta-productivity tools

Using productivity metrics to measure individuals this way is akin to incident retrospectives that identify human error as the root cause. It’s performative.

I understand why Will is wary of putting a tool in managers' hands that allows them to know who is "falling behind on commit velocity." And it's even easier to understand his misgivings about developer stack ranking, a fear that has haunted programmers since at least Microsoft's evaluations of the early aughts.


The intuitive wariness that Will and all developers feel toward stack ranking is easy to understand. It's never a popular political decision to create "winners and losers," and those who land in the latter group will have bad feelings. They'll want to scrutinize the process that deemed them inadequate in search of unfairness.


Will and I share the dream that good measurement forms the bedrock of improvement and optimization. But how do you get the learning benefits of measurement without the drawbacks of stack ranking? There is an intrinsic tension between these poles. In my opinion, the design of the product goes a long way toward extracting the benefits and minimizing the downsides. "With clarity comes responsibility" seems to be the applicable principle here.


If you want to blame someone then just go ahead and blame someone, don’t waste your time getting arbitrary metrics to support it.

I'm not sure exactly what this bit means. First, it has been rare that I've encountered a context where a manager was actively looking to assign blame. Most managers I've known (and been) spend their time trying to get the most out of their team, which isn't advanced by assigning blame. But let's assume that due to bad managers, or particular circumstances, a manager is in the market to assign blame. How should they optimally go about that?


When Will recommends that we "just go ahead and blame someone," I don't think he is suggesting that blame should be assigned based on emotions or gut feelings. He presumably would want blame to be assigned in proportion to whatever we can uncover about "what went wrong?" When a big project falls behind schedule, what practical options do managers have to answer the question of "who to blame?"


That's a hard question to answer generically, because I've personally seen projects fail in a myriad of distinct and spectacular ways. But if I had to generalize, at a high level I'd start with something like "the project managers weren't able to successfully steer the developers to reach the business' targets." When you start peeling back all the layers that deserve blame for that predicament, there's usually no shortage to go around for the managers themselves. But since we're talking about how to get the most from the development team, how should we apportion the blame for that piece of the puzzle?


The question of "how do we apportion blame among the engineers?" is still too vague to be tractable, so it helps to invert it. Who's not to blame? I would argue it's the developers who are doing stuff like:


1. Not adding tech debt. Writing tests, adding documentation, not duplicating past work.

2. Working on the tickets assigned, or discussing why not. Design, product, and executive teams work hard to figure out what developers should be creating. If a developer disregards the objectives those groups create, that should be public knowledge. There are sometimes good reasons not to work on the assigned tickets: reducing tech debt, addressing customer issues, etc., but there can be a fuzzy line between that and pet projects.

3. Getting a lot done. Operating within parameters #1 and #2, who is getting the most done, in whichever unit you prefer to measure? (A rough sketch of what I mean follows below.)
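
To make the shape of that concrete, here is a minimal sketch in Python. Every field name, signal, and ratio in it is hypothetical, chosen only to illustrate how the three behaviors above could be rolled up per developer; it is not a recommendation of specific metrics or of any particular tool.

```python
from dataclasses import dataclass


@dataclass
class DeveloperSignals:
    """Hypothetical per-developer signals for one review period."""
    name: str
    tests_added: int         # proxy for "not adding tech debt" (item #1)
    docs_updated: int        # likewise item #1
    tickets_assigned: int    # planned work (item #2)
    tickets_completed: int   # planned work actually shipped (item #2)
    off_ticket_changes: int  # work outside the assigned tickets (the fuzzy line)
    changes_shipped: int     # raw throughput, in whatever unit you prefer (item #3)


def summarize(dev: DeveloperSignals) -> dict:
    """Roll raw signals into a few human-readable numbers for a conversation,
    not a ranking: the goal is to make the work visible, not to reduce a
    developer to a single score."""
    total_changes = dev.changes_shipped + dev.off_ticket_changes
    return {
        "name": dev.name,
        "hygiene": dev.tests_added + dev.docs_updated,                 # item #1
        "planned_ratio": (dev.tickets_completed / dev.tickets_assigned
                          if dev.tickets_assigned else 0.0),           # item #2
        "off_ticket_share": (dev.off_ticket_changes / total_changes
                             if total_changes else 0.0),               # item #2
        "throughput": dev.changes_shipped,                             # item #3
    }


if __name__ == "__main__":
    example = DeveloperSignals(
        name="dev-a", tests_added=14, docs_updated=3,
        tickets_assigned=10, tickets_completed=8,
        off_ticket_changes=4, changes_shipped=22,
    )
    print(summarize(example))
```

Notice that nothing in the sketch compares developers to one another; the summary is per person, which is part of how product design can extract the benefits of measurement while minimizing the downsides.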


That sort of foundation may or may not be what Will had in mind with "just go ahead and blame someone." If he would like to take a stab at his own answer to "who's not to blame?" it would certainly be an interesting list for those of us who think about this a lot.


It’s hard to write about engineering leadership in 2020 and not mention the research from Accelerate and DORA. They provide a data-driven perspective on how to increase developer productivity, which is a pretty magical thing. Why aren’t they being used more widely?

More details on these subjects:

Google's DORA performance survey: DORA DevOps Quick Check
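
For readers who haven't dug into it, the DORA research centers on four delivery metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service. The sketch below (in Python, using made-up record types rather than any official DORA tooling) shows roughly how a team could compute those four numbers from its own deployment and incident history.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median
from typing import List, Optional


@dataclass
class Deployment:
    """Hypothetical deployment record."""
    committed_at: datetime                   # when the change was committed
    deployed_at: datetime                    # when it reached production
    caused_failure: bool = False             # did this deploy degrade service?
    restored_at: Optional[datetime] = None   # when service was restored, if it failed


def dora_summary(deploys: List[Deployment], window_days: int = 30) -> dict:
    """Compute the four DORA key metrics over a window of deployments."""
    if not deploys:
        return {}
    lead_times = [d.deployed_at - d.committed_at for d in deploys]
    failures = [d for d in deploys if d.caused_failure]
    restore_times = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    return {
        "deployment_frequency_per_day": len(deploys) / window_days,
        "median_lead_time_hours": median(lt.total_seconds() / 3600 for lt in lead_times),
        "change_failure_rate": len(failures) / len(deploys),
        "median_time_to_restore_hours": (
            median(rt.total_seconds() / 3600 for rt in restore_times)
            if restore_times else None
        ),
    }


if __name__ == "__main__":
    now = datetime(2020, 6, 1)
    history = [
        Deployment(now - timedelta(hours=30), now - timedelta(hours=2)),
        Deployment(now - timedelta(days=3), now - timedelta(days=2, hours=20),
                   caused_failure=True,
                   restored_at=now - timedelta(days=2, hours=18)),
    ]
    print(dora_summary(history, window_days=30))
```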

