Among the many improvements happening behind the scenes at GitClear over the past month, we've been laying the groundwork to collaborate with a professional researcher on a field experiment studying the real-world implications of Line Impact. To help familiarize researchers with Line Impact, we compiled a broad range of data points illustrating how Line Impact accumulates across daily-to-yearly scales.

As a refresher, this introductory video depicts how Line Impact is calculated by tracking how code lines evolve over the course of feature development. On the Line Impact Factors page, we describe some of the 15+ tactics used to strip out the roughly 95% of noise inherent in "lines of code" data.
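To make the idea of noise-stripping concrete, here is a minimal sketch of two hypothetical tactics of that kind: ignoring whitespace-only changes and discounting lines that were merely moved rather than written. This is an illustrative approximation only, not GitClear's actual algorithm; the function name and rules are invented for this example.

```python
def meaningful_changes(added, removed):
    """Return the added diff lines that plausibly carry new meaning.

    `added` and `removed` are lists of raw lines from a diff. The
    filtering rules below are hypothetical, for illustration only.
    """
    # Normalize removed lines so a moved line matches despite re-indentation.
    normalized_removed = {line.strip() for line in removed}
    meaningful = []
    for line in added:
        stripped = line.strip()
        if not stripped:
            continue  # whitespace-only change: pure noise
        if stripped in normalized_removed:
            continue  # line was moved, not authored
        meaningful.append(line)
    return meaningful
```

Raw "lines of code" counts would credit all three added lines below; this sketch keeps only the one that is genuinely new.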

Here is a roundup of what Line Impact values "look like" over several orders of magnitude:

40: Daily Line Impact for a junior developer across all the types of companies we measure (startups, mid-sized, and enterprise), taken together. Corresponds to 5-10 meaningful lines of code changed per day.

55: Daily Line Impact for the median developer at a mid-sized company. To date, mid-sized companies (those with 10-50 developers) are the slowest-moving companies we measure, though we expect this number to tick upward as we accumulate more data on such companies.

115: Daily Line Impact for the median developer at an enterprise company. This data combines measurements from enterprise-driven open source projects (e.g., Microsoft Visual Studio Code, Facebook React, Google Chromium) and commercial enterprise customers on GitClear who have opted into data sharing.

430: Daily Line Impact for a developer in the 90th percentile across all the types of companies we measure.

21,000: Total Line Impact for as of November 2020.

204,000: Total Line Impact for the note-taking app "Standard Notes" across its desktop, mobile, and web versions over the past 3 years.

280,000: Total Line Impact for Background Burner, a Ruby-based tool that uses OpenCV to automatically remove backgrounds from images, with 70-80% success on basic images. One of Bill's weekend hobbies is working to release Background Burner as an open source automated background-removal tool.

Here are two of the most interesting questions we're eager to fund research to understand more deeply:

👯 At what rate does developer output scale down as team size scales up?

Our data is unambiguous on this point: the smaller the team, the greater the per-developer output that can be expected. There are several reasons this intuitively makes sense. Developers at an enterprise company spend time on a multitude of non-programming tasks that startup developers largely bypass: estimating sprints, assigning sprint tasks, pair programming, reviewing PRs, coordinating concurrent work on a system, training new hires, tracking down bugs and tech debt introduced by junior devs, and building consensus on how to evolve the infrastructure. In so many ways, the deck is stacked against large development teams. Any technical leader at an enterprise (or mid-sized) company has to raise an eyebrow at the finding that the 90th percentile developer at a startup is expected to produce code output equivalent to a team of 8 median developers on an enterprise team. 👀

We're eager to set into motion the research that might untangle the factors behind this massive delta. If we could identify that a few enterprise practices (pair programming? concurrent work on a system?) are responsible for the lion's share of the "large team development penalty," we could help our mid-to-large-sized customers evolve their products closer to the pace of their nimble startup competition.

🐣 Is there a minimum threshold at which companies begin to achieve product/market fit?

One of the most surprising findings we discovered when looking at the four products our parent company, Alloy, has built over the last 15 years: they all hit inflection points in user adoption around the same Line Impact threshold. For us, that threshold was 500,000 Line Impact. Prior to reaching that 500k echelon (which tends to take a startup about 2-3 years of development), our products struggled to differentiate themselves from established competitors. We also wrestled with bugs that undermined our capacity to earn passionate advocates. Is 500k a fluke, dictated by the absence of sales and marketing endemic among Alloy products? Or can we experimentally demonstrate that there is a consistent value at which new products tend to "arrive" at their growth inflection point?

If our anecdotal results replicate, it could have profound implications for CEOs seeking to estimate how far they are from having a potent, user-beloved product. In all likelihood, there won't be a single number at which all companies hit product/market fit; but it wouldn't be at all surprising to find that there is a minimum threshold of Line Impact at which good business outcomes ensue, relative to the type of product being built. Intuitively, the more niche the product, the lower the Line Impact threshold necessary to unlock hypergrowth. For example, maybe it only takes 10k worth of Line Impact to hit product/market fit for a Shopify store with a desirable product, but 30k of Line Impact for a Chrome web extension to differentiate from its competition? Perhaps because our company is trying to build scalable, bug-free consumer products that compete against VC-backed companies, we have to hit the higher 500k threshold?

To the extent that we can establish a product-specific correlation between "a minimum Line Impact threshold" and "a profitable business," it could provide a consistent measuring stick for product builders in the trenches wondering how far they'll have to travel before they see the light of profitability.

👨‍🔬 Join us in collecting experimental data that can be shared with software builders?

Do you have a project that's rapidly approaching profitability? Or maybe a team that's not moving as fast as you'd expected? We're eager to work with companies that share our enthusiasm for experimentally proving out answers to questions like those above. By connecting data-driven companies with leading software researchers, we can allow software engineering outcomes to be predicted with a statistically supported dataset spanning thousands of projects. Drop us a line at if you know a researcher who could be a good fit to lead this push, or if you'd like to learn more about how your company stacks up against industry averages.