Many teams want to understand how much of their code is being authored by Claude Code and other LLM agents. Knowing the provenance of your code helps inform decision-making in several ways:
Correlate the evolution of how much of your code is authored by developers vs. various LLMs with other outcomes (release cadence, defect rate, etc.)
Measure the speed at which features are launched when AI use is maximized
Find opportunities to use AI better when projects finish more slowly than expected
Get a repo-wide sense for what percentage of code was authored by LLMs (and could presumably be re-authored)
Understand which LLMs beget the least churn
As of Q1 2026, there isn't yet a universal standard for how to document which code is coming from LLMs, but there are rapidly evolving tactics that can be used to ensure that your team is on the leading edge of LLM measurement.
GitClear uses several heuristics to determine whether code was AI-authored:
Presence of Assisted-by: Model name in the commit message indicates that 30% of the changed code lines should be attributed to AI
Presence of Co-authored-by: Model name in the commit message indicates that 50% of the changed code lines should be attributed to AI
Presence of Generated-by: Model name in the commit message indicates that 100% of the changed code lines were authored by the model
Presence of gpt-4o@openai.com or another [model name]@[ai provider] as the commit author indicates the entire set of changes should be ascribed to the model that authored the commit
Commit message that begins with a model name, like Opus-4.6 Updates to documentation, will ascribe all changed lines to the recognized LLM model
Presence of Generated with [Claude Code Opus 4.5] will associate the changed lines with the model referenced in the brackets
Agent trace metadata blocks in the commit message. For example, a JSON block inserted into the commit message can convey which AI authored the commit.
In the documentation for a method, if a model is credited (e.g., # Claude Opus 4.5 generated this method) in the documentation preceding the function/method, the lines of that method are designated as "generated by the LLM model" for that commit
If an AI_CONTRIBUTIONS.md file is created (as requested by the CLAUDE.md file below), that file will be scanned for new additions. If a new contribution has been listed, GitClear will analyze it for any file mention, attributing changed lines to that file if it exists. If no specific file is referenced, all changed lines from the commit are ascribed to the model mentioned in the new "contributions" block
Note that GitClear is agnostic about the formatting of the LLM model name. Claude Opus could be equivalently referenced as Claude Opus, claude-opus-4.6, opus-4.6, or Opus 4.6.
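GitClear's actual parser is not public, but the trailer weights described above (30% for Assisted-by, 50% for Co-authored-by, 100% for Generated-by) could be applied with a sketch like this. The function name and regex are illustrative, not GitClear's implementation:

```python
import re

# Attribution weights described above: the fraction of changed lines
# ascribed to the model named in each commit-message trailer.
TRAILER_WEIGHTS = {
    "Assisted-by": 0.3,     # 30% of changed lines
    "Co-authored-by": 0.5,  # 50% of changed lines
    "Generated-by": 1.0,    # 100% of changed lines
}

def ai_attribution_fraction(commit_message):
    """Hypothetical helper: return the highest attribution weight implied
    by any recognized trailer in the commit message (0.0 if none)."""
    best = 0.0
    for line in commit_message.splitlines():
        match = re.match(r"^(Assisted-by|Co-authored-by|Generated-by):\s*\S", line)
        if match:
            best = max(best, TRAILER_WEIGHTS[match.group(1)])
    return best

msg = "Fix pagination bug\n\nCo-authored-by: claude-opus-4.6 <opus@anthropic.com>"
print(ai_attribution_fraction(msg))  # prints 0.5
```

Because GitClear is agnostic about model-name formatting, a production parser would also need to normalize the model name that follows the trailer; the sketch above only extracts the weight.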
Each LLM has different guidelines for how to set up repo files that direct the LLM to document what work was AI-authored. Below are suggested files to add to the project to optimize your measurement of "How much work came from LLMs?"
The most popular code-writing harness of early 2026 is Claude Code, as much for its curious, wary, and energetic agent as for its powerful Opus LLM back-end.
Anthropic has been slower than competitors to offer direct API access to Claude Usage data. As of Q1 2026, there is an Analytics API for Anthropic, but it is only available via their little-used pay-as-you-go offering, platform.anthropic.com.
In the meantime, while Anthropic finishes implementing a useful coding analytics API for its Claude.ai Team accounts, the best option for teams that want to understand Claude's impact on their repo is to ask Claude itself to participate in documenting its contributions.
Placing a CLAUDE.md file in the root of your repo lets you control Claude's documentation protocol, so that code-interpretation engines like GitClear can graph answers to the questions AI usage raises. Following is a template CLAUDE.md file that covers six layers of attribution:
Inline comments on every function/method written, with task description, prompt summary, date, and model
Git commit messages with [Claude] prefix and Co-authored-by trailer
New file headers as a top-of-file block for files Claude creates from scratch
AI_CONTRIBUTIONS.md — a running log of all non-trivial tasks, great for GitClear to parse
Test file annotations in describe blocks
PR description table listing per-file authorship percentages
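Layer four above, the AI_CONTRIBUTIONS.md running log, could be populated by a small helper like the following. The function name and entry format are illustrative assumptions; the heuristics section above only requires that each new contribution block name a model and, optionally, the files it touched:

```python
import tempfile
from datetime import date
from pathlib import Path

def log_ai_contribution(repo_root, model, task, files):
    """Hypothetical helper: append one contribution block to
    AI_CONTRIBUTIONS.md, listing the files touched so changed
    lines can be attributed per file."""
    entry = [
        f"## {date.today().isoformat()} - {model}",
        f"Task: {task}",
        "Files:",
        *(f"- {f}" for f in files),
        "",
    ]
    log = Path(repo_root) / "AI_CONTRIBUTIONS.md"
    with log.open("a") as fh:
        fh.write("\n".join(entry) + "\n")

# Demo against a temp directory standing in for the repo root
repo = tempfile.mkdtemp()
log_ai_contribution(repo, "claude-opus-4.6", "Refactor pagination", ["src/pager.py"])
print((Path(repo) / "AI_CONTRIBUTIONS.md").read_text())
```

Appending (rather than rewriting) the file keeps each task as a distinct "new addition," which is what the scanning heuristic above looks for.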
Copy the following and paste it into the root of your project to start generating documentation that can be interpreted by GitClear (if the pasted version contains "NBSP" characters, replace them with regular spaces):
Unlike CLAUDE.md, AGENTS.md can be placed in the root directory of your repo with the expectation that any and every LLM will use it, effectively as the system prompt.
If you're using CLAUDE.md, you can create AGENTS.md as a symlink to CLAUDE.md, or you can specify additional guidelines in this file to help LLMs write code that matches your repo conventions.
The contents of AGENTS.md can match the suggested contents of CLAUDE.md.
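The symlink route keeps the two files in lockstep, so guidance edited in CLAUDE.md is immediately visible to agents that read AGENTS.md. A minimal sketch (POSIX only; on Windows, copy the file instead), using a temp directory as a stand-in for the repo root:

```python
import os
import tempfile
from pathlib import Path

# Stand-in repo root with a CLAUDE.md already in place
repo = Path(tempfile.mkdtemp())
(repo / "CLAUDE.md").write_text("# shared agent guidance\n")

# Expose the same guidance to every agent as AGENTS.md
os.symlink("CLAUDE.md", repo / "AGENTS.md")

print((repo / "AGENTS.md").read_text())  # reads through the symlink
```

Using a relative link target ("CLAUDE.md" rather than an absolute path) keeps the symlink valid when the repo is cloned or moved.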