Many teams want to understand how much of their code is being authored by Claude Code and other LLM agents. Knowing the provenance of your code helps inform decision-making in several ways:

Correlate the evolution of how much of your code is authored by developers vs. various LLMs with other outcomes (release cadence, defect rate, etc.)

Measure the speed at which features are launched when AI use is maximized

Find opportunities to use AI better when projects finish more slowly than expected

Get a repo-wide sense for what percentage of code was authored by LLMs (and could presumably be re-authored)

Understand which LLMs beget the least churn

As of Q1 2026, there isn't yet a universal standard for how to document which code is coming from LLMs, but there are rapidly evolving tactics that can be used to ensure that your team is on the leading edge of LLM measurement.


Claude Code

The most popular code-writing harness of early 2026 is Claude Code, as much for its curious, wary, and energetic agent as for its powerful Opus LLM back-end.


Anthropic has been slower than competitors to offer direct API access to Claude usage data. As of Q1 2026, Anthropic does offer an Analytics API, but it is only available via its little-used pay-as-you-go offering, platform.anthropic.com.


In the meantime, while Anthropic finishes implementing a useful coding analytics API for its Claude.ai Team accounts, the best option for teams that want to understand the impact Claude is having on their repo is to ask Claude itself to participate in documenting its contributions.


CLAUDE.md

Placing a CLAUDE.md file in the root of your repo opens the opportunity to control Claude's documentation protocol, so that code interpretation engines like GitClear can graph out answers to the questions AI usage begets. Following is a template CLAUDE.md file that covers six layers of attribution:

Inline comments on every function/method written, with task description, prompt summary, date, and model

New file headers as a top-of-file block for files Claude creates from scratch

Git commit messages with [Claude] prefix and Co-authored-by trailer

Test file annotations in describe blocks

AI_CONTRIBUTIONS.md — a running log of all non-trivial tasks, great for GitClear to parse

PR description table listing per-file authorship percentages


Copy the following and paste it into the root of your project to start generating documentation that can be interpreted by GitClear (if the pasted version contains "NBSP" non-breaking space characters, replace them with regular spaces):

# CLAUDE.md AI Authorship Documentation Standards
 
## Purpose
 
This project tracks AI-generated code for attribution, auditing, and code quality
research. **You MUST document your authorship in every applicable way whenever you
write or significantly modify code.**
 
---
 
## 1. Inline Code Comments
 
Add a structured comment directly above every function, class, method, or meaningful
block you write or significantly modify.
 
### Python
```python
# [Claude] Task: parse and validate incoming webhook payloads
# Prompt: "add webhook validation with signature verification"
# Date: 2025-02-21 | Model: claude-sonnet-4-6
def validate_webhook(payload: dict, signature: str) -> bool:
    ...
```
 
### Java
```java
// [Claude] Task: retry HTTP requests with exponential backoff
// Prompt: "add retry logic with backoff to the API client"
// Date: 2025-02-21 | Model: claude-sonnet-4-6
public Response fetchWithRetry(String url, int maxAttempts) {
    // ...
}
```
 
### TypeScript
```typescript
// [Claude] Task: debounce search input to reduce API calls
// Prompt: "add debounce to prevent excessive API calls on keystroke"
// Date: 2025-02-21 | Model: claude-sonnet-4-6
function handleSearchInput(e: React.ChangeEvent<HTMLInputElement>): void {
  // ...
}
```
 
### JavaScript
```javascript
// [Claude] Task: normalize API response shape across endpoints
// Prompt: "write a utility to normalize inconsistent API responses"
// Date: 2025-02-21 | Model: claude-sonnet-4-6
function normalizeResponse(data) {
  // ...
}
```
 
### HTML / Templates
```html
<!-- [Claude] Task: render data summary table with sortable columns -->
<!-- Prompt: "create a table showing sortable records with pagination" -->
<!-- Date: 2025-02-21 | Model: claude-sonnet-4-6 -->
```
 
### CSS / SCSS
```css
/* [Claude] Task: responsive card grid layout for dashboard */
/* Prompt: "make the dashboard cards wrap gracefully on mobile" */
/* Date: 2025-02-21 | Model: claude-sonnet-4-6 */
```
 
**Rules:**
- Include this comment whenever you write new logic or substantially rewrite existing code
- "Substantially rewrite" means more than ~50% of the logic changed
- For minor edits (typo fixes, variable renames, comment updates), omit the comment
- If you later modify a function you previously authored, update the date
 
---
 
## 2. New File Headers
 
When you create a new file entirely, add a header block at the very top before any
imports or code.
 
### Python
```python
# =============================================================================
# [Claude-authored file]
# Created: 2025-02-21 | Model: claude-sonnet-4-6
# Task: Webhook validation utilities
# Prompt summary: "build webhook signature verification for incoming events"
# =============================================================================
```
 
### Java
```java
/*
* [Claude-authored file]
* Created: 2025-02-21 | Model: claude-sonnet-4-6
* Task: HTTP client with retry and backoff logic
* Prompt summary: "build a robust HTTP client wrapper with retries"
*/
```
 
### TypeScript / JavaScript
```typescript
/**
* [Claude-authored file]
* Created: 2025-02-21 | Model: claude-sonnet-4-6
* Task: Search input component with debounce
* Prompt summary: "create a reusable debounced search input component"
*/
```
 
---
 
## 3. Git Commit Messages
 
Every commit message you draft or suggest must include:
 
1. A `[Claude]` tag on the subject line
2. A short body describing what was generated and the originating prompt
3. A list of files you authored
4. The `Co-authored-by` trailer
 
**Format:**
```
[Claude] Add webhook signature validation
 
Generated by Claude (claude-sonnet-4-6).
Prompt: "add webhook validation with signature verification"
Files authored: src/webhooks/validator.py, tests/test_validator.py
 
Co-authored-by: Claude <claude@anthropic.com>
```
 
For commits that mix human and Claude-authored changes, use `[Mixed]` and note which
files each party wrote:
 
```
[Mixed] Add webhook validation and refactor auth middleware
 
Claude-authored: src/webhooks/validator.py, tests/test_validator.py
Human-authored: src/middleware/auth.py
 
Co-authored-by: Claude <claude@anthropic.com>
```
 
---
 
## 4. Test File Annotations
 
When you write tests, add a comment at the top of the outermost test block.
 
### Python (pytest)
```python
# [Claude] Generated tests for: webhook signature validation
# Date: 2025-02-21 | Model: claude-sonnet-4-6
class TestWebhookValidator:
    ...
```
 
### Java (JUnit)
```java
// [Claude] Generated tests for: HTTP client retry behavior
// Date: 2025-02-21 | Model: claude-sonnet-4-6
class ApiClientRetryTest {
    // ...
}
```
 
### TypeScript / JavaScript (Jest, Vitest)
```typescript
// [Claude] Generated tests for: debounced search input component
// Date: 2025-02-21 | Model: claude-sonnet-4-6
describe('SearchInput', () => {
  // ...
});
```
 
---
 
## 5. AI_CONTRIBUTIONS.md Log
 
Maintain an `AI_CONTRIBUTIONS.md` file in the repo root. After completing any
non-trivial task, append an entry using this format:
 
```markdown
## 2025-02-21 — Webhook signature validation
 
**Model:** claude-sonnet-4-6
**Files created/modified:**
- `src/webhooks/validator.py` (created)
- `tests/test_validator.py` (created)
- `src/config/settings.py` (modified — added webhook secret config key)
 
**Task:** Validate incoming webhook payloads using HMAC signature verification
**Prompt summary:** "add webhook validation with signature verification"
**Scope:** ~90 lines of new logic across 3 files
**Notes:** Uses HMAC-SHA256; secret must be set in environment before deploying
```
 
Add an entry for any task involving more than a few lines of logic. Err on the side
of over-documenting; these records are used for code quality and attribution research.
 
---
 
## 6. Pull Request Descriptions
 
When drafting a pull request description, always include an **AI Authorship** section:
 
```markdown
## AI Authorship
 
This PR includes code generated by Claude (claude-sonnet-4-6).
 
| File | Authorship |
|------|------------|
| src/webhooks/validator.py | Claude (100%) |
| tests/test_validator.py | Claude (100%) |
| src/config/settings.py | Mixed (~20% Claude) |
 
**Prompt summary:** "add webhook validation with signature verification"
```
 
---
 
## Quick Reference
 
| Situation | Required documentation |
|-----------|------------------------|
| Write a new function or method | Inline `[Claude]` comment above it |
| Create a new file | File header block + `AI_CONTRIBUTIONS.md` entry |
| Substantially modify an existing function | Inline `[Claude]` comment above it |
| Write tests | `[Claude]` comment in the outermost test block |
| Draft a commit message | `[Claude]` prefix + `Co-authored-by` trailer |
| Draft a PR description | AI Authorship table |
 
When in doubt, over-document. These records are used for ongoing research into AI's
impact on code quality and long-term maintainability.
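
Once Claude is maintaining AI_CONTRIBUTIONS.md in the format above, the log becomes machine-countable even before a tool like GitClear parses it. Here is a minimal sketch in Python; the regex patterns simply mirror the template's entry heading and file-bullet formats, and the sample log is a hypothetical stand-in:

```python
import re

# Hypothetical sample mirroring the AI_CONTRIBUTIONS.md entry format above
SAMPLE_LOG = """\
## 2025-02-21 — Webhook signature validation

**Model:** claude-sonnet-4-6
**Files created/modified:**
- `src/webhooks/validator.py` (created)
- `tests/test_validator.py` (created)
- `src/config/settings.py` (modified — added webhook secret config key)
"""

def summarize_contributions(log_text: str) -> dict:
    """Count logged tasks and the files they created or modified."""
    # Entries begin with a dated "## YYYY-MM-DD — Title" heading
    entries = re.findall(r"^## \d{4}-\d{2}-\d{2} ", log_text, flags=re.M)
    # File bullets look like "- `path` (created)" or "- `path` (modified ...)"
    files = re.findall(r"^- `[^`]+` \((created|modified)", log_text, flags=re.M)
    return {
        "entries": len(entries),
        "created": files.count("created"),
        "modified": files.count("modified"),
    }

print(summarize_contributions(SAMPLE_LOG))
# → {'entries': 1, 'created': 2, 'modified': 1}
```

The same approach extends naturally to grouping entries by date or model once the log accumulates history.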


AGENTS.md

AGENTS.md is another file that can be placed in the root of your repo to provide system prompt instructions for compliant LLMs. If you're using CLAUDE.md, you can create AGENTS.md as a symlink to CLAUDE.md, or you can specify additional guidelines in this file to help LLMs write code that matches your repo conventions.
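
If you take the symlink route, `ln -s CLAUDE.md AGENTS.md` from the repo root does the job. For a scripted setup, a small Python helper is equivalent (the filenames here are just the conventions discussed above, not anything these tools require):

```python
import os

def link_agents_to_claude(repo_root: str = ".") -> None:
    """Create AGENTS.md as a relative symlink to CLAUDE.md in repo_root."""
    agents_path = os.path.join(repo_root, "AGENTS.md")
    if not os.path.lexists(agents_path):  # don't clobber an existing file or link
        os.symlink("CLAUDE.md", agents_path)  # relative target survives clones/moves
```

Using a relative link target means the symlink stays valid when the repo is cloned or moved to a new path.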