GitClear offers AI usage metrics for three of the largest AI coding providers:
GitHub Copilot: Per-team and per-resource stats on "suggested prompt" vs "accepted" over time. Per-language metrics on Copilot efficacy. Per-LLM-model metrics on Copilot efficacy.
Cursor IDE: Per-team, per-developer and per-resource stats on "suggested" vs "accepted," plus "lines of code added" vs "deleted," "bugbot usage," "chat vs agent requests," and more.
Anthropic/Claude Code: Anthropic offers a couple of different APIs for gathering data, depending on whether or not the developer uses Claude Code. Stats similar to those for GitHub Copilot and Cursor are available for Anthropic.
To connect to an AI provider, visit the "Settings" tab for your resource and choose "AI Usage."

Under the "AI Impact" tab and "AI Usage" sub-tab, you'll find a variety of charts showing the extent to which the team has utilized LLMs during the selected time range.
When a developer explicitly prompts the AI with a question, what percentage of those interactions result in the developer utilizing the response they were given? What about when the AI makes a suggestion by showing lightened text that can be inserted by pressing tab? GitClear measures both, on a per-model (e.g., Claude Sonnet vs ChatGPT vs Gemini) basis:

This data is ideal for helping a team identify which LLMs have been producing the most applicable results over the past month. It is available via the Reporting API as ai_prompt_acceptance_percent and ai_tab_acceptance_percent. Per-model data is available for all AI providers except the default GitHub Copilot business tier, which labels all LLM acceptances as "Default Model" as of Q4 2025.
As of Q4 2025, GitHub Copilot does not return per-language stats, but Cursor and Claude Code do.
For users of GitHub Copilot, it is still possible to see overall counts for how any AI usage stat is trending over time; it just isn't possible to offer the full richness of model comparison available from AI APIs that return per-model data.
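If you prefer to pull these acceptance rates programmatically, a minimal sketch is below. The host, endpoint path, auth header, and response shape are illustrative assumptions only; the segment names ai_prompt_acceptance_percent and ai_tab_acceptance_percent match the stats described above.

```python
# Hypothetical sketch: compare per-model acceptance rates via a Reporting API call.
# The URL, auth header, and JSON shape are placeholders; only the segment names
# come from the stats described above.
import requests

API_BASE = "https://api.example-host.com"    # placeholder host
TOKEN = "YOUR_REPORTING_API_TOKEN"           # placeholder credential

def fetch_acceptance_by_model(resource: str, start: str, end: str) -> dict:
    """Return {model_name: {"prompt": pct, "tab": pct}} for the date range."""
    resp = requests.get(
        f"{API_BASE}/reports/ai_usage",      # hypothetical endpoint path
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"resource": resource, "start_date": start, "end_date": end},
        timeout=30,
    )
    resp.raise_for_status()
    by_model = {}
    for row in resp.json().get("segments", []):   # assumed response shape
        by_model[row["model"]] = {
            "prompt": row["ai_prompt_acceptance_percent"],
            "tab": row["ai_tab_acceptance_percent"],
        }
    return by_model
```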
Calculating the number of suggestions per AI provider requires applying judgment about what, exactly, counts as a "suggestion."

If you don't want to get lost in the weeds, simply think of "Tab Suggestion Count" as "How many times did the AI suggest code that the developer could insert by pressing 'tab'?"

Think about "Prompt Suggestion Count" as "How many times did the AI make suggestions to the developer, outside its suggestions on how to finish the line the developer is typing?"
In the Reports API, the count of ad hoc suggestions the AI made to the developer as they were writing code is known as tab_prompt_count. The sum of suggestions made to the developer in all other contexts is approximated as non_tab_prompt_count. These numbers can be multiplied by the corresponding acceptance rate to get the number of accepted suggestions for "tabs" and "prompts."
In terms of specific technical details, the Copilot "Prompt Suggestion Count" is derived by summing total_code_suggestions per editor, per model. For Cursor, it is derived by summing "chat requests," "composer requests," "agent requests," "applied suggestions," "accepted suggestions," and "rejected suggestions" into a grand total of requests (i.e., suggestions) and opportunities the user had to accept a suggestion.
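To make the arithmetic above concrete, here is a small sketch that multiplies the suggestion counts by their acceptance rates to approximate accepted suggestions. The input dictionary shape and the example numbers are assumptions; the keys mirror the Reports API segment names described above.

```python
# Sketch: derive accepted-suggestion counts from Reports API segments.
# The `segments` dict shape is assumed; the keys mirror the field names above.
def accepted_counts(segments: dict) -> dict:
    """Multiply suggestion counts by their acceptance rates (given as percentages)."""
    accepted_tab = segments["tab_prompt_count"] * segments["ai_tab_acceptance_percent"] / 100.0
    accepted_prompt = segments["non_tab_prompt_count"] * segments["ai_prompt_acceptance_percent"] / 100.0
    return {
        "accepted_tab_suggestions": round(accepted_tab),
        "accepted_prompt_suggestions": round(accepted_prompt),
    }

# Example with made-up numbers: 1,200 tab suggestions at a 30% acceptance rate
# and 400 prompted suggestions at a 55% acceptance rate.
print(accepted_counts({
    "tab_prompt_count": 1200,
    "ai_tab_acceptance_percent": 30,
    "non_tab_prompt_count": 400,
    "ai_prompt_acceptance_percent": 55,
}))  # => {'accepted_tab_suggestions': 360, 'accepted_prompt_suggestions': 220}
```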
One of the most frequently reported drawbacks of AI is its propensity to duplicate code. That's what makes it useful to keep an eye on how actual line changes are playing out among your teammates, on the "AI Impact & Usage Stats" tab.

How many lines have been deleted by each of the LLMs used by the team lately?
Typically, the "Lines added" are around 5x the "Lines deleted" for LLMs of the mid-2020s. Some day hopefully they will better figure out how to effectively recommend code deletion opportunities, since it is such a key to long-term repo health.

These segments are available in the Reports API as ai_lines_added_count and ai_lines_deleted_count.
Note that there is no guarantee that the developer will go on to commit the lines that they accepted from an AI. In fact, as often as not, developers will accept a block of code in order to get it into a state where they can start picking it apart -- by deleting large swaths that are non-applicable, or by making large modifications to the initially inserted lines.
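For teams that want to track the added-to-deleted balance programmatically, a brief sketch is below. The per-model input shape and the example numbers are assumptions; the segment names are the ones noted above.

```python
# Sketch: compute the added-to-deleted line ratio per model from the
# ai_lines_added_count and ai_lines_deleted_count segments. The per-model
# dict shape and sample values are assumed for illustration.
def added_deleted_ratio(per_model: dict) -> dict:
    """Return {model: lines_added / lines_deleted}, or None when nothing was deleted."""
    ratios = {}
    for model, counts in per_model.items():
        deleted = counts["ai_lines_deleted_count"]
        added = counts["ai_lines_added_count"]
        ratios[model] = (added / deleted) if deleted else None
    return ratios

print(added_deleted_ratio({
    "Claude Sonnet": {"ai_lines_added_count": 5000, "ai_lines_deleted_count": 1000},
    "Gemini": {"ai_lines_added_count": 900, "ai_lines_deleted_count": 0},
}))  # => {'Claude Sonnet': 5.0, 'Gemini': None}
```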
How many developers on the team are regularly using AI, and how many are not? These metrics help illuminate the AI adoption trend across the team. The Reports API segments for these are ai_engaged_committer_count and ai_inactive_committer_count.

The calculation of "Engaged" committers varies by AI provider. Since GitHub Copilot does not, by default, report the total number of active users per language (all of their per-language data reports "Engaged" users), we periodically query GitHub to assess when each developer was last active with AI. For all time after the customer has signed up, this allows us to derive a maximally accurate count of the number of developers participating in Copilot use on a given day or week. For historical data, we fall back to what GitHub reports as the "Engaged" user count, which they differentiate from "Active Users" by explaining in their documentation: "A stricter version of 'Active Users,' this tracks the number of employees who use a tool multiple days per month. The exact number should depend on your company's definition of what an engaged user should be. A growing number of 'Engaged' indicates that users are moving beyond initial experimentation and are beginning to form a habit."
The derivation of "Engaged AI Developers" for Cursor is simpler: since Cursor reports all stats on a per-developer basis, we can simply evaluate, across the history of AI usage, how many developers were active Cursor users per day or per week.
To calculate "Inactive AI users," we subtract the count of seated committers in a given time period by the number of users that were deemed to be "Engaged AI users"
When the AI API reports a cost ascribed to the developer's requests, it is returned as ai_cost_cents via the API. This does not include the base subscription cost that typically makes up the bulk of the monthly invoice for business customers of Copilot, Cursor, and Claude, which is why it is not included among the default stats presented on the AI Usage tab.
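If you want to roll up per-request costs yourself, a small sketch is below. The list-of-days input shape and the sample values are assumptions; the total reflects only the per-request costs returned as ai_cost_cents, not the base subscription.

```python
# Sketch: sum ai_cost_cents across a period and express it in dollars.
# The daily-segment list shape and sample values are assumed for illustration.
def total_ai_cost_dollars(daily_segments: list) -> float:
    """Sum per-request AI costs (in cents) and convert to dollars."""
    return sum(day.get("ai_cost_cents", 0) for day in daily_segments) / 100.0

print(total_ai_cost_dollars([
    {"ai_cost_cents": 1250},   # $12.50
    {"ai_cost_cents": 980},    # $9.80
    {"ai_cost_cents": 0},
]))  # => 22.3
```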
The all_suggestion_count segment aggregates all of the request and suggestion metrics available per provider. It can be thought of as the global barometer for how much AI use is occurring: it combines all tab and prompted suggestions.

This is the best overall gauge of the extent to which a team has actively applied AI to advance their development.
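As a rough sketch of that aggregation (assuming a simple input shape), all_suggestion_count can be approximated as the sum of tab and prompted suggestions:

```python
# Sketch: approximate all_suggestion_count as tab plus prompted suggestions,
# mirroring the description above. The input dict shape is assumed.
def approximate_all_suggestion_count(segments: dict) -> int:
    return segments["tab_prompt_count"] + segments["non_tab_prompt_count"]

print(approximate_all_suggestion_count(
    {"tab_prompt_count": 1200, "non_tab_prompt_count": 400}
))  # => 1600
```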