Every file you send to an AI costs money. CodeWalker shows you exactly how much each file costs in tokens before you send it, so you can make smarter decisions about what goes into your prompt and what stays out.
When you paste code into an AI assistant or send files through an API, you're charged by the token. Every character, every comment, every blank line, every import statement gets tokenized and billed. But most developers have no idea how many tokens a file actually costs until after they've already sent it and the bill arrives.
The result is predictable. People dump entire files into a prompt when they only needed one function. They include a 15,000-token utility file to ask a question about a 200-token helper. They send five files when three would have been enough context. They hit the context window limit and have to re-send with fewer files, paying for the failed attempt too.
This isn't a rounding error. At scale, it's real money.
Here's what the major AI providers charge per million tokens as of this writing:
| Model | Input Cost (per 1M tokens) | Output Cost (per 1M tokens) |
|---|---|---|
| Claude Opus 4.7 | $5.00 | $25.00 |
| Claude Sonnet 4.6 | $3.00 | $15.00 |
| Claude Haiku 4.5 | $1.00 | $5.00 |
| GPT-4o | $2.50 | $10.00 |
| GPT-4o mini | $0.15 | $0.60 |
These numbers look small until you multiply them by thousands of requests. A developer making 50 API calls a day, each with 10,000 tokens of context, is sending 500,000 input tokens per day. On Claude Sonnet, that's $1.50 a day just in input costs -- about $45 a month. On Opus, it's $75 a month.
Now imagine a team of ten developers doing the same thing. That's $450 to $750 a month in input tokens alone, before you count output tokens. And that's assuming everyone is being careful about what they send. Most people aren't.
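The arithmetic above is easy to put in a script. A minimal sketch using the prices from the table (the model keys are shorthand, and the 30-day month matches the text's estimates):

```python
# Rough input-token spend per developer, using the per-million-token
# prices from the table above (snapshots, not live prices).
PRICE_PER_M_INPUT = {
    "claude-opus": 5.00,
    "claude-sonnet": 3.00,
    "claude-haiku": 1.00,
    "gpt-4o": 2.50,
    "gpt-4o-mini": 0.15,
}

def monthly_input_cost(model: str, calls_per_day: int,
                       tokens_per_call: int, days: int = 30) -> float:
    """Input-token cost for one developer over `days` days, in dollars."""
    daily_tokens = calls_per_day * tokens_per_call
    daily_cost = daily_tokens / 1_000_000 * PRICE_PER_M_INPUT[model]
    return round(daily_cost * days, 2)

# The example from the text: 50 calls a day, 10,000 tokens of context each.
print(monthly_input_cost("claude-sonnet", 50, 10_000))  # 45.0
print(monthly_input_cost("claude-opus", 50, 10_000))    # 75.0
```

Swap in your own call volume and the table's prices to see where your team actually sits.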
- **Sending whole files when you need one function.** A 400-line file might tokenize to 8,000 tokens. The function you're asking about might be 40 lines and 600 tokens. That's 7,400 tokens wasted -- per request.
- **Including files "just in case."** When you're not sure which files the AI needs for context, the safe move feels like including all of them. But if two of those five files weren't needed, you just paid for 40% more tokens than necessary.
- **Not knowing which files are expensive.** A 200-line file with dense logic might cost 4,000 tokens. A 500-line file that's mostly comments and whitespace might cost 3,000. Without measuring, you'd guess wrong about which one costs more.
- **Sending images without checking their token cost.** AI APIs charge for images based on pixel dimensions, not file size. A 4K screenshot can cost over 11,000 tokens -- more than most code files. A cropped 800x600 region of the relevant UI costs about 640. If you don't know the difference, you're overpaying by more than 15x.
- **Re-sending after hitting the context limit.** When your prompt exceeds the context window, the request fails and you have to trim and try again. Most providers don't bill the rejected attempt, but you still pay in latency and lost flow -- and if the prompt fits but leaves too little room for the answer, you pay full input cost for a response that gets truncated.
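Most of these patterns can be caught with a cheap pre-flight check before anything is sent. A sketch, using a rough four-characters-per-token heuristic in place of a real BPE tokenizer (the `context_limit` default is a stand-in for whatever your target model allows):

```python
def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English and code.
    A real BPE tokenizer gives exact counts; this just flags outliers."""
    return max(1, len(text) // 4)

def preflight(files: dict[str, str], context_limit: int = 200_000) -> int:
    """Print a per-file cost breakdown and fail loudly before the API does."""
    total = 0
    # largest files first, so the expensive ones are impossible to miss
    for name, text in sorted(files.items(), key=lambda kv: -len(kv[1])):
        tokens = estimate_tokens(text)
        total += tokens
        print(f"{tokens:>8,}  {name}")
    if total > context_limit:
        raise ValueError(f"prompt is ~{total:,} tokens, over the "
                         f"{context_limit:,}-token limit -- trim before sending")
    return total

# Usage: preflight({"app.py": Path("app.py").read_text(), ...})
```

Seeing the breakdown before you send is the whole trick: the expensive file at the top of the list is usually the one you didn't need in full.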
CodeWalker's Recon view gives you three tools that make token costs visible before you spend a cent.
- **The Token Cost Overlay.** One click in the sidebar and every file and folder in your project gets a label showing its exact token count. Not an estimate -- a real BPE tokenizer count, the same algorithm the APIs use. Folders show the sum of everything inside them, rolling all the way up to the root. You can see at a glance that your utils/ folder costs 45,000 tokens and your components/ folder costs 12,000.
- **Token Towers.** One click and every file becomes a 3D column. Tall red towers are expensive files; short white columns are cheap ones. You don't even need to read the numbers -- the visual immediately tells you where the weight is. Scan your entire project in seconds and know exactly which files are worth sending and which ones would blow your budget.
- **The Token Cost Sort.** One click in the SORTING section and CodeWalker arranges every file in your project by token cost, most expensive at the center, cheapest at the edges. Folders line up in a front row sorted by total cost. The layout itself becomes a cost ranking.
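The folder rollup behind the overlay is easy to sketch. The version below substitutes a crude characters-divided-by-four heuristic for the real BPE tokenizer CodeWalker uses, and guesses at a minimal binary skip list -- just enough to show how per-file counts propagate to every ancestor folder:

```python
from pathlib import Path

SKIP_SUFFIXES = {".dll", ".exe", ".so", ".bin"}  # binaries carry no useful tokens

def estimate_tokens(text: str) -> int:
    # ~4 chars/token heuristic; a real BPE tokenizer gives exact counts
    return max(1, len(text) // 4)

def folder_costs(root: Path) -> dict[Path, int]:
    """Token cost per directory, rolled up to the root like the overlay."""
    costs: dict[Path, int] = {}
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix in SKIP_SUFFIXES:
            continue
        tokens = estimate_tokens(path.read_text(errors="replace"))
        # add this file's cost to every ancestor up to (and including) root
        parent = path.parent
        while True:
            costs[parent] = costs.get(parent, 0) + tokens
            if parent == root:
                break
            parent = parent.parent
    return costs
```

Run it on a project and the top-level entry is the cost of pasting the entire codebase into a prompt -- a number most developers have never seen.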
All three of these work on images too. CodeWalker estimates image token costs using the same width-times-height-divided-by-750 formula that the Claude API uses. Binary files like .dll and .exe are excluded entirely so the numbers stay meaningful.
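That pixels-divided-by-750 estimate is a one-liner. A sketch of the raw formula as the text describes it (note that real APIs may downscale oversized images before counting, so very large images can bill lower than this predicts):

```python
import math

def image_tokens(width: int, height: int) -> int:
    """Claude-style image token estimate: pixels / 750, rounded up.
    (The API may resize huge images first; this is the raw formula.)"""
    return math.ceil(width * height / 750)

print(image_tokens(1920, 1080))  # 2765 -- full HD screenshot
print(image_tokens(3840, 2160))  # 11060 -- 4K screenshot
print(image_tokens(800, 600))    # 640 -- cropped region
```

Two minutes with this function makes the case for cropping better than any argument.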
You're debugging an issue and want to ask Claude about it. You paste in three files: the component with the bug (2,100 tokens), the utility library it imports (11,400 tokens), and the config file (800 tokens). Total: 14,300 input tokens.
But the bug is in a single function. If you'd known the utility library was 11,400 tokens, you might have pasted just the relevant function from it (350 tokens) instead of the whole file. That drops your input to 3,250 tokens -- a 77% reduction.
Do this three times a day on Claude Sonnet, and you save about $0.10 a day -- roughly $36 a year per developer, from this one habit alone.
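It's worth sanity-checking that arithmetic. Using the scenario's token counts and the Sonnet price from the table:

```python
SONNET_INPUT_PER_M = 3.00  # $/1M input tokens, from the pricing table

full_prompt = 2_100 + 11_400 + 800   # component + full utility lib + config
trimmed_prompt = 2_100 + 350 + 800   # component + just the one function + config

saved_tokens = full_prompt - trimmed_prompt    # 11,050 tokens per request
reduction = saved_tokens / full_prompt         # ~0.77
daily_saving = saved_tokens * 3 / 1_000_000 * SONNET_INPUT_PER_M

print(f"{reduction:.0%}")          # 77%
print(f"${daily_saving:.2f}/day")  # $0.10/day
```

The percentage reduction is large; the dollar figure per request is small. It's the request volume that turns one into the other.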
You want an AI to review a pull request that touches 8 files. You send all 8 files in full. Total: 62,000 tokens. Cost on Claude Opus: $0.31 per request.
CodeWalker's token overlay shows you that 3 of those files are just test fixtures and config -- 28,000 tokens of context that adds nothing to a code review. Drop them, and you're at 34,000 tokens. Cost: $0.17. You just saved 45% per review.
A team doing 20 AI-assisted code reviews a week saves about $12 a month on Opus -- and at 20 reviews a day, closer to $85.
You're sending UI screenshots to an AI for feedback. A full-resolution 1920x1080 screenshot costs about 2,765 tokens. A 4K screenshot costs over 11,000 tokens. But a cropped 800x600 region of the relevant UI element costs just 640 tokens.
If you're sending 10 4K screenshots a day without cropping, you might be spending 110,000 tokens on images alone. Cropping to the relevant area drops that to about 6,400 tokens. At GPT-4o's input price, that's the difference between roughly $0.28 and $0.02 per day -- a 94% reduction. (Exact image token accounting varies by provider -- OpenAI uses a tile-based formula rather than pixels divided by 750 -- but cropping yields savings of a similar magnitude either way.)
*Typical input token reduction when developers can see file costs before sending them to an AI.*
Token savings compound in ways that aren't obvious at first.
When you send fewer input tokens, the AI processes your request faster. Faster responses mean shorter wait times. Shorter wait times mean you stay in flow instead of context-switching. That's a productivity gain on top of the cost savings.
Smaller prompts also tend to produce better results. When you include only the relevant files, the AI doesn't get distracted by irrelevant code. It focuses on what matters. Fewer hallucinations, more accurate answers, less back-and-forth -- which means fewer follow-up requests, which means even fewer tokens spent.
And when you can see that a folder costs 85,000 tokens, you start making architectural decisions differently. Maybe that monolithic utility file should be split into smaller, focused modules. Not just for clean code reasons, but because you'll be sending pieces of it to an AI every day, and sending 2,000 tokens instead of 12,000 tokens matters.
CodeWalker doesn't just save you money on today's API call. It changes how you think about the cost structure of your codebase.
Load any project into CodeWalker and open the Recon 3D view. The token tools are right there in the sidebar.
Toggle the overlay to see the token count on every file and folder. Turn on Token Towers for the visual heatmap of file costs. Use the token sort to rank everything by cost.
Before your next API call, glance at the token counts of the files you're about to send. Ask yourself: does the AI actually need all of this, or can I send just the relevant parts? The answer will save you money every single time.
Every token you don't send is money you keep. CodeWalker makes the invisible cost of your codebase visible.