
Your team is probably using GitHub Copilot wrong. Here's what actually works.

Most teams already have Copilot. The real difference is whether they use it as a shared workflow or just another chat box.

ai-tools · github-copilot · workflow

Most companies that buy GitHub Copilot end up underusing it.

Copilot is probably the most common AI coding subscription in enterprise environments right now, which means a lot of teams already have access to it. The problem is that almost nobody gets trained on how to use it properly.

People open the Copilot chat panel in VS Code, type vague instructions, get mediocre output, and repeat the same cycle across every module and every file. No structure, no consistency, no compounding return on the tool.

That is a waste of a powerful subscription.

The problem is not the model

When a team says Copilot is underwhelming, the issue usually is not that the model is bad. The issue is that everyone is using it as an isolated assistant instead of part of a shared engineering workflow.

Each developer writes their own ad-hoc prompts. Each person explains the codebase differently. Each person rediscovers the same context that should already be documented somewhere. The result is exactly what you'd expect: inconsistent output, repetitive mistakes, and a lot of time wasted re-explaining the same things.

If you want better results, stop treating Copilot like a magic chat box and start treating it like a system that needs structure.

Prompts and skills should live in the repo

What actually worked for me was centralizing the AI instructions inside the codebase itself. I defined two things: prompts and skills.

Prompts are the what. They tell Copilot what task to perform.

Skills are the how. They describe the steps, patterns, constraints, and conventions Copilot should follow while doing that task.

Together, those two things give the model enough context to produce useful output instead of generic guesses.

The important part was not just writing them once. The important part was keeping them in the repository as a shared source of truth. Everyone on the team used the same instructions. That immediately removed a lot of wasted effort and inconsistency.

Instead of every developer starting from a blank chat box, they started from a documented workflow.
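As a concrete sketch of what this can look like: the file names and layout below are just one convention, not an official requirement, though VS Code does read repository-wide context from .github/copilot-instructions.md and supports reusable .prompt.md files. The prompt is the "what", the skill it points to is the "how":

```markdown
<!-- .github/prompts/add-endpoint.prompt.md — a hypothetical "what" -->
Add a new REST endpoint to the service layer.
Follow the conventions in .github/skills/service-layer.md.
Output only the changed files.

<!-- .github/skills/service-layer.md — a hypothetical "how" -->
- Controllers stay thin: validation and routing only.
- Business logic lives in *Service classes, one per domain entity.
- Every new endpoint gets a unit test beside the class it tests.
- Never add a new dependency without flagging it in the PR description.
```

Because both files are versioned, a bad convention gets fixed once in a pull request instead of drifting across a dozen private chat histories.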

A real example: Angular 8 to 16 migration

I used this approach during an Angular 8 to Angular 16 migration. It was not a narrow refactor. The migration touched TypeScript, templates, CSS, config files, dependencies, and Angular Material from version 8 to 16.

That kind of work is exactly where unstructured AI usage falls apart. If every engineer asks for changes in a slightly different way, you get slightly different output everywhere. On a migration, those small inconsistencies pile up fast.

So I formalized the workflow.

I wrote prompts for each stage of the migration. I wrote skills that explained how each type of change should be handled. Then I created a separate review prompt whose only job was to inspect the AI-generated changes and verify whether they were actually correct.

That last part matters. AI-generated code should not flow straight into the branch without a second pass. A review prompt gave me a consistent way to check the output instead of relying on whatever each person happened to notice.
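A review prompt can be as simple as a checklist the model is forced to walk through. This is a hypothetical sketch, not the exact prompt from that migration:

```markdown
<!-- .github/prompts/review-migration.prompt.md — hypothetical review prompt -->
Review the diff below. For each changed file, answer:
1. Does the change compile against Angular 16 APIs
   (no removed or renamed symbols still in use)?
2. Were deprecated Angular Material components replaced
   with their current equivalents?
3. Is any behavior changed beyond what the migration requires?
   If so, flag it explicitly.
Reply with PASS or FAIL per file, plus a one-line reason for each FAIL.
```

The point is not the exact wording; it is that every AI-generated change gets checked against the same criteria, regardless of who ran the prompt.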

This is just one example. Your team's workflow might look completely different. Maybe you're modernizing a legacy frontend, writing tests around an old service layer, or generating repetitive internal CRUD work. The specifics do not matter as much as the exercise itself.

Sit down as a team and ask a simple question: what does this team actually need AI for?

Once you answer that, the useful prompts become much easier to write.

What you should actually do

If your team already pays for Copilot, this is the practical starting point:

  1. Define the workflow first. Start with the problem, not the tool. Figure out where AI fits into your existing engineering process before you write a single prompt.
  2. Write prompts and skills and store them in the repo. Keep one shared source of truth. If AI is making changes to the codebase, the instructions behind those changes should be versioned too.
  3. Use /init early. /init generates a starter set of project instructions from your repository, which beats starting every chat from a blank box. The official VS Code guide on Copilot best practices recommends it, and it was a big win for me: I had something concrete to refine instead of rebuilding context in every conversation.
  4. Iterate per project. The workflow that works for a migration will not be the same one that works for new feature development. Treat prompts and skills like project assets, not universal templates.
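Pulled together, the repo-as-source-of-truth idea might look something like this. The layout is hypothetical, one way to organize it rather than a standard:

```markdown
.github/
  copilot-instructions.md        # project-wide context, seeded by /init
  prompts/
    add-endpoint.prompt.md       # task prompts: the "what"
    review-migration.prompt.md
  skills/
    service-layer.md             # conventions and steps: the "how"
    testing-conventions.md
```

Whatever structure you pick, the test is simple: a new teammate should be able to run the same prompt as everyone else and get output that follows the same conventions.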

The bottom line

The difference between teams that get value from Copilot and teams that do not is usually not the subscription tier. It is whether they put in the upfront work to define structured, reusable instructions.

If your team already has Copilot, do not spend the next month debating whether the tool is good. Spend one afternoon defining the workflow, writing the prompts, and documenting the skills in the repo.

That will tell you more about Copilot's actual value than another hundred vague chat messages ever will.