Introduction

As AI-powered development tools like GitHub Copilot and Claude Code become integral to our daily workflows, I’ve found that the quality of our interactions with these tools directly impacts our productivity. Rather than starting from scratch with every project or relying on generic prompts, I’ve been building a structured prompt library to help configure AI assistants for specific development scenarios. Today, I’m excited to share this prompt library repository and the thinking behind it.

Why Build a Prompt Library?

Over the past year of working with AI coding assistants, I’ve noticed a pattern: the most productive sessions happen when the AI has clear context about what I’m building, the patterns I’m following, and the standards I expect. Rather than repeatedly explaining these preferences, a well-structured prompt library provides:

  1. Consistent Context Across Projects: Define once, reuse everywhere
  2. Scenario-Specific Guidance: Different projects need different approaches
  3. Team Alignment: Share prompts to ensure everyone follows the same patterns
  4. Continuous Improvement: Refine prompts based on real-world results
  5. Knowledge Preservation: Capture best practices as they emerge

Structure and Organization

The library is organized around development scenarios rather than tools, making it easier to find relevant prompts regardless of which AI assistant you’re using:

prompt-library/
├── scenarios/         # Scenario-specific resources
├── shared/            # Cross-scenario resources
└── examples/          # Real-world usage examples
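
Each scenario folder follows the same basic layout, with a subfolder per AI assistant. As a rough sketch (the scenario name and exact files below are illustrative rather than a verbatim listing of the repository):

scenarios/dotnet-clean-architecture/
├── claude-code/
│   ├── instructions.md    # Context to copy into your Claude Code configuration
│   └── commands/          # Slash commands for common tasks
├── github-copilot/
│   └── instructions.md    # Content for .github/copilot-instructions.md
└── README.md              # When to use the scenario and how to adapt it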

Current Scenarios

The library currently includes four primary scenarios that cover most of my development work:

1. .NET Clean Architecture

Prompts and configurations for building enterprise .NET applications following Clean Architecture, Domain-Driven Design, and SOLID principles. This scenario is particularly useful when:

  • Building enterprise applications with complex business logic
  • Implementing DDD aggregates and entities
  • Following Clean Architecture layering
  • Writing comprehensive unit tests with TDD
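
To give a flavor of what a scenario's guidance looks like, here is a short, hypothetical excerpt in the spirit of the .NET Clean Architecture instructions (the actual file in the repository is longer and worded differently):

You are assisting on an enterprise .NET solution that follows Clean Architecture.
- Keep business logic in the Domain layer, free of framework dependencies
- Model invariants inside DDD aggregates and entities rather than in services
- Have outer layers depend on abstractions defined by inner layers
- Write unit tests first (TDD) for any new behavior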

2. Python Data Science

Resources tailored for data analysis, machine learning, and scientific computing projects. Use this scenario when:

  • Working with pandas, numpy, and scikit-learn
  • Building ML models and pipelines
  • Performing data exploration and visualization
  • Developing in Jupyter notebooks

3. TypeScript Frontend

Prompts and resources for modern frontend development with JavaScript/TypeScript frameworks. This scenario helps with:

  • Building React, Vue, or Angular applications
  • Working with TypeScript type systems
  • Component development and testing
  • State management and API integration

4. DevOps Infrastructure

Prompts focused on infrastructure as code, CI/CD, and deployment automation:

  • Writing Terraform, CloudFormation, or other IaC
  • Setting up CI/CD pipelines
  • Container and Kubernetes configuration
  • Deployment and monitoring scripts

Shared Resources

The shared/ directory contains general-purpose resources that apply across scenarios:

  • prompts/: Common tasks like code review, debugging, and documentation
  • templates/: Reusable prompt templates for standardization
  • snippets/: Frequently used prompt fragments
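
As an illustration of the standardization the templates/ folder aims for, a minimal refactoring template (hypothetical, shown only to convey the level of structure) might look like this:

Refactor the following code:
- Goal: [e.g., improve readability, extract a reusable function]
- Constraints: [e.g., keep the public API unchanged, no new dependencies]
- Tests: [existing coverage, or request new tests to lock in behavior]

[Paste your code here]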

Quick Start: Common Prompts

For those just getting started with prompt engineering, I’ve included several ready-to-use prompts for everyday development tasks:

Code Review Prompt

Review the following code for:
- Code quality and readability
- Best practices and design patterns
- Potential bugs or edge cases
- Performance considerations
- Security vulnerabilities
- Test coverage gaps

[Paste your code here]

Debugging Prompt

I'm encountering the following issue:

**Problem:** [Describe the bug/error]
**Expected behavior:** [What should happen]
**Actual behavior:** [What's actually happening]
**Error messages:** [Any error output]

**Code:**
[Paste relevant code]

**Context:**
[Environment, dependencies, recent changes]

Please help me:
1. Identify the root cause
2. Suggest a fix
3. Explain why this is happening
4. Recommend how to prevent similar issues

These templates provide structure while remaining flexible enough to adapt to specific situations.

Tool-Specific Configurations

The library includes configurations for both Claude Code and GitHub Copilot:

For Claude Code

Copy instructions from scenarios/{scenario}/claude-code/instructions.md to your Claude Code configuration, or use slash commands from the commands/ folder for quick access to specific functionality.
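
In a typical project this ends up looking something like the layout below, assuming the usual Claude Code conventions of a CLAUDE.md file at the project root and project-level slash commands under .claude/commands/ (the command file name is just an example):

your-project/
├── CLAUDE.md              # Scenario instructions copied from the library
└── .claude/
    └── commands/
        └── review.md      # A slash command copied from the scenario's commands/ folder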

For GitHub Copilot

Copy content from scenarios/{scenario}/github-copilot/instructions.md to .github/copilot-instructions.md in your project root. This provides project-specific context to Copilot across your entire team.
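
Because the file is committed to the repository, every teammate's Copilot sees the same context. A stripped-down, hypothetical example of what the copied file might contain for the TypeScript Frontend scenario:

This repository is a TypeScript React application.
- Use functional components and hooks
- Type props and API responses explicitly; avoid the any type
- Co-locate component tests with the components they cover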

Real-World Impact

Since building this library, I’ve noticed several improvements in my AI-assisted development workflow:

Faster Onboarding: When starting a new project, I can quickly configure my AI assistant with the appropriate scenario, eliminating the need to repeatedly explain architectural patterns or coding standards.

Better Code Quality: Scenario-specific prompts that emphasize best practices have led to more consistent, higher-quality code suggestions that align with established patterns.

Reduced Context Switching: Having predefined prompts for common tasks (code review, debugging, documentation) means less time crafting the perfect prompt and more time actually developing.

Team Consistency: When working with colleagues, sharing scenario configurations ensures everyone’s AI assistant provides suggestions that align with team standards.

The Broader Ecosystem

This library stands on the shoulders of some excellent community resources that I've been inspired by and learned from. They provide broader coverage and different perspectives that complement the scenario-focused approach in my library.

Looking Forward

This prompt library is a living project that evolves with my development practices and the AI tools themselves. I’m planning to add:

  • Additional scenarios (mobile development, embedded systems, etc.)
  • More advanced agent definitions and workflows
  • Team collaboration patterns and examples
  • Integration guides for additional AI tools

Get Started

The prompt-library repository is available on GitHub for anyone to use and adapt. Whether you’re working with Claude Code, GitHub Copilot, or other AI development tools, I hope you’ll find value in the structured approach to prompt engineering.

I encourage you to:

  1. Explore the repository and try the scenarios that match your work
  2. Adapt the prompts to your specific needs and team standards
  3. Share feedback or contribute improvements

Conclusion

As AI-powered development tools continue to mature, the ability to effectively communicate with these assistants becomes increasingly important. A well-organized prompt library isn’t just about saving time—it’s about establishing a systematic approach to AI collaboration that improves over time.

By treating prompt engineering as a first-class development practice, with version control, documentation, and continuous refinement, we can unlock the full potential of AI-assisted development while maintaining the quality and consistency our projects demand.

What prompts or scenarios would you add to a library like this? I’d love to hear about the patterns and practices that work best in your development workflow.