Developer's Guide to Effective AI Prompting

This guide helps developers leverage AI tools effectively in their coding workflow. Whether you're using Cursor, GitHub Copilot, or other AI assistants, these strategies will help you get better results and integrate AI smoothly into your development process.

Understanding Context Windows

Why Context Matters

AI coding assistants have what's called a "context window" - the amount of text they can "see" and consider when generating responses. Think of it as the AI's working memory:

  • Most modern AI assistants can process thousands of tokens (a token is roughly 4 characters, or about three-quarters of an English word)
  • Everything you share and everything the AI responds with consumes this limited space
  • Once the context window fills up, parts of your conversation history may be lost

This is why providing relevant context upfront is crucial - the AI can only work with what it can "see" in its current context window.
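As a quick sanity check before pasting a large file, you can estimate its token count with the rough four-characters-per-token heuristic mentioned above. This is only an approximation (real tokenizers vary by model and language), but it is enough to judge whether a paste is likely to fit:

```typescript
// Rough heuristic: ~4 characters per token for English text with a
// GPT-style tokenizer. Real tokenizers vary, so treat this as an estimate.
const CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

// Check whether a prompt is likely to fit in a given context window,
// leaving headroom for the model's response.
function fitsContextWindow(
  text: string,
  windowTokens: number,
  responseHeadroom = 1024
): boolean {
  return estimateTokens(text) + responseHeadroom <= windowTokens;
}
```

Reserving headroom matters because the model's own response consumes the same window as your prompt.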

Optimizing for Context Windows

To get the most out of AI assistants:

  1. Prioritize relevant information: Focus on sharing the most important details first.
  2. Remove unnecessary content: Avoid pasting irrelevant code or documentation.
  3. Structure your requests: Use clear sections and formatting to make information easy to process.
  4. Reference external resources: For large codebases, consider sharing only the most relevant files.

For larger projects, create and reference a central documentation file that summarizes key information, rather than repeatedly explaining the same context.
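The structuring advice above can be sketched as a small helper that assembles a prompt from labeled sections in priority order, so the most important context appears first and empty sections never waste window space (the section names here are illustrative, not a required format):

```typescript
interface PromptSection {
  title: string;
  body: string;
}

// Assemble a prompt from labeled sections, in priority order, so the
// most important context appears first. Empty sections are dropped.
function buildPrompt(sections: PromptSection[]): string {
  return sections
    .filter((s) => s.body.trim().length > 0)
    .map((s) => `## ${s.title}\n${s.body.trim()}`)
    .join("\n\n");
}

const prompt = buildPrompt([
  { title: "Problem", body: "Login fails with a 401 after token refresh." },
  { title: "Relevant code", body: "refreshToken() in src/auth.ts" },
  { title: "Notes", body: "" }, // omitted from the output
]);
```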

Setting Up AI Tools

Configuring Cursor Rules

Cursor Rules allow you to provide consistent context to Cursor AI, making it more effective at understanding your codebase and providing relevant suggestions.

Creating Cursor Rules

  1. Open the Command Palette in Cursor:

    • Mac: Cmd + Shift + P
    • Windows/Linux: Ctrl + Shift + P
  2. Search for "Cursor Rules" and select the option to create or edit rules

  3. Add project-specific rules that help Cursor understand your project (for example, your tech stack, naming conventions, and preferred patterns)

  4. Save your rules file and Cursor will apply these rules to its AI suggestions
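A rules file is plain text, and the right entries depend entirely on your project. A hypothetical example for an OnchainKit app might look like:

```
# Example Cursor rules (adapt these to your own project)
- This is a Next.js + TypeScript project using OnchainKit for onchain UI components.
- Prefer functional React components and hooks; avoid class components.
- Use async/await rather than nested .then() chains.
- Handle token amounts as bigint base units; format only at the UI layer.
- Follow the conventions documented in instructions.md.
```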

Setting Up an OnchainKit Project

To create a new OnchainKit project:

npm create onchain@latest

After creating your project, prompt your AI assistant to generate comprehensive documentation for your new OnchainKit project.

Creating Project Documentation

A comprehensive instructions file helps AI tools understand your project better. This should be created early in your project and updated regularly.

Ready-to-Use Prompt for Creating Instructions.md:
Create a detailed instructions.md file for my project with the following sections:
 
1. Overview: Summarize the project goals, problem statements, and core functionality.
2. Tech Stack: List all technologies, libraries, frameworks with versions.
3. Project Structure: Document the file organization with explanations.
4. Coding Standards: Document style conventions, linting rules, and patterns.
5. User Stories: Key functionality from the user perspective.
6. APIs and Integrations: External services and how they connect.

Effective Prompting Strategies

Be Specific and Direct

Start with clear commands and be specific about what you want. AI tools respond best to clear, direct instructions.

Example: ❌ "Help me with my code"
✅ "Refactor this authentication function to use async/await instead of nested then() calls"

Provide Context for Complex Tasks

Ready-to-Use Prompt:
I'm working on an OnchainKit project using [frameworks/libraries]. I need your help with:
 
1. Problem: [describe specific issue]
2. Current approach: [explain what you've tried]
3. Constraints: [mention any technical limitations]
4. Expected outcome: [describe what success looks like]
 
Here's the relevant documentation @https://docs.base.org/builderkits/onchainkit/llms.txt
 
Here's the relevant code:
[paste your code]

Ask for Iterations

Start simple and refine through iterations rather than trying to get everything perfect in one go.

Ready-to-Use Prompt:
Let's approach this step by step:
1. First, implement a basic version of [feature] with minimal functionality.
2. Then, we'll review and identify areas for improvement.
3. Next, let's add error handling and edge cases.
4. Finally, we'll optimize for performance.
 
Please start with step 1 now.

Working with OnchainKit

Leveraging LLMs.txt for Documentation

The OnchainKit project provides optimized documentation in the form of LLMs.txt files. These files are specifically formatted to be consumed by AI models:

  1. Open the OnchainKit documentation
  2. Find the component you want to implement
  3. Copy the corresponding LLMs.txt URL
  4. Paste it into your prompt to provide context

Example LLMs.txt Usage:
I'm implementing a swap component with OnchainKit. Here's the relevant LLMs.txt:
 
@https://docs.base.org/builderkits/onchainkit/llms.txt
 
Based on this documentation, please show me how to implement a swap component that:
1. Swaps from Base USDC to Base ETH.
2. Handles connection states properly.
3. Includes error handling.
4. Follows best practices for user experience.

Component Integration Example

Ready-to-Use Prompt for Token Balance Display:
I need to implement a new feature in my project that:
 
1. Shows the connected wallet's balance of our {ERC20 token}.
2. Updates when the balance changes.
3. Handles loading and error states appropriately.
4. Follows our project's coding standards.
5. Updates instructions.md to reflect this new implementation.

*Update the prompt with a token of your choice.
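Displaying an ERC-20 balance usually means converting the raw bigint amount (in base units) into a human-readable string using the token's decimals. A minimal formatter, independent of any particular wallet library, might look like this sketch:

```typescript
// Convert a raw ERC-20 balance (in base units) to a decimal string.
// `decimals` comes from the token contract (e.g. 6 for USDC, 18 for most tokens).
function formatTokenBalance(
  raw: bigint,
  decimals: number,
  displayDecimals = 2
): string {
  const divisor = 10n ** BigInt(decimals);
  const whole = raw / divisor;
  const fraction = raw % divisor;
  // Pad the fractional part with leading zeros, then truncate for display.
  const fracStr = fraction
    .toString()
    .padStart(decimals, "0")
    .slice(0, displayDecimals);
  return displayDecimals > 0 ? `${whole}.${fracStr}` : whole.toString();
}
```

Using bigint avoids the precision loss you would get from converting 18-decimal amounts to a JavaScript number; truncating (rather than rounding) avoids ever overstating a balance.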

Debugging with AI

Effective Debugging Prompts

Ready-to-Use Prompt for Bug Analysis:
I'm encountering an issue with my code:
 
1. Expected behavior: [what should happen]
2. Actual behavior: [what's happening instead]
3. Error messages: [include any errors]
4. Relevant code: [paste the problematic code]
 
Please analyze this situation step by step and help me:
1. Identify potential causes of this issue
2. Suggest debugging steps to isolate the problem
3. Propose possible solutions

Ready-to-Use Prompt for Adding Debug Logs:
I need to debug the following function. Please add comprehensive logging statements that will help me trace:
1. Input values and their types
2. Function execution flow
3. Intermediate state changes
4. Output values or errors
 
Here's my code:
[paste your code]
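One generic way to get the tracing described above is a wrapper that logs a function's inputs, result, and any thrown error. This is a sketch, not the output the AI will produce; the logger parameter lets you swap `console.log` for your project's logging:

```typescript
// Wrap a synchronous function so every call logs its inputs, its result,
// and any error it throws -- useful for tracing during debugging.
function withDebugLogging<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R,
  log: (msg: string) => void = console.log
): (...args: A) => R {
  return (...args: A): R => {
    log(`[${name}] input: ${JSON.stringify(args)}`);
    try {
      const result = fn(...args);
      log(`[${name}] output: ${JSON.stringify(result)}`);
      return result;
    } catch (err) {
      log(`[${name}] error: ${String(err)}`);
      throw err; // re-throw so callers still see the failure
    }
  };
}

const add = withDebugLogging("add", (a: number, b: number) => a + b);
```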

When You're Stuck

If you're uncertain how to proceed:

Ready-to-Use Clarification Prompt:
I'm unsure how to proceed with [specific task]. Here's what I know:
1. [context about the problem]
2. [what you've tried]
3. [specific areas where you need guidance]
 
What additional information would help you provide better assistance?

Advanced Prompting Techniques

Modern AI assistants have capabilities that you can leverage with these advanced techniques:

  1. Step-by-step reasoning: Ask the AI to work through problems systematically
Please analyze this code step by step and identify potential issues.
  2. Format specification: Request specific formats for clarity
Please structure your response as a tutorial with code examples and explanations.
  3. Length guidance: Indicate whether you want brief or detailed responses
Please provide a concise explanation in 2-3 paragraphs.
  4. Clarify ambiguities: Help resolve unclear points when you receive multiple options
I notice you suggested two approaches. To clarify, I'd prefer to use the first approach with TypeScript.

Best Practices Summary

  1. Understand context limitations: Recognize that AI tools have finite context windows and prioritize information accordingly
  2. Provide relevant context: Share code snippets, error messages, and project details that matter for your specific question
  3. Be specific in requests: Clear, direct instructions yield better results than vague questions
  4. Break complex tasks into steps: Iterative approaches often work better for complex problems
  5. Request explanations: Ask the AI to explain generated code or concepts you don't understand
  6. Use formatting for clarity: Structure your prompts with clear sections and formatting
  7. Reference documentation: When working with specific libraries like OnchainKit, share relevant documentation
  8. Test and validate: Always review and test AI-generated code before implementing
  9. Build on previous context: Refer to earlier parts of your conversation when iterating
  10. Provide feedback: Let the AI know what worked and what didn't to improve future responses