
@AbhayAyare AbhayAyare commented Jul 30, 2025

Stricter classification prompt for DevRel triage (fixes #93)

Closes #93

📝 Description

This pull request refines the DEVREL_TRIAGE_PROMPT in backend/app/classification/prompt.py to enforce stricter criteria for triggering the DevRel agent. The new prompt activates the agent only on direct DevRel mentions or clear, repository-related technical queries, which reduces false positives and unnecessary agent involvement and improves both accuracy and the user experience.

🔧 Changes Made

  • Rewrote the DEVREL_TRIAGE_PROMPT to:
    • Require explicit mentions/tags of DevRel AI (e.g., "@Devr.AI")
    • Only trigger on direct project setup, contribution, or technical support questions
    • Explicitly instruct to ignore general messages, greetings, or untagged user conversations
    • Updated and expanded prompt examples for clarity
  • Added clarifying comments and structure to the prompt for maintainability
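The diff itself is not reproduced in this thread, so as a hedged sketch only, a prompt along the lines described above might look like this (wording, field names, and the {message} placeholder are all assumptions, not the actual contents of backend/app/classification/prompt.py):

```python
# Illustrative sketch only: the real DEVREL_TRIAGE_PROMPT is not quoted in
# this PR thread, so the wording and JSON field names below are assumptions.
DEVREL_TRIAGE_PROMPT = """You are a triage classifier for a DevRel assistant.

Activate DevRel ONLY when:
- the message explicitly mentions or tags the bot (e.g. "@Devr.AI"), OR
- it is a direct question about project setup, contributing, or technical
  support for this repository.

Do NOT activate for greetings, general chatter, or conversations where the
bot is not tagged.

Respond ONLY with JSON:
{{
    "needs_devrel": true or false,
    "priority": "low" | "medium" | "high",
    "reason": "<one short sentence>"
}}

Message: {message}
"""

# Double braces ({{ and }}) survive str.format() as literal JSON braces,
# while {message} is substituted with the incoming user message.
filled = DEVREL_TRIAGE_PROMPT.format(message="@Devr.AI how do I run the backend?")
print(filled.splitlines()[-1])
```

Keeping the literal JSON braces escaped as double braces is what allows the template to be filled with str.format() without a KeyError.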

📷 Screenshots or Visual Changes (if applicable)

🤝 Collaboration

Collaborated with: @username (optional)

✅ Checklist

  • I have read the contributing guidelines.
  • I have added tests that prove my fix is effective or that my feature works.
  • I have added necessary documentation (if applicable).
  • Any dependent changes have been merged and published in downstream modules.

Summary by CodeRabbit

  • Documentation
    • Updated the criteria and examples in the prompt text for determining when DevRel assistance should be triggered, providing clearer guidance on when DevRel involvement is appropriate.
    • Clarified scenarios where DevRel should and should not be activated, with improved example messages and responses.


coderabbitai bot commented Jul 30, 2025

Walkthrough

The classification prompt for DevRel triage was rewritten to apply stricter, more explicit criteria for triggering DevRel involvement. The new prompt clarifies when to activate DevRel, updates response formatting, and revises example messages and expected outputs to align with the stricter rules. No changes were made to exported entities.

Changes

Cohort / File(s) Change Summary
Classification Prompt Update
backend/app/classification/prompt.py
Rewrote the DevRel triage prompt with stricter criteria, clarified activation rules, updated example messages and expected JSON responses. No changes to exported or public entities.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~7 minutes

Assessment against linked issues

Objective | Addressed | Explanation
Stricter classification prompt: only trigger DevRel for explicit mentions or direct project-related queries (#93) | ✅ |
Ignore general chatter, greetings, or unrelated conversations unless the bot is tagged (#93) | ✅ |
Update example messages and expected outputs to match new rules (#93) | ✅ |
Brief review and minor enhancements for clarity/conciseness of prompt suite (#93) | ❓ | Only the classification prompt was updated; unclear if other prompts were reviewed.

Poem

In the warren of prompts, a change hops in,
DevRel now leaps with discipline.
Only when called, or questions are clear,
The bot lends a paw, not always near.
No more rabbit holes of idle chat—
Precision is here, and I’m glad for that!
🐇✨



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a1159d7 and a6d736f.

📒 Files selected for processing (1)
  • backend/app/classification/prompt.py (1 hunks)
🔇 Additional comments (4)
backend/app/classification/prompt.py (4)

1-2: Clear and direct prompt introduction.

The opening effectively establishes the classification task with concise, actionable language.


8-15: Excellent implementation of stricter classification criteria.

The explicit positive and negative conditions effectively address the false positive problem from issue #93. The structure clearly delineates when DevRel should and shouldn't be triggered, which should significantly reduce unnecessary agent involvement while maintaining coverage for legitimate technical queries.


17-22: Well-structured JSON response format.

The response structure provides all necessary fields for classification decisions with clear data types and enumerated priority options. The "ONLY with JSON" instruction helps ensure consistent parsing.
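For context, a classification result in the reviewed shape could look like the following (field names and priority values are assumptions based on this review comment, since the full prompt text is not quoted in the thread):

```python
import json

# Hypothetical classification result matching the reviewed response shape;
# field names and the priority enumeration are illustrative assumptions.
raw = '{"needs_devrel": true, "priority": "high", "reason": "Explicit @Devr.AI tag with a setup question"}'
result = json.loads(raw)

# The enumerated fields make the decision easy to validate downstream.
assert isinstance(result["needs_devrel"], bool)
assert result["priority"] in {"low", "medium", "high"}
print(result["reason"])
```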


25-28: Comprehensive examples that demonstrate the classification logic.

The examples effectively cover both positive and negative cases, showing explicit bot mentions, technical issues, and non-development-related messages. They align well with the new criteria and provide clear guidance for the classification model.
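In the spirit of those examples, hypothetical message/label pairs (illustrative only, not copied from the actual diff) might be:

```python
# Hypothetical positive/negative cases mirroring the stricter criteria;
# these are illustrative and not the examples from the actual diff.
cases = [
    ("@Devr.AI how do I set up the backend locally?", True),                 # explicit bot tag
    ("I get an ImportError when running the test suite, any ideas?", True),  # direct technical query
    ("Good morning everyone!", False),                                       # greeting, no tag
    ("Anyone watch the match last night?", False),                           # off-topic chatter
]
for message, needs_devrel in cases:
    print(f"{needs_devrel}\t{message}")
```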



@smokeyScraper smokeyScraper left a comment


Could you please also take a pass over the rest of the prompts and optimize them as much as possible, with some testing of how the model outputs Markdown when asked to? It would be better if we enforced a Markdown-based approach for text responses.

Referenced snippet (backend/app/classification/prompt.py):
Respond ONLY with JSON:
{{
{
Contributor:
I'm not very sure how this will affect the model response, as the model can also return JSON wrapped in Markdown; I ran into this a lot while interacting with the system. Could you please look into this part a bit? A few sample interactions would be awesome.

Author:
Thank you for highlighting this; it's a really important point! I've noticed as well that the model sometimes returns JSON wrapped in Markdown code blocks, which can trip up our downstream processing when we expect plain JSON.

To make sure the new prompt works as intended, I'll check a variety of typical user inputs—including ones where the bot is mentioned directly, general greetings, technical questions, and even some off-topic messages. I'll pay close attention to whether the model outputs just raw JSON or still adds Markdown formatting. If I see the model consistently adding code blocks, we can tweak the prompt to explicitly ask for plain JSON only.

If you have any particular examples where this was an issue, let me know and I'll make sure to include them in the checks. Otherwise, I'll report back soon with what I find and suggest any further changes if needed!
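A common mitigation for the markdown-wrapped-JSON problem, sketched here as a hypothetical helper (not part of this PR), is to strip an optional code fence before parsing:

```python
import json
import re

def parse_model_json(text: str) -> dict:
    """Parse JSON from a model reply, tolerating Markdown code fences.

    Hypothetical helper, not part of this PR: it removes an optional
    fenced-code wrapper (with or without a "json" language tag) before
    handing the payload to json.loads.
    """
    text = text.strip()
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

# Both plain JSON and fenced JSON parse to the same dict.
plain = parse_model_json('{"needs_devrel": false}')
fenced = parse_model_json('```json\n{"needs_devrel": false}\n```')
print(plain == fenced)  # prints True
```

Pairing a guard like this with the "Respond ONLY with JSON" instruction makes downstream parsing robust even when the model occasionally ignores the formatting rule.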

@AbhayAyare AbhayAyare closed this by deleting the head repository Aug 4, 2025


Development

Successfully merging this pull request may close these issues.

FEATURE REQUEST: Enhance Classification Prompt for Stricter DevRel Query Handling

2 participants