Transform vague prompts into clear, actionable instructions - promptly and thoughtfully
Promptly is an advanced Language-of-Thoughts (LoT)-based prompt enhancement system that transforms vague, ambiguous prompts into clear, specific, and actionable instructions. Using a multi-agent architecture powered by CrewAI, Promptly analyzes, expands, and refines your prompts to produce significantly better AI responses.
In the era of AI-powered tools, the quality of your output is directly tied to the quality of your input. However, most users struggle with:
- Vague language: "Write a good email" - what defines "good"?
- Missing context: "Analyze this data" - which aspects? what format?
- Ambiguous references: "Make it better" - better how? by what criteria?
- Irrelevant noise: Including unnecessary information that confuses AI models
These issues lead to suboptimal AI responses, forcing multiple iterations and wasting time.
Promptly applies Language-of-Thoughts (LoT) methodology - a systematic approach to prompt analysis and enhancement that:
- Identifies ambiguities in your original prompt
- Expands vague concepts with specific, measurable criteria
- Removes irrelevant noise that distracts from your core intent
- Synthesizes everything into an optimized, clear prompt
Result: Get better AI responses on the first try, save time, and achieve your goals more effectively.
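From the user's point of view, the whole flow can be as simple as a single call. The snippet below is only a usage sketch: the `PromptEnhancer` class, its `enhance` method, and the result fields are illustrative assumptions, not Promptly's published API.

```python
# Usage sketch only -- class, method, and field names are illustrative
# assumptions, not Promptly's actual public API.
from promptly import PromptEnhancer  # assumed entry point

enhancer = PromptEnhancer(mode="standard")  # assumed modes: "fast" | "standard" | "comprehensive"
result = enhancer.enhance("Write a good email to my boss about the project")

print(result.enhanced_prompt)  # the clarified, structured instruction
print(result.issues_found)     # e.g. vague terms flagged during analysis
```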
Our approach is based on cognitive science research about how humans process and clarify thoughts. We've adapted these principles for AI prompt optimization across four phases (a minimal code sketch follows the phase descriptions):
OBSERVE - Systematic Analysis
- Extract entities, constraints, and intent
- Detect L-implicit issues (local ambiguities like "good", "quickly")
- Detect Q-implicit issues (contextual noise, irrelevant information)
EXPAND - Intelligent Clarification
- Replace vague terms with specific, measurable criteria
- Add missing context using domain knowledge
- Enhance completeness while preserving original intent
ECHO - Focused Refinement
- Remove irrelevant noise and distractions
- Prioritize core information and requirements
- Maintain clarity and actionability
SYNTHESIZE - Optimal Reconstruction
- Combine processed information into the perfect prompt
- Structure for maximum AI model comprehension
- Ensure clarity, specificity, and actionability
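Taken together, the four phases form a simple linear pipeline. The sketch below illustrates that flow with a toy vocabulary of vague terms; it is a conceptual illustration of the OBSERVE → EXPAND → ECHO → SYNTHESIZE ordering, not the project's actual processor.

```python
# Conceptual sketch of the four-phase LoT pipeline.
# The vocabulary, data shapes, and string handling are illustrative only.
from dataclasses import dataclass, field

VAGUE_TERMS = {
    "good": "professional, concise, and action-oriented",
    "quickly": "within one business day",
}

@dataclass
class PromptState:
    text: str
    issues: list = field(default_factory=list)      # ambiguities found in OBSERVE
    expansions: dict = field(default_factory=dict)  # vague term -> concrete criteria

def observe(state):
    """OBSERVE: flag locally ambiguous terms in the prompt."""
    state.issues = [term for term in VAGUE_TERMS if term in state.text.lower()]
    return state

def expand(state):
    """EXPAND: map each flagged term to specific, measurable criteria."""
    state.expansions = {term: VAGUE_TERMS[term] for term in state.issues}
    return state

def echo(state):
    """ECHO: strip filler that distracts from the core request."""
    state.text = state.text.replace("basically, ", "").strip()
    return state

def synthesize(state):
    """SYNTHESIZE: assemble the final, structured prompt."""
    criteria = "; ".join(f'"{t}" means {c}' for t, c in state.expansions.items())
    return f"{state.text}\nClarifications: {criteria}" if criteria else state.text

def enhance(prompt):
    state = PromptState(text=prompt)
    for phase in (observe, expand, echo):
        state = phase(state)
    return synthesize(state)

print(enhance("basically, write a good email to my boss"))
```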
Promptly employs specialized AI agents, each an expert in one aspect of prompt enhancement (a wiring sketch follows the list):
- Observer Agent: Analyzes prompt structure and identifies issues
- Expander Agent: Clarifies ambiguities using knowledge base
- Echo Agent: Removes noise and focuses on essentials
- Synthesizer Agent: Creates the final optimized prompt
- Manager Agent: Orchestrates the entire process intelligently
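As a rough idea of how two of these agents might be wired together with CrewAI, consider the sketch below. The roles, goals, and task descriptions are illustrative assumptions rather than Promptly's actual agent definitions, and an LLM is assumed to be configured for CrewAI via environment variables (e.g. an API key).

```python
# Illustrative CrewAI wiring -- agent roles and task text are assumptions,
# not Promptly's actual definitions. Assumes an LLM is configured via env vars.
from crewai import Agent, Task, Crew, Process

observer = Agent(
    role="Prompt Observer",
    goal="Identify ambiguities, missing context, and noise in the user's prompt",
    backstory="A meticulous analyst trained to spot vague or underspecified language.",
)
synthesizer = Agent(
    role="Prompt Synthesizer",
    goal="Combine the analysis into one clear, specific, actionable prompt",
    backstory="An editor who turns scattered notes into precise instructions.",
)

analyze = Task(
    description="List every ambiguity and irrelevant detail in: {prompt}",
    expected_output="A structured list of issues found in the prompt",
    agent=observer,
)
rewrite = Task(
    description="Rewrite the prompt so each listed issue is resolved",
    expected_output="The final enhanced prompt",
    agent=synthesizer,
)

crew = Crew(
    agents=[observer, synthesizer],
    tasks=[analyze, rewrite],
    process=Process.sequential,
)
result = crew.kickoff(inputs={"prompt": "Write a good email to my boss"})
print(result)
```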
- Advanced LoT Processing: Systematic prompt analysis and enhancement
- Multi-Agent System: Specialized AI agents for different enhancement tasks
- Multiple Processing Modes: Fast (5s), Standard (15s), Comprehensive (45s)
- Real-time Enhancement: WebSocket support for live prompt optimization
- Batch Processing: Handle multiple prompts efficiently
- Two-Layer Evaluation: Intrinsic (prompt quality) + Extrinsic (task performance)
- LLM-as-Judge: Automated quality assessment
- Comprehensive Metrics: Clarity, completeness, relevance scoring
- Benchmark Testing: Validated against standardized test cases
- FastAPI Backend: High-performance async API (see the example request after this list)
- Authentication System: JWT tokens, API keys, role-based access
- MCP Server Support: Model Context Protocol for AI tool integration
- Docker Deployment: Complete containerized setup
- Monitoring & Analytics: Real-time performance tracking
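For example, a request to the enhancement endpoint might look like the following. The URL path, payload fields, header, and mode names are assumptions based on the feature list above, not a documented contract.

```python
# Illustrative REST call -- endpoint path, payload fields, and header are
# assumptions, not Promptly's documented API contract.
import requests

response = requests.post(
    "http://localhost:8000/api/v1/enhance",  # assumed endpoint
    headers={"Authorization": "Bearer <your-jwt-token>"},
    json={
        "prompt": "Write a good email to my boss about the project",
        "mode": "standard",  # assumed values: "fast" | "standard" | "comprehensive"
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())  # expected to include the enhanced prompt and metadata
```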
- Save Time: Get better AI responses on the first try
- Improve Quality: Achieve more precise, actionable outputs
- Learn Better Prompting: Understand what makes prompts effective
- Reduce Frustration: Eliminate the trial-and-error cycle
- Increase Productivity: Teams spend less time refining prompts
- Standardize Quality: Consistent prompt quality across organization
- Reduce AI Costs: Fewer API calls needed to get desired results
- Scale AI Adoption: Make AI tools more accessible to non-experts
- API Integration: Easy integration into existing workflows
- MCP Protocol: Standard interface for AI tool ecosystems
- Customizable: Extend with domain-specific knowledge
- Open Source: Transparent, auditable, and improvable
Before (Vague):
"Write a good email to my boss about the project"
After Promptly Enhancement:
"Write a professional email to your supervisor with the following structure:
Subject: Clear, specific subject line about project status update
Content Structure:
- Formal greeting with supervisor's name
- Purpose: Clearly state you're providing a project update
- Current Progress: Specific milestones achieved with dates
- Key Accomplishments: 2-3 concrete achievements this week
- Challenges (if any): Brief description with proposed solutions
- Next Steps: Clear action items with timelines
- Request: Any support or decisions needed from supervisor
Tone: Professional, confident, solution-oriented
Length: 150-250 words for optimal readability
Format: Business email with proper greeting and closing"
Result: The enhanced prompt produces significantly better, more structured, and actionable email content.
- LoT methodology research and theoretical design
- Multi-agent system architecture planning
- Comprehensive implementation documentation
- Core LoT processor implementation
- Basic multi-agent system with CrewAI
- Fundamental prompt enhancement pipeline
- Initial evaluation framework
- Complete LoT processor with NLP components
- Observer, Expander, Echo, Synthesizer agents
- Sequential workflow implementation
- Basic CLI interface
- Unit testing framework
- Performance benchmarking
- FastAPI backend implementation
- Authentication and user management
- REST API endpoints for prompt enhancement
- Real-time WebSocket enhancement
- MCP server integration
- Docker deployment setup
- Domain-specific knowledge bases
- Custom agent training capabilities
- Advanced analytics dashboard
- Batch processing optimization
- Team collaboration features
- API rate limiting and optimization
- Enterprise features and SLA
- Third-party integrations (Slack, Discord, etc.)
- Mobile applications
- Community marketplace for custom agents
- Advanced AI model support
- Multi-language support
- Voice-to-prompt enhancement
- Industry-specific prompt templates
- AI prompt generation from goals
- Collaborative prompt engineering platform
- Advanced reasoning and context understanding
```
┌──────────────────┐    ┌──────────────────┐    ┌──────────────────┐
│  Observer Agent  │    │  Expander Agent  │    │    Echo Agent    │
│    (OBSERVE)     │───▶│     (EXPAND)     │───▶│      (ECHO)      │
│  Analyze prompt  │    │  Clarify ambig.  │    │   Remove noise   │
└──────────────────┘    └──────────────────┘    └──────────────────┘
                                                         │
                                                         ▼
                                                ┌──────────────────┐
                                                │Synthesizer Agent │
                                                │   (SYNTHESIZE)   │
                                                │   Create final   │
                                                │ enhanced prompt  │
                                                └──────────────────┘
```
- Backend: FastAPI, Python 3.11+
- AI Framework: CrewAI for multi-agent orchestration
- NLP: spaCy, Sentence Transformers
- Database: PostgreSQL, Redis
- Deployment: Docker, Nginx
- Monitoring: Prometheus, Grafana
We welcome contributions from the community! Whether you're interested in:
- Core Algorithm Improvements: Enhance LoT methodology
- New Agent Types: Develop specialized enhancement agents
- Integration Support: Add new platform integrations
- Documentation: Improve guides and examples
- Testing: Expand test coverage and quality assurance
Please see our Contributing Guide for details on how to get started.
This project is licensed under the MIT License - see the LICENSE file for details.
- Language-of-Thoughts Research: Cognitive science foundations
- CrewAI: Multi-agent framework
- FastAPI: High-performance API framework
- spaCy: Advanced NLP processing
- Open Source Community: For inspiration and collaboration
- Website: promptly.ai (coming soon)
- Email: hello@promptly.ai
- Discord: Join our community
- Twitter: @PromptlyAI
- Documentation: docs.promptly.ai
Promptly includes comprehensive testing infrastructure:
```bash
# Run all tests
python run_tests.py

# Run specific test categories
python run_tests.py unit
python run_tests.py integration
python run_tests.py manual

# Run quick verification
python run_tests.py quick

# Generate coverage report
python generate_coverage.py
```
Current Coverage: 22% (38 unit tests passing)
| Component | Coverage | Status |
|---|---|---|
| Core LoT Processor | 99% | Excellent |
| Core Models | 98% | Excellent |
| Configuration | 90% | Good |
| Core Utils | 71% | |
| Agent System | 0% | Needs tests |
| API Layer | 0% | Needs tests |
| LLM Manager | 0% | Needs tests |
| Workflows | 0% | Needs tests |
| MCP Server | 0% | Needs tests |
Note: While overall coverage is 22%, the core LoT processing logic has 99% coverage. The low overall coverage is due to untested API layer and integration components, which are verified through manual testing.
Promptly - Because every great AI response starts with a great prompt
Transform your vague ideas into precise instructions. Get better AI results. Save time. Achieve more.