Powered by Sinun AI™ Architecture
Stackpact combines cutting-edge AI technologies to deliver accurate assessments and actionable recommendations.
Custom RAG Architecture
Our platform utilizes Retrieval-Augmented Generation (RAG), an AI technique that combines real-time information retrieval with large language model capabilities. This allows our system to access and synthesize the latest information on AI tools, techniques, and best practices when evaluating your team's skills.
Technical Detail
RAG enables context-aware responses by retrieving relevant knowledge from our curated database of AI development practices, then augmenting the LLM's response with this specific, up-to-date information.
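To make the retrieve-then-augment flow concrete, here is a minimal sketch in TypeScript. The knowledge entries, the keyword-overlap retrieval, and the helper names are illustrative stand-ins, not our production code; real RAG systems typically use embeddings and a vector index for the retrieval step.

```typescript
// Illustrative sketch of a retrieve-then-augment step (not production code).

interface KnowledgeEntry {
  title: string;   // e.g. a tool name or best-practice topic
  content: string; // the curated, up-to-date description
}

// Stand-in for the curated database of AI development practices.
const knowledgeBase: KnowledgeEntry[] = [
  { title: "GitHub Copilot", content: "In-editor AI code completion and chat." },
  { title: "Prompt iteration", content: "Refine prompts with added context and constraints." },
];

// Naive retrieval: score entries by keyword overlap with the user's message.
function retrieve(query: string, topK: number): KnowledgeEntry[] {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return [...knowledgeBase]
    .map(entry => ({
      entry,
      score: entry.content.toLowerCase().split(/\W+/).filter(w => words.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(({ entry }) => entry);
}

// Augmentation: fold the retrieved knowledge into the system prompt before the LLM call.
function buildAugmentedSystemPrompt(userMessage: string): string {
  const context = retrieve(userMessage, 3)
    .map(e => `- ${e.title}: ${e.content}`)
    .join("\n");
  return `You assess developers' AI skills. Use this up-to-date context when relevant:\n${context}`;
}
```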
State-of-the-Art LLM Integration
Stackpact is powered by the latest high-performance Large Language Models, including Claude by Anthropic. These models are specifically chosen for their superior reasoning capabilities, nuanced understanding of technical concepts, and ability to provide thoughtful, contextual feedback.
Why It Matters
Modern LLMs can understand the subtle differences between someone who uses AI as a crutch versus someone who leverages it strategically, enabling more accurate skill assessments.
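For readers who want a concrete picture of a single assessment turn, here is a simplified sketch using the public @anthropic-ai/sdk package. The model id, prompt, and token limit are example values, not our production configuration.

```typescript
// Illustrative call to the Anthropic Messages API via the public @anthropic-ai/sdk.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function assessmentTurn(systemPrompt: string, userMessage: string): Promise<string> {
  const response = await client.messages.create({
    model: "claude-3-5-sonnet-latest", // illustrative model id
    max_tokens: 1024,
    system: systemPrompt,              // the engineered assessment prompt
    messages: [{ role: "user", content: userMessage }],
  });

  // Return the text of the first content block of the reply.
  const first = response.content[0];
  return first && first.type === "text" ? first.text : "";
}
```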
Engineered System Prompts
Our assessment engine uses meticulously crafted system prompts developed through extensive research and iteration. These prompts guide the AI to conduct structured, consistent assessments while maintaining a natural, conversational experience.
The Difference
Unlike generic AI chatbots, our prompts are specifically designed for skills assessment, ensuring every conversation follows a validated methodology while adapting to individual responses.
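As a rough illustration of what a structured assessment prompt looks like in shape (heavily abridged; the wording below is an example, not our production prompt):

```typescript
// Simplified example of a structured assessment system prompt.
const ASSESSMENT_SYSTEM_PROMPT = `
You are conducting an AI skills assessment as a friendly, conversational interviewer.

Methodology:
1. Cover all four dimensions: Tool Awareness, Daily Integration, Prompt Skills, Critical Thinking.
2. Ask one question at a time and adapt follow-ups to the previous answer.
3. Keep the tone natural; never read questions like a test or reveal scoring criteria.
4. After the final answer, summarize the evidence for each dimension before scoring.
`.trim();
```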
How It Works Together
1. User Interaction: Team member engages with the assessment chatbot
2. RAG Knowledge Retrieval: System retrieves relevant AI tools, trends, and best practices
3. LLM Processing: Advanced AI analyzes responses using our custom system prompts
4. Score & Recommendations: Personalized AI Nativeness Score with actionable next steps
All processing happens in real-time, with your data encrypted and secure.
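Tying the steps together, one conversational turn can be sketched as below. It reuses the illustrative helpers from the earlier sketches (buildAugmentedSystemPrompt and assessmentTurn) and is a simplification, not our actual pipeline.

```typescript
// Illustrative end-to-end flow for one assessment turn.
async function handleUserMessage(userMessage: string): Promise<string> {
  // Step 1 (User Interaction): the team member's chat message arrives from the widget.
  // Step 2 (RAG Knowledge Retrieval): relevant tools, trends, and practices are folded
  // into the system prompt.
  const systemPrompt = buildAugmentedSystemPrompt(userMessage);

  // Step 3 (LLM Processing): the model analyzes the response under the engineered prompt.
  const reply = await assessmentTurn(systemPrompt, userMessage);

  // Step 4 (Score & Recommendations): once the conversation completes, the transcript
  // is scored into a personalized AI Nativeness Score (see below).
  return reply;
}
```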
The AI Nativeness Score Explained
Our comprehensive 100-point scoring system evaluates four key dimensions of AI proficiency in software development.
Tool Awareness
Knowledge of available AI coding tools, their capabilities, and appropriate use cases. From GitHub Copilot to specialized AI IDEs.
Evaluated through: Tool familiarity questions, knowledge of recent releases
Daily Integration
How deeply AI tools are embedded into daily workflows. Measures practical adoption, not just awareness.
Evaluated through: Workflow descriptions, frequency of use, integration depth
Prompt Skills
Ability to communicate effectively with AI systems. Includes prompt engineering, context-setting, and iterative refinement.
Evaluated through: Practical prompt challenges, articulation quality
Critical Thinking
Ability to evaluate, verify, and improve AI-generated output. Understanding limitations and knowing when not to use AI.
Evaluated through: Quality assessment approaches, error handling strategies
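Putting the dimensions together, the 100-point score can be thought of as the sum of the four dimension scores. The even 25-point split per dimension is inferred from the "scored 12/25" example later on this page; the sketch below is illustrative.

```typescript
// Illustrative composition of the 100-point AI Nativeness Score,
// assuming 25 points per dimension (as in the "12/25" example below).

interface DimensionScores {
  toolAwareness: number;     // 0-25
  dailyIntegration: number;  // 0-25
  promptSkills: number;      // 0-25
  criticalThinking: number;  // 0-25
}

function aiNativenessScore(scores: DimensionScores): number {
  const clamp = (n: number) => Math.min(25, Math.max(0, n));
  return (
    clamp(scores.toolAwareness) +
    clamp(scores.dailyIntegration) +
    clamp(scores.promptSkills) +
    clamp(scores.criticalThinking)
  );
}

// Example: 18 + 20 + 12 + 16 = 66 out of 100.
// aiNativenessScore({ toolAwareness: 18, dailyIntegration: 20, promptSkills: 12, criticalThinking: 16 }) === 66
```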
Score Interpretation Guide
- Beginner: Starting the AI journey
- Developing: Building foundational skills
- Proficient: Effective AI user
- AI Champion: Leading AI adoption
Personalized Learning Paths
Every recommendation is tailored to your specific skill gaps, tech stack, and learning goals.
How We Generate Recommendations
Gap Analysis
Our AI identifies specific areas where your scores indicate room for improvement, cross-referencing with your role and responsibilities.
Tech Stack Context
Recommendations are filtered through your company's technology stack, ensuring suggested tools and practices are immediately applicable.
Current Trends Integration
Our RAG system ensures recommendations include the latest AI tools and techniques relevant to your development workflow.
Actionable Steps
Every recommendation includes concrete next steps you can take immediately, not just theoretical advice.
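In pseudocode form, the gap analysis and tech-stack filtering described above might look like the sketch below. The types, threshold, and catalog are hypothetical placeholders (it reuses the DimensionScores type from the scoring sketch), not our actual recommendation engine.

```typescript
// Illustrative recommendation pipeline: find weak dimensions, then keep only
// suggestions relevant to the team's tech stack.

interface Recommendation {
  dimension: keyof DimensionScores; // see the scoring sketch above
  stack: string[];                  // technologies the suggestion applies to
  action: string;                   // the concrete next step
}

function recommend(
  scores: DimensionScores,
  techStack: string[],
  catalog: Recommendation[],
): Recommendation[] {
  // Gap analysis: treat dimensions under an illustrative 15/25 threshold as gaps.
  const gaps = (Object.entries(scores) as [keyof DimensionScores, number][])
    .filter(([, value]) => value < 15)
    .map(([dimension]) => dimension);

  // Tech stack context: only surface actions that match tools the team already uses.
  return catalog.filter(
    rec => gaps.includes(rec.dimension) && rec.stack.some(t => techStack.includes(t)),
  );
}
```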
Skill Gap Identified
Prompt Engineering (scored 12/25)
Recommended Action
Practice structured prompting with the RISE framework (Role, Instructions, Steps, End goal) for your React development tasks.
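For example, a RISE-structured prompt for a typical React task might look like the sketch below; the wording is illustrative only.

```typescript
// Example RISE-structured prompt (Role, Instructions, Steps, End goal).
const risePrompt = `
Role: You are a senior React developer reviewing my component.
Instructions: Refactor the UserProfile component below to use hooks instead of a class.
Steps: 1) Explain the changes you plan to make. 2) Show the refactored component. 3) List any behavior differences.
End goal: A functional component with identical behavior and improved readability.
`;
```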
Frequently Asked Questions
What is RAG and why does Stackpact use it?
RAG (Retrieval-Augmented Generation) is an AI architecture that enhances large language models by giving them access to external knowledge bases. Instead of relying solely on training data, RAG systems can retrieve and reference up-to-date, specific information.
Stackpact uses RAG to ensure our assessments and recommendations reflect the latest AI tools, techniques, and best practices in software development. This means when we mention a tool like "Claude Code" or "Cursor," we're drawing from current, accurate information rather than potentially outdated training data.
How accurate is the AI Nativeness Score?
The score is designed to be directionally accurate and actionable rather than a precise measurement. Our assessment methodology was developed by analyzing patterns from thousands of developer interactions with AI tools.
The score is most valuable for: (1) tracking individual progress over time, (2) identifying specific skill gaps, and (3) comparing relative strengths across the four dimensions. We recommend using it as a guide for development, not as an absolute measure of ability.
Which LLM does Stackpact use?
Stackpact primarily uses Claude by Anthropic, currently one of the highest-performing models for nuanced conversation and technical understanding. We specifically chose Claude for its strong reasoning capabilities and ability to maintain context across complex, multi-turn conversations.
For enterprise customers with BYOK (Bring Your Own Key) enabled, you can use your own Anthropic API key, giving you full control over your AI infrastructure and costs.
How does Stackpact ensure assessment quality and consistency?
We use carefully engineered system prompts that guide the AI through a structured assessment methodology. These prompts ensure:
- Consistent evaluation criteria across all assessments
- Natural, conversational flow that doesn't feel like a test
- Comprehensive coverage of all four scoring dimensions
- Adaptation to individual responses without losing assessment integrity
Our prompts are continuously refined based on assessment outcomes and user feedback to improve accuracy and relevance.
Is my assessment data used to train AI models?
No. Your assessment conversations are not used to train any AI models. We use Anthropic's API with data retention disabled, meaning your conversations are not stored by the AI provider after processing.
Your data is stored securely in our system solely for the purpose of providing your assessment results, tracking your progress, and generating personalized recommendations. Enterprise customers can also opt for self-hosted deployments for complete data control.
How often should team members retake the assessment?
We recommend quarterly assessments for most teams. This cadence allows enough time to implement recommendations and develop new skills while keeping pace with the rapidly evolving AI landscape.
However, teams actively focused on AI adoption initiatives may benefit from monthly check-ins, while stable teams might prefer semi-annual assessments. The platform tracks progress over time regardless of frequency.
Can Stackpact integrate with our existing tools?
Yes! Stackpact is designed for seamless integration into your existing workflow:
- Embeddable widget - Add to any internal portal with a single script tag
- SSO integration - Pass user IDs for personalized tracking
- Domain whitelisting - Control exactly where the widget can be embedded
- Custom configuration - Tailor assessments to your tech stack and policies
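As a rough illustration of the widget embed flow: the script URL, data attributes, and helper name below are hypothetical placeholders, not our documented embed API.

```typescript
// Hypothetical browser-side embed sketch (placeholder URL and attributes).
function embedStackpactWidget(userId: string): void {
  const script = document.createElement("script");
  script.src = "https://example.com/stackpact-widget.js"; // placeholder URL
  script.async = true;
  script.dataset.userId = userId; // SSO-provided ID for personalized tracking
  document.body.appendChild(script);
  // Domain whitelisting is enforced server-side: the widget only loads on
  // domains approved in your Stackpact configuration.
}
```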
Does Stackpact have Microsoft Teams or Slack integration?
Yes! Stackpact offers native plugins for both Microsoft Teams and Slack. These integrations allow your team to:
- Take AI skills assessments directly within their communication platform
- Receive automated reports and progress updates in dedicated channels
- Use slash commands for quick assessments (e.g., /stackpact assess)
- Set up scheduled reminders for periodic skill check-ins
The plugins ensure team members never need to leave the tools they already use daily, maximizing adoption and engagement.
Ready to assess your team's AI skills?
Experience our AI-powered assessment firsthand with a free demo.
14-day free trial. No credit card required.