Stop Guessing: 9 Prompt Engineering Frameworks That Actually Work
The Slot Machine Problem
You’ve been there. Staring at an empty chat window, cursor blinking like a disappointed parent, wondering how to phrase your question so the AI actually understands what you need. You type something. The response is… fine. But not quite right. So you rephrase. Clarify. Add context. Three messages later, you’re still not where you want to be, and your coffee has gone cold.
My cat, a British lilac with opinions about everything, watched me do this dance for months. She’d sit on my desk, judging silently as I rephrased the same prompt four different ways. Eventually, she started leaving the room when I opened ChatGPT. That’s when I knew something had to change.
Here’s the uncomfortable truth most people don’t want to hear: the majority of AI users treat conversations like a slot machine—pulling the lever and hoping for a jackpot. They type whatever comes to mind, hit enter, and pray. When the result disappoints, they blame the AI. “It doesn’t understand me,” they say. “It’s not smart enough.”
But prompt engineering isn’t gambling. It’s architecture. And like any architecture, it benefits from blueprints.
After hundreds of hours working with AI systems across software development, HR processes, and product launches, I’ve distilled everything into nine memorable frameworks. Each one is an acronym you can actually remember when you need it—not some academic taxonomy that requires a PhD to decode.
No more guessing. No more “let me rephrase that.” Just clear, structured communication that gets results on the first try.
This guide covers four distinct modes of AI interaction:
- ASK Mode — When you need information or analysis
- EDIT Mode — When you’re modifying existing work
- AGENT Mode — When AI works autonomously for you
- STRATEGIC Mode — When you’re planning or deciding
Each mode has its own mental model, its own pitfalls, and its own frameworks. Let’s start with the most common one.
Part 1: ASK Mode — Getting Answers That Actually Help
The most common AI interaction is simply asking questions. Yet this is where most people fail spectacularly. Why? Because they ask questions the way they’d ask a colleague who already knows their situation, their constraints, and their preferences.
That colleague has context. They know you work at a fintech startup, not a hospital. They know your deadline is Friday, not “whenever.” They know you’re technical, or you’re not. AI doesn’t have any of that context unless you provide it—every single time.
The SCOPE Framework — For Technical Questions
SCOPE is your go-to framework for specific, technical questions where you know what you want but need help getting there.
| Letter | Meaning | What to include |
|---|---|---|
| S | Situation | Your context, environment, and background |
| C | Constraint | Limitations, boundaries, what you can’t do |
| O | Outcome | What you want to walk away with |
| P | Precision | Format, detail level, structure of the answer |
| E | Expertise | How technical the response should be |
Think of SCOPE as setting the stage before the actor performs. Without it, you get a generic performance suitable for any audience. With it, you get exactly what your specific scene requires.
Real Example: HR Recruitment Scenario
Let’s see SCOPE in action. Imagine you’re an HR lead who needs interview questions for a senior developer position.
Without SCOPE (typical prompt):
“Help me write interview questions for a senior developer position.”
This might give you generic questions pulled from any “top interview questions” list on the internet. Useful if you want to sound like every other company. Not useful if you want to actually hire the right person.
With SCOPE:
Situation: I’m the HR lead at a 50-person fintech startup. We’re hiring a senior backend developer who’ll work on our payment processing system. Our tech stack is Python, PostgreSQL, and AWS. The team uses agile methodology with 2-week sprints.
Constraint: The interview is limited to 45 minutes. We can’t use live coding tests due to candidate feedback about stress. The role requires someone who can work independently—we don’t have dedicated tech leads for mentorship.
Outcome: A structured interview guide with questions that assess both technical depth and self-management ability.
Precision: I need 8-10 questions, grouped by competency area, with follow-up probes and red flags to watch for. Include a scoring rubric.
Expertise: I have a non-technical background, so include brief explanations of what good answers should contain.
See the difference? The second prompt accounts for your specific situation, constraints, and needs. The AI now understands you’re hiring for a fintech payment system, not a generic “developer.” It knows you can’t do live coding. It knows you need explanations because you’re not technical yourself.
The result is a tailored interview guide that you can actually use, not a list of questions you’ll have to filter through anyway.
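Once SCOPE clicks, you can even keep it as a reusable template. Here’s a minimal Python sketch — the field values below are placeholders drawn from the example above, not required wording:

```python
# A reusable SCOPE prompt template; fill the fields in for each new question.
SCOPE_TEMPLATE = """\
Situation: {situation}
Constraint: {constraint}
Outcome: {outcome}
Precision: {precision}
Expertise: {expertise}"""

prompt = SCOPE_TEMPLATE.format(
    situation="HR lead at a 50-person fintech startup hiring a senior backend developer",
    constraint="45-minute interview, no live coding tests",
    outcome="A structured interview guide covering technical depth and self-management",
    precision="8-10 questions grouped by competency, with follow-up probes and a rubric",
    expertise="Non-technical reader; explain what good answers should contain",
)
print(prompt)
```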
When to Use SCOPE
SCOPE works best when you need:
- Technical documentation or explanations
- Specific recommendations tailored to your situation
- Detailed how-to guides
- Comparative analysis between options
- Any question where “it depends” is the likely answer without context
The PRISM Framework — For Analytical Questions
While SCOPE is perfect for “how do I do X?” questions, PRISM shines when you’re trying to understand or solve something more complex. It’s for problems, not tasks.
| Letter | Meaning | What to include |
|---|---|---|
| P | Problem | One-sentence definition of what you’re trying to solve |
| R | Requirements | What the solution must and must not do |
| I | Inputs | Data, resources, and information you have available |
| S | Success | How you’ll know if you’ve solved it |
| M | Metrics | Specific, measurable indicators of progress |
PRISM forces you to think like an analyst before you even start. This is invaluable because often, the act of filling out PRISM reveals that you don’t actually understand your own problem yet. I’ve abandoned prompts halfway through writing them because PRISM made me realize I was asking the wrong question entirely.
Real Example: Employee Engagement Analysis
Without PRISM:
“Our employee engagement survey results came back. Help me understand what’s going on.”
This prompt will get you generic advice about employee engagement. Useful for a blog post. Useless for your specific situation.
With PRISM:
Problem: Our quarterly engagement survey shows a 15-point drop in “management trust” scores, but exit interviews don’t mention management issues. There’s a disconnect I need to understand.
Requirements: I need to identify the root cause without launching a witch hunt or making managers defensive. The analysis should be actionable—not just “improve communication.”
Inputs: I have the raw survey data (anonymized), last 6 months of exit interview summaries, our org chart showing recent team restructures, and the dates of our return-to-office policy announcement.
Success: I’ll know I’ve solved this when I can present a specific hypothesis to leadership with supporting evidence and a concrete 90-day improvement plan.
Metrics: Re-survey in 90 days should show at least 8-point improvement. Manager 1:1 attendance should increase by 20%.
This prompt transforms a vague “help me understand” into a focused analytical exercise with clear success criteria. The AI can now help you form hypotheses, identify patterns in your data, and suggest specific interventions—because it knows what success looks like.
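Where would the analysis itself start? If the anonymized survey data were a simple table, a first pass might just locate where the drop concentrates — a hypothetical sketch in pandas (all column names and numbers here are invented for illustration):

```python
import pandas as pd

# Invented example data: per-team "management trust" scores for two quarters.
surveys = pd.DataFrame({
    "quarter":    ["Q1", "Q1", "Q2", "Q2"],
    "team":       ["payments", "platform", "payments", "platform"],
    "mgmt_trust": [78, 74, 55, 71],
})

# One column per quarter, then the per-team drop.
by_team = surveys.pivot(index="team", columns="quarter", values="mgmt_trust")
by_team["delta"] = by_team["Q2"] - by_team["Q1"]
print(by_team.sort_values("delta"))  # payments: -23, platform: -3
```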
When to Use PRISM
PRISM is your go-to for:
- Diagnosing problems when the cause isn’t obvious
- Strategic analysis with multiple variables
- Situations where you have data but need insight
- Any problem where you need to define “done” before starting
```mermaid
flowchart TD
    A[New Question] --> B{Do I know what I want?}
    B -->|Yes, specific output| C[Use SCOPE]
    B -->|No, need to figure it out| D[Use PRISM]
    C --> E[Set the stage completely]
    D --> F[Define success before starting]
    E --> G[Get targeted response]
    F --> G
```
ASK Mode Summary
| Framework | Best For | Key Insight |
|---|---|---|
| SCOPE | Specific, technical questions | Context is everything |
| PRISM | Complex analysis & problem-solving | Define success before starting |
Pro tip: If you’re unsure which to use, ask yourself: “Do I know what I want, or do I need help figuring that out?” If you know what you want → SCOPE. If you need to figure it out → PRISM.
Part 2: EDIT Mode — Precision Changes Without Breaking Things
Asking for information is one thing. Asking AI to modify existing work—code, documents, processes—is entirely different. The stakes are higher. A vague edit request can introduce bugs, break consistency, or change things you didn’t intend to change.
My cat once walked across my keyboard while I was asking Claude to “clean up this code.” The resulting prompt was gibberish, but honestly, it was about as precise as most edit requests I see. “Make it better” isn’t an instruction. It’s an abdication.
Edit Mode frameworks are about surgical precision. You’re not asking for a new creation; you’re asking for a specific transformation of something that already exists. And transformations require knowing both the “before” and the “after.”
The DELTA Framework — For Targeted Modifications
DELTA is what you reach for when you need to change something specific without disturbing everything around it.
| Letter | Meaning | What to include |
|---|---|---|
| D | Define | Exactly what you’re changing (scope boundary) |
| E | Existing | Current state—paste it or describe it precisely |
| L | Logic | Why you’re making this change |
| T | Target | What it should look like after |
| A | Assert | What must NOT change (invariants) |
The “A” in DELTA is the most overlooked element. When you tell AI what to change, you must also tell it what to preserve. Otherwise, it might “improve” things you didn’t ask it to touch. I’ve seen AI helpfully rename variables, reorganize functions, and add error handling—all technically improvements, all completely unwanted.
Real Example: Test Case Modification
You have an existing automated test suite for a login system. A new requirement adds two-factor authentication (2FA), and you need to update your tests.
Without DELTA:
“Update my login tests to include 2FA.”
This could result in anything from a complete rewrite to a minimal patch. You won’t know until you see the output—and then you’ll spend time figuring out what changed.
With DELTA:
Define: I’m updating the test file `login_tests.py`, specifically the `test_successful_login()` and `test_invalid_credentials()` functions. Other test files and functions should not be touched.

Existing:

```python
def test_successful_login():
    response = login(user="test@example.com", password="valid123")
    assert response.status == 200
    assert response.session_token is not None
```

Logic: We’re adding SMS-based 2FA. After entering valid credentials, users now receive a 6-digit code. The login isn’t complete until this code is verified.

Target: The test should:
1. Submit credentials and receive a "2FA_REQUIRED" status
2. Call the verify_2fa() endpoint with a mock code
3. Only then assert successful login

Assert:
- Keep the existing test naming convention
- Don’t modify the `login()` helper function itself
- Preserve all existing assertions; add new ones
- Don’t touch `test_password_reset()` or other unrelated tests
The result is a precisely modified test that integrates 2FA without accidentally breaking anything else. No surprise refactoring. No helpful improvements you didn’t ask for.
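For illustration, here’s a minimal sketch of what the modified test might look like under those constraints. The `2FA_REQUIRED` status and `verify_2fa()` endpoint come from the prompt itself; the exact signatures are assumptions:

```python
def test_successful_login():
    # Step 1: valid credentials now return a 2FA challenge, not a session.
    response = login(user="test@example.com", password="valid123")
    assert response.status == "2FA_REQUIRED"

    # Step 2: verify with a mocked 6-digit code (helper signature assumed).
    verified = verify_2fa(session=response, code="123456")

    # Step 3: the original assertions, preserved, now run on the verified response.
    assert verified.status == 200
    assert verified.session_token is not None
```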
When to Use DELTA
DELTA is essential when:
- Modifying code that’s part of a larger system
- Updating documentation while preserving structure
- Changing processes with dependencies
- Any edit where “don’t break other things” matters
The MORPH Framework — For Architectural Changes
Sometimes you’re not making a small edit—you’re transforming how something is organized or structured. That’s where MORPH comes in.
| Letter | Meaning | What to include |
|---|---|---|
| M | Module | Scope of change (file, component, system) |
| O | Original | Current architecture, pattern, or structure |
| R | Rationale | Business or technical reason for transformation |
| P | Pattern | Target architecture, pattern, or structure |
| H | Hazards | Risks, breaking changes, migration concerns |
MORPH is DELTA’s big brother. Use it when you’re not just editing—you’re restructuring.
Real Example: Test Architecture Transformation
Your QA team has 500+ test cases scattered across three monolithic files. It’s becoming unmaintainable. New developers can’t find relevant tests. Your CI pipeline runs everything even for small changes. Something has to give.
Without MORPH:
“Help me reorganize our test files.”
This is like telling an architect “make my house better.” Better how? For what purpose?
With MORPH:
Module: The entire `/tests` directory, currently containing 3 monolithic files with 500+ test cases total.

Original:
- All tests in `test_api.py`, `test_ui.py`, `test_integration.py`
- No clear organization by feature or user journey
- Shared fixtures scattered throughout
- Test data hardcoded in each test

Rationale:
- New developers can’t find relevant tests quickly
- CI pipeline runs everything even for small changes
- Duplicate setup code across files
- We’re adopting a microservices architecture and need tests aligned to services

Pattern:

```
/tests
  /unit
    /user_service
    /payment_service
    /notification_service
  /integration
    /user_journeys
    /api_contracts
  /e2e
  /fixtures
    /test_data
  conftest.py
```

Hazards:
- 47 tests reference a shared database fixture that needs migration
- CI pipeline config must be updated
- Some tests have implicit ordering dependencies
- Team needs documentation for new structure
This gives AI the complete picture: where you are, where you’re going, why, and what might go wrong along the way. Now you can have a meaningful conversation about migration strategy, prioritization, and implementation steps.
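To make the first hazard concrete: after the migration, the shared database fixture would live in the top-level `conftest.py`, where all 47 dependent tests can still reach it. A minimal sketch, assuming pytest and an in-memory SQLite stand-in for the real database:

```python
# tests/conftest.py
import sqlite3

import pytest

@pytest.fixture(scope="session")
def shared_db():
    # One database for the whole run, discoverable by every test
    # under /tests without per-file imports.
    conn = sqlite3.connect(":memory:")
    yield conn
    conn.close()
```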
When to Use MORPH
MORPH is your choice when:
- Reorganizing code architecture
- Migrating from one pattern to another
- Restructuring teams, processes, or systems
- Any change where “before” and “after” are fundamentally different
EDIT Mode Summary
| Framework | Best For | Key Insight |
|---|---|---|
| DELTA | Targeted, surgical edits | Always specify what NOT to change |
| MORPH | Structural transformations | Document hazards before you start |
Pro tip: If you can describe your change in one sentence, use DELTA. If you need a diagram to explain it, use MORPH.
Part 3: AGENT Mode — Delegating Complex Tasks
Here’s where things get interesting. Modern AI systems can do more than answer questions—they can execute multi-step tasks autonomously. They can browse, code, analyze, and create without you holding their hand through every step.
But autonomy without boundaries is chaos. It’s like hiring an intern and saying “do whatever seems right.” Maybe you get a brilliant initiative. More likely, you get a mess you’ll spend hours cleaning up.
Agent Mode frameworks are about delegation. You’re not asking for an answer; you’re assigning a mission. And like any good delegation, you need to specify the goal, the rules, and when to report back.
The ORBIT Framework — For Autonomous Task Execution
ORBIT is how you delegate focused tasks to AI without losing control.
| Letter | Meaning | What to include |
|---|---|---|
| O | Objective | The primary mission goal |
| R | Rules | Hard boundaries—what to NEVER do |
| B | Behavior | Preferred approaches and patterns |
| I | Iteration | How and when to report progress |
| T | Termination | When to stop (success and failure criteria) |
The “R” in ORBIT is critical. When you give AI autonomy, you must define the electric fence. What actions are absolutely off-limits, no matter how helpful they might seem?
Real Example: Automated Code Review Agent
You want AI to review pull requests in your repository automatically. Without proper boundaries, it might approve dangerous changes, merge without human oversight, or comment on files outside the scope of the PR.
Without ORBIT:
“Review this pull request and suggest improvements.”
This is a recipe for inconsistent results. Sometimes you’ll get a thorough review, sometimes a surface-level scan, sometimes an overly critical nitpick of every semicolon.
With ORBIT:
Objective: Review pull request #234 for code quality, security issues, and adherence to our coding standards. Produce a review report that a human reviewer can verify quickly.
Rules:
- NEVER approve or merge the PR—only analyze and report
- NEVER modify any code directly
- NEVER comment on files outside the PR’s changeset
- Don’t flag style issues already covered by our linter
Behavior:
- Start with security-critical issues, then correctness, then style
- Reference our coding guidelines document when citing violations
- Provide specific line numbers, not vague descriptions
- Suggest fixes, don’t just identify problems
- Group related issues together
Iteration:
- Report critical security issues immediately, don’t wait for full review
- For PRs over 500 lines, provide a progress update at 50%
- If you encounter unfamiliar frameworks or patterns, flag for human review
Termination:
- Success: Complete report covering all changed files with severity ratings
- Failure: Stop and escalate if PR contains obfuscated code, credentials, or references external systems you can’t analyze
This prompt transforms a simple review request into a well-defined mission with clear boundaries and escalation paths. The AI knows when to proceed, when to pause, and when to stop entirely.
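The Rules section doesn’t have to stay prose, either. If you wrap the agent in code, the electric fence can be enforced mechanically. A minimal sketch — the action names are assumptions for illustration, not a real review-bot API:

```python
# Hard denylist mirroring the ORBIT Rules above: the wrapper refuses
# these actions no matter what the agent proposes.
FORBIDDEN_ACTIONS = {"approve_pr", "merge_pr", "modify_code", "comment_outside_changeset"}

def is_allowed(action: str) -> bool:
    """Return True only if the proposed action stays inside the fence."""
    return action not in FORBIDDEN_ACTIONS

assert not is_allowed("merge_pr")        # never merge
assert is_allowed("post_review_report")  # analysis and reporting are fine
```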
When to Use ORBIT
ORBIT excels when:
- Delegating repetitive but important tasks
- Setting up AI workflows that run regularly
- Any autonomous task where mistakes have consequences
- Building AI-powered tools or automations
The PILOT Framework — For Multi-Step Complex Workflows
While ORBIT is perfect for focused autonomous tasks, PILOT handles complex, multi-phase projects where different tools and resources come into play.
| Letter | Meaning | What to include |
|---|---|---|
| P | Plan | High-level steps to reach the goal |
| I | Interface | What tools, APIs, and resources are available |
| L | Limits | Resource constraints (time, budget, API calls) |
| O | Output | Expected deliverables at each stage |
| T | Triggers | When to pause and escalate to human |
PILOT is like giving a junior team member their first solo project. You outline the plan, show them the tools, set the guardrails, and tell them when to come back to you.
Real Example: Mobile App Feature Development
You’re using AI to help develop a new feature for your mobile app—a dark mode toggle. This isn’t a single task; it’s a multi-step project with dependencies and deliverables.
Without PILOT:
“Help me add dark mode to my app.”
This prompt could lead anywhere. The AI might ask clarifying questions, jump straight into code, or provide a high-level overview. You’ve lost control of the process.
With PILOT:
Plan:
- Audit current UI components for hardcoded colors
- Create a theming system with light/dark variants
- Implement a toggle component with system preference detection
- Update all screens to use the theming system
- Add persistence so the setting survives app restarts
- Create unit tests for theme switching logic
Interface:
- Codebase: React Native project in `/mobile-app`
- Design system: Figma file linked in README (read-only)
- APIs: No backend changes needed; theme stored locally
- Tools: Allowed to read files, create files, modify existing files
Limits:
- Don’t exceed 20 new files
- Each component modification should be a separate commit
- Don’t refactor unrelated code even if you spot issues
- Total task budget: 2 hours of active work
Output:
- After step 1: Report listing all files with hardcoded colors
- After step 3: Working toggle component for human review
- After step 6: Full PR with all changes, tests, and documentation
Triggers:
- Pause if any component uses a color system I don’t recognize
- Pause if implementing the toggle requires navigation changes
- Stop and report if more than 50 files need modification
- Escalate any changes to authentication or payment screens
This PILOT prompt gives the AI a complete project brief. It knows the plan, the tools, the constraints, what to deliver, and when to ask for help. You’ve transformed vague assistance into structured collaboration.
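Like ORBIT’s rules, the Triggers can be checked in code when the workflow is automated. A minimal sketch — the thresholds come from the prompt above, while the function and its inputs are assumptions:

```python
# Escalation check mirroring the PILOT Triggers: stop before touching
# too much, or anything sensitive.
SENSITIVE_SCREENS = {"auth", "payment"}

def should_escalate(files_to_modify: int, screens_touched: set[str]) -> bool:
    if files_to_modify > 50:  # "Stop and report if more than 50 files need modification"
        return True
    return bool(SENSITIVE_SCREENS & screens_touched)  # auth/payment screens

assert should_escalate(60, set())
assert should_escalate(3, {"payment"})
assert not should_escalate(12, {"settings"})
```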
When to Use PILOT
PILOT is ideal for:
- Feature development with multiple phases
- Any project that would normally need a brief
- Workflows involving multiple tools or systems
- Delegating work that spans hours or days
```mermaid
flowchart TD
    A[Delegation Task] --> B{Single focused task?}
    B -->|Yes| C[Use ORBIT]
    B -->|No, multiple phases| D[Use PILOT]
    C --> E[Define boundaries + termination]
    D --> F[Define plan + triggers]
    E --> G[Controlled autonomy]
    F --> G
```
AGENT Mode Summary
| Framework | Best For | Key Insight |
|---|---|---|
| ORBIT | Focused autonomous tasks | Define the electric fence clearly |
| PILOT | Multi-step complex projects | Specify triggers for human escalation |
Pro tip: Always test agent prompts with low-stakes tasks first. Once you trust the boundaries, gradually increase autonomy.
Part 4: STRATEGIC Mode — Planning Projects and Making Decisions
The frameworks so far have been tactical—getting things done. But some of the most valuable AI conversations happen at the strategic level: planning new projects, evaluating options, and making decisions.
Strategic Mode frameworks help you think, not just do. They’re particularly powerful for greenfield projects where you’re starting from zero and every decision compounds.
My cat, incidentally, is excellent at strategic thinking. She evaluates every lap for comfort potential, every window for bird-watching quality, every keyboard for optimal napping warmth. She never just picks the first option. Neither should you.
The VECTOR Framework — For Project Kickoff
VECTOR is your greenfield project checklist. It ensures you’ve thought through the full picture before writing a single line of code.
| Letter | Meaning | What to include |
|---|---|---|
| V | Vision | The ultimate goal and success picture |
| E | Ecosystem | Tech stack, integrations, dependencies |
| C | Customers | Who benefits and their specific needs |
| T | Timeline | Milestones, deadlines, phases |
| O | Ownership | Decision rights and responsibilities |
| R | Risks | What could go wrong and mitigation plans |
Real Example: HR Platform Greenfield Project
Your company decided to build an internal HR platform instead of buying one. You’re leading the project kickoff. Without proper framing, you’ll spend weeks in alignment meetings.
Without VECTOR:
“Help me plan our new HR platform project.”
This prompt will get you generic project planning advice. Templates. Best practices. Nothing specific to your situation.
With VECTOR:
Vision: A unified HR platform that handles recruitment, onboarding, performance reviews, and PTO management. Success means eliminating our current 5-tool mess and reducing HR admin time by 40%.
Ecosystem:
- Frontend: React with TypeScript
- Backend: Node.js microservices
- Database: PostgreSQL for structured data, S3 for documents
- Integrations: SSO with Okta, calendar sync with Google/Outlook, Slack notifications
- Hosting: AWS with Terraform IaC
- Must comply with GDPR and SOC2
Customers:
- HR Team (5 people): Need dashboard for all employee actions, bulk operations, reporting
- Managers (30): Need simple approval workflows, team views, no training required
- Employees (200): Need self-service for PTO, personal info updates, document access
- Finance: Need export capabilities for payroll integration
Timeline:
- Month 1-2: Core infrastructure, authentication, employee directory
- Month 3-4: PTO module + manager workflows
- Month 5-6: Recruitment module with external job posting
- Month 7-8: Performance review system
- Month 9: Polish, migration, training
- Month 10: Go-live with parallel running
Ownership:
- Product decisions: HR Director (Sarah)
- Technical architecture: Lead Developer (me)
- UX/Design: External contractor, needs 2-week review cycles
- Security review: CTO approval required before any external integrations
Risks:
- Data migration: 5 years of data in 5 different formats—need migration strategy
- Adoption: Managers resistant to new tools; need change management plan
- Scope creep: “One more feature” syndrome—strict MVP definition needed
- Key person dependency: Only one developer knows our Okta setup
- Compliance: GDPR audit before storing any EU employee data
Now you can have meaningful conversations about any aspect of this project. Want to dive into the migration strategy? The AI has context. Need to brainstorm adoption tactics? It knows your stakeholders. Want to challenge the timeline? It understands the dependencies.
When to Use VECTOR
VECTOR is essential for:
- New project kickoffs
- Bringing stakeholders onto the same page
- Creating project briefs or PRDs
- Any situation where you need “the full picture”
The ARCH Framework — For Technical Decisions
Big decisions need structured thinking. ARCH helps you evaluate architectural and technical choices rigorously.
| Letter | Meaning | What to include |
|---|---|---|
| A | Assumptions | What you’re taking as given (that might be wrong) |
| R | Requirements | Non-negotiable criteria the solution must meet |
| C | Choices | Options you’re considering |
| H | Hypothesis | Your current leaning and why |
ARCH forces intellectual honesty. By explicitly stating assumptions, you open them to challenge. By listing all choices, you avoid premature commitment.
Real Example: Testing Infrastructure Decision
Your team needs to decide on an end-to-end testing framework for a new web application. Everyone has opinions. Cypress vs. Playwright debates have broken out in Slack.
Without ARCH:
“Should we use Cypress or Playwright for our e2e tests?”
This frames the decision as binary when there might be other options. It also doesn’t give the AI enough context for a meaningful recommendation.
With ARCH:
Assumptions:
- Our developers will write and maintain tests, not a separate QA team
- We’ll have 200-500 e2e tests within the first year
- Our CI/CD pipeline uses GitHub Actions with limited parallel runners
- Test execution time matters—we want feedback in under 10 minutes
- We might need mobile browser testing eventually
Requirements:
- Must support Chrome, Firefox, and Safari
- Must integrate with our GitHub Actions CI pipeline
- Must have visual debugging capabilities for flaky test diagnosis
- Must support component testing (not just full e2e)
- Team ramp-up time under 1 week
- Must handle authentication flows without hacks
Choices:
- Cypress — Established, excellent DX, huge ecosystem, but limited cross-browser
- Playwright — Microsoft-backed, true cross-browser, newer ecosystem
- Selenium — Industry standard, any browser, but verbose and slower
- TestCafe — No WebDriver needed, good cross-browser, smaller community
Hypothesis: I’m leaning toward Playwright because:
- True cross-browser support without plugins
- Built-in parallelization fits our CI constraints
- The API is similar enough to Cypress that ramp-up should be fast
- Microsoft’s backing suggests long-term maintenance
However, I’m uncertain about the ecosystem maturity and whether our team’s Cypress experience transfers well.
This ARCH prompt invites genuine analysis. You’re not asking AI to confirm your choice—you’re asking it to stress-test your thinking. Challenge the assumptions. Question the requirements. Propose alternatives you haven’t considered.
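To ground the hypothesis, here’s roughly what a test would look like under the Playwright option — a minimal sketch assuming the pytest-playwright plugin, with a placeholder URL and selector:

```python
# Runs cross-browser via:
#   pytest --browser chromium --browser firefox --browser webkit
from playwright.sync_api import Page, expect

def test_login_page_renders(page: Page):
    page.goto("https://app.example.com/login")  # placeholder URL
    expect(page.get_by_role("button", name="Sign in")).to_be_visible()
```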
When to Use ARCH
ARCH is perfect for:
- Technology selection decisions
- Architecture design reviews
- Build vs. buy evaluations
- Any decision you’ll need to justify later
The NORTH Framework — For Strategic Planning
NORTH is your strategic planning compass, similar to how PASTOR works in marketing. It keeps you oriented toward what matters.
| Letter | Meaning | What to include |
|---|---|---|
| N | Need | Market need or problem you’re solving |
| O | Opportunity | Why now, why you, market sizing |
| R | Resources | What you have available (skills, capital, time) |
| T | Thesis | Your core hypothesis or bet |
| H | Horizon | Short, medium, and long-term milestones |
NORTH is strategic, not tactical. It’s about direction, not steps.
Real Example: Test Automation Platform Strategy
You’re considering building a test automation platform as a SaaS product. It’s a big bet. You need to think it through properly before committing.
Without NORTH:
“Is building a test automation SaaS a good idea?”
This invites opinion, not analysis. You’ll get generic pros and cons that apply to any SaaS business.
With NORTH:
Need: Mid-size development teams (20-100 engineers) struggle with test automation. They’ve outgrown manual testing but can’t justify dedicated QA engineers. Existing tools are either too complex (enterprise) or too limited (startup-focused).
Opportunity:
- Market timing: AI is making intelligent test generation possible for the first time
- Market gap: No player owns the “mid-market” segment specifically
- TAM: $15B test automation market, growing 15% annually
- Why me: 10+ years in test automation, built internal tools at three companies, know the pain points intimately
Resources:
- Skills: Deep test automation expertise, full-stack development capability
- Capital: €50K personal runway, no external funding yet
- Time: Can dedicate 30 hours/week while consulting part-time
- Network: Connections at 20+ potential design partner companies
Thesis: Teams don’t need another testing framework—they need a testing co-pilot. An AI-powered system that writes tests from user stories, maintains them when code changes, and explains failures in plain English will capture the mid-market by eliminating the QA expertise barrier.
Horizon:
- 3 months: MVP with AI test generation from user stories, 5 design partners
- 6 months: Beta launch, 20 paying customers, €5K MRR
- 12 months: Self-serve product, 100 customers, €30K MRR
- 24 months: Series A or profitable at €100K MRR
NORTH keeps strategic conversations grounded. Instead of abstract “should I?” discussions, you’re evaluating a specific hypothesis with clear milestones. The AI can now help you pressure-test your assumptions, identify gaps in your thinking, and suggest alternatives.
When to Use NORTH
NORTH guides:
- New product or business evaluation
- Strategic pivots and direction changes
- Investment or resource allocation decisions
- Annual or quarterly planning exercises
STRATEGIC Mode Summary
| Framework | Best For | Key Insight |
|---|---|---|
| VECTOR | Project kickoffs | Cover all six dimensions |
| ARCH | Technical decisions | State assumptions explicitly |
| NORTH | Strategic planning | Define horizons with specifics |
Pro tip: Use VECTOR for “what are we building?” Use ARCH for “how should we build it?” Use NORTH for “should we build it at all?”
Generative Engine Optimization
Here’s something most prompt engineering guides miss entirely: the quality of your prompt directly affects how AI systems will represent your work when others ask about it.
As AI increasingly mediates information discovery—through search, assistants, and generated summaries—the clarity of your source material matters more than ever. When your project documentation is structured with clear frameworks, when your decisions are documented with explicit assumptions and rationales, AI systems can represent your work accurately to others who inquire about it.
This is Generative Engine Optimization in practice. It’s not about gaming algorithms. It’s about creating content and documentation that AI can faithfully represent. If your project brief uses VECTOR, an AI assistant can accurately summarize your vision, ecosystem, and risks to a new team member. If your technical decision uses ARCH, future AI queries about “why did we choose Playwright?” get accurate, nuanced answers.
The frameworks in this guide aren’t just for your immediate AI interactions. They’re a documentation strategy. Every well-structured prompt becomes a retrievable artifact. Every NORTH-documented strategy becomes searchable institutional knowledge.
Think of it this way: you’re not just prompting AI today — you’re shaping how AI will represent your work tomorrow.
How We Evaluated These Frameworks
A brief note on methodology, because frameworks without validation are just opinions.
These nine frameworks emerged from a specific process. First, I catalogued over 300 of my own AI interactions over six months, noting which produced useful outputs on the first try and which required multiple refinements. Second, I identified patterns in the successful prompts—what context did they include that the unsuccessful ones didn’t?
Third, I abstracted these patterns into memorable acronyms. Memorability matters. A framework you can’t recall in the moment you need it is worthless. Every framework here has survived the “can I remember this at 11 PM when I’m tired and frustrated?” test.
Fourth, I tested each framework across multiple domains: HR, software testing, app development, and strategic planning. Frameworks that only worked in one domain were either refined or discarded.
Finally, I validated with others. Colleagues, clients, and workshop participants used these frameworks for their own work. Their feedback shaped the final versions you see here.
This isn’t academic research. It’s practitioner knowledge, refined through repetition and feedback.
The Complete Framework Reference
| Mode | Framework | Best For | Key Element |
|---|---|---|---|
| ASK | SCOPE | Technical questions | Set full context |
| ASK | PRISM | Analytical problems | Define success criteria |
| EDIT | DELTA | Targeted modifications | Specify invariants |
| EDIT | MORPH | Structural changes | Document hazards |
| AGENT | ORBIT | Autonomous tasks | Define boundaries |
| AGENT | PILOT | Complex workflows | Set escalation triggers |
| STRATEGIC | VECTOR | Project kickoffs | Cover all dimensions |
| STRATEGIC | ARCH | Technical decisions | State assumptions |
| STRATEGIC | NORTH | Strategic planning | Define horizons |
From Frameworks to Fluency
Here’s what separates prompt engineering novices from experts: novices memorize prompts, experts internalize patterns.
These nine frameworks aren’t meant to be filled out robotically every time you open an AI chat. They’re mental models. Once you’ve used SCOPE a dozen times, you’ll automatically think in terms of Situation, Constraints, Outcome, Precision, and Expertise—even for quick questions. The letters become a checklist that runs in the background.
The goal is unconscious competence. You want “setting context” to feel as natural as “saying hello.”
Start here:
- Pick ONE framework from each mode that resonates most
- Use it deliberately for your next 10 relevant interactions
- Notice how the quality of responses changes
- Gradually incorporate the others
The investment pays off quickly. Better prompts mean fewer clarification rounds, less frustration, and dramatically better outputs. You’ll wonder how you ever worked without them.
My cat, for the record, now stays on my desk when I use Claude. She’s noticed I close the laptop with less sighing these days. That’s the highest endorsement I can offer.
Take Your Team’s AI Skills to the Next Level
These frameworks are just the beginning. In my hands-on workshops, teams learn to:
- Apply these frameworks to their specific domains and workflows
- Build custom prompt libraries for recurring tasks
- Design AI-powered automations with proper guardrails
- Evaluate when AI is (and isn’t) the right tool
- Create institutional knowledge that AI can accurately represent
I work with development teams, HR departments, and cross-functional groups who want to move beyond “ChatGPT for email” to genuine AI-augmented productivity.
Interested in bringing this training to your organization?
The difference between teams that struggle with AI and teams that thrive isn’t intelligence—it’s structure. These frameworks provide that structure. A half-day workshop can transform how your team works with AI tools.
→ Contact me about workshops and training
Or connect with me on LinkedIn where I share prompt engineering insights weekly.
Jakub Jirák is a CTO and automation architect with 10+ years of experience building AI-augmented development workflows. He writes about practical AI applications at thinkdifferent.blog.