diff --git a/README.md b/README.md index 2f47179..7a1f88d 100644 --- a/README.md +++ b/README.md @@ -72,48 +72,111 @@ This is a tutorial project of [Pocket Flow](https://github.com/The-Pocket/Pocket ## šŸš€ Getting Started -1. Clone this repository +1. **Clone this repository** + ```bash + git clone https://github.com/The-Pocket/Tutorial-Codebase-Knowledge.git + cd Tutorial-Codebase-Knowledge + ``` -2. Install dependencies: +2. **Set up a virtual environment (recommended)** ```bash - pip install -r requirements.txt + python -m venv venv + source venv/bin/activate # On Windows: venv\Scripts\activate ``` -3. Set up LLM in [`utils/call_llm.py`](./utils/call_llm.py) by providing credentials. By default, you can use the AI Studio key with this client for Gemini Pro 2.5: +3. **Install dependencies** + ```bash + pip install -r requirements.txt + ``` - ```python - client = genai.Client( - api_key=os.getenv("GEMINI_API_KEY", "your-api_key"), - ) +4. **Configure LLM access** + + The tool supports multiple LLM providers. Configure at least one: + + - **Google Gemini (default)**: + ```bash + # For Vertex AI: + export GEMINI_PROJECT_ID="your-project-id" + export GEMINI_LOCATION="us-central1" + # OR for AI Studio: + export GEMINI_API_KEY="your-api-key" + ``` + + - **Anthropic Claude**: + ```bash + export ANTHROPIC_API_KEY="your-api-key" + # Uncomment Claude function in utils/call_llm.py + ``` + + - **OpenAI**: + ```bash + export OPENAI_API_KEY="your-api-key" + # Uncomment OpenAI function in utils/call_llm.py + ``` + +5. **Set up GitHub token (recommended)** + ```bash + export GITHUB_TOKEN="your-github-token" ``` - You can use your own models. We highly recommend the latest models with thinking capabilities (Claude 3.7 with thinking, O1). You can verify that it is correctly set up by running: +6. **Verify your setup** ```bash python utils/call_llm.py ``` -4. Generate a complete codebase tutorial by running the main script: - ```bash - # Analyze a GitHub repository - python main.py --repo https://github.com/username/repo --include "*.py" "*.js" --exclude "tests/*" --max-size 50000 +7. **Generate a tutorial** + ```bash + # From a GitHub repository + python main.py --repo https://github.com/username/repo --include "*.py" "*.js" + + # Or from a local directory + python main.py --dir /path/to/your/codebase --include "*.py" + ``` - # Or, analyze a local directory - python main.py --dir /path/to/your/codebase --include "*.py" --exclude "*test*" +For detailed setup instructions, see [SETUP.md](./SETUP.md). - # Or, generate a tutorial in Chinese - python main.py --repo https://github.com/username/repo --language "Chinese" - ``` +## šŸš€ How to Run This Project - - `--repo` or `--dir` - Specify either a GitHub repo URL or a local directory path (required, mutually exclusive) - - `-n, --name` - Project name (optional, derived from URL/directory if omitted) - - `-t, --token` - GitHub token (or set GITHUB_TOKEN environment variable) - - `-o, --output` - Output directory (default: ./output) - - `-i, --include` - Files to include (e.g., "*.py" "*.js") - - `-e, --exclude` - Files to exclude (e.g., "tests/*" "docs/*") - - `-s, --max-size` - Maximum file size in bytes (default: 100KB) - - `--language` - Language for the generated tutorial (default: "english") +1. 
**Set up environment variables** (choose one option): + + Option 1: For Google Gemini (default): + ```bash + export GEMINI_PROJECT_ID="your-project-id" + export GEMINI_LOCATION="us-central1" + # OR for AI Studio instead of Vertex AI: + export GEMINI_API_KEY="your-api-key" + ``` + + Option 2: For Anthropic Claude (uncomment in call_llm.py): + ```bash + export ANTHROPIC_API_KEY="your-api-key" + ``` + + Option 3: For OpenAI O1 (uncomment in call_llm.py): + ```bash + export OPENAI_API_KEY="your-api-key" + ``` -The application will crawl the repository, analyze the codebase structure, generate tutorial content in the specified language, and save the output in the specified directory (default: ./output). +2. **Test LLM connection**: + ```bash + python utils/call_llm.py + ``` + +3. **Generate a tutorial from a GitHub repository**: + ```bash + python main.py --repo https://github.com/username/repo --include "*.py" + ``` + +4. **Or analyze a local codebase**: + ```bash + python main.py --dir /path/to/your/code --include "*.py" "*.js" + ``` + +5. **Check the generated output**: + ```bash + cd output + # View the generated tutorial files + ``` ## šŸ’” Development Tutorial diff --git a/SETUP.md b/SETUP.md new file mode 100644 index 0000000..4368ea7 --- /dev/null +++ b/SETUP.md @@ -0,0 +1,150 @@ +# Detailed Setup Guide for AI Codebase Knowledge Builder + +This guide provides comprehensive instructions for setting up and configuring the AI Codebase Knowledge Builder tool. + +## Prerequisites + +- Python 3.8 or newer +- Git (for cloning repositories) +- Access to at least one of the supported LLM providers: + - Google Gemini (default) + - Anthropic Claude (optional) + - OpenAI (optional) + +## Step 1: Clone the Repository + +```bash +git clone https://github.com/The-Pocket/Tutorial-Codebase-Knowledge.git +cd Tutorial-Codebase-Knowledge +``` + +## Step 2: Create and Activate a Virtual Environment (Recommended) + +### For Linux/macOS +```bash +python -m venv venv +source venv/bin/activate +``` + +### For Windows +```bash +python -m venv venv +venv\Scripts\activate +``` + +## Step 3: Install Dependencies + +```bash +pip install -r requirements.txt +``` + +## Step 4: Configure LLM Access + +You need to set up access to at least one Language Model provider. The project uses Google Gemini by default but supports others. + +### Option 1: Google Gemini (Default) + +Choose one of these methods: + +#### Using Vertex AI +1. Create a Google Cloud project and enable Vertex AI +2. Set environment variables: + ```bash + export GEMINI_PROJECT_ID="your-project-id" + export GEMINI_LOCATION="us-central1" # Or your preferred region + ``` + For Windows: + ``` + set GEMINI_PROJECT_ID=your-project-id + set GEMINI_LOCATION=us-central1 + ``` + +#### Using AI Studio +1. Get an API key from [Google AI Studio](https://makersuite.google.com/app/apikey) +2. Set the API key as an environment variable: + ```bash + export GEMINI_API_KEY="your-api-key" + ``` + For Windows: + ``` + set GEMINI_API_KEY=your-api-key + ``` + +### Option 2: Anthropic Claude + +1. Get an API key from [Anthropic](https://console.anthropic.com/) +2. Set the API key: + ```bash + export ANTHROPIC_API_KEY="your-api-key" + ``` + For Windows: + ``` + set ANTHROPIC_API_KEY=your-api-key + ``` +3. Edit `utils/call_llm.py` to uncomment the Claude implementation and comment out other implementations + +### Option 3: OpenAI + +1. Get an API key from [OpenAI](https://platform.openai.com/) +2. 
Set the API key: + ```bash + export OPENAI_API_KEY="your-api-key" + ``` + For Windows: + ``` + set OPENAI_API_KEY=your-api-key + ``` +3. Edit `utils/call_llm.py` to uncomment the OpenAI implementation and comment out other implementations + +## Step 5: GitHub Token (Optional but Recommended) + +For accessing GitHub repositories, especially private ones or to avoid rate limits: + +1. Generate a GitHub token at [GitHub Settings](https://github.com/settings/tokens) + - For public repositories: Select `public_repo` scope + - For private repositories: Select `repo` scope +2. Set the token: + ```bash + export GITHUB_TOKEN="your-github-token" + ``` + For Windows: + ``` + set GITHUB_TOKEN=your-github-token + ``` + +## Step 6: Verify Setup + +Test your LLM configuration: + +```bash +python utils/call_llm.py +``` + +You should see a response from the configured LLM provider. + +## Troubleshooting + +### LLM Connection Issues +- **Error**: "Failed to connect to LLM API" + - Check your API keys and environment variables + - Verify network connection + - Ensure the correct model name is specified + +### GitHub Access Issues +- **Error**: "Repository not found" + - Check if the repository exists and is accessible + - Verify GitHub token permissions + - For private repositories, ensure your token has the `repo` scope + +### File Size Limitations +- **Error**: "Skipping file: size exceeds limit" + - Increase the `--max-size` parameter for larger files + - Or exclude large files using the `--exclude` parameter + +## Additional Configuration + +- Create a `.env` file in the project root to store environment variables permanently +- Customize logging by modifying the `LOG_DIR` environment variable +- Adjust caching behavior by editing the cache settings in `utils/call_llm.py` + +For more information, refer to the main [README.md](./README.md). diff --git a/main.py b/main.py index e0ccc82..b2a2cb8 100644 --- a/main.py +++ b/main.py @@ -1,9 +1,13 @@ import dotenv import os import argparse +import sys +import textwrap # Import the function that creates the flow from flow import create_tutorial_flow +from utils.call_llm import call_llm +# Load environment variables from .env file if present dotenv.load_dotenv() # Default file patterns @@ -19,9 +23,39 @@ "legacy/*", ".git/*", ".github/*", ".next/*", ".vscode/*", "obj/*", "bin/*", "node_modules/*", "*.log" } +# Validate setup function +def validate_setup(): + # Test LLM configuration + try: + print("Validating LLM setup...") + call_llm("Hello, testing LLM connection.", use_cache=False) + print("āœ… LLM connection successful!") + except Exception as e: + print(f"\nāŒ LLM configuration error: {str(e)}") + print("\nPlease check your LLM setup. 
See SETUP.md for detailed instructions.") + return False + return True + # --- Main Function --- def main(): - parser = argparse.ArgumentParser(description="Generate a tutorial for a GitHub codebase or local directory.") + parser = argparse.ArgumentParser( + description="Generate a tutorial for a GitHub codebase or local directory.", + formatter_class=argparse.RawDescriptionHelpFormatter, + epilog=textwrap.dedent(''' + Example commands: + ----------------- + # Generate tutorial from GitHub repo: + python main.py --repo https://github.com/username/repo --include "*.py" "*.js" --exclude "tests/*" + + # Generate tutorial from local directory: + python main.py --dir ./my-project --include "*.py" --max-size 200000 + + # Generate tutorial in Chinese: + python main.py --repo https://github.com/username/repo --language "Chinese" + + For detailed setup instructions, see SETUP.md + ''') + ) # Create mutually exclusive group for source source_group = parser.add_mutually_exclusive_group(required=True) @@ -34,17 +68,32 @@ def main(): parser.add_argument("-i", "--include", nargs="+", help="Include file patterns (e.g. '*.py' '*.js'). Defaults to common code files if not specified.") parser.add_argument("-e", "--exclude", nargs="+", help="Exclude file patterns (e.g. 'tests/*' 'docs/*'). Defaults to test/build directories if not specified.") parser.add_argument("-s", "--max-size", type=int, default=100000, help="Maximum file size in bytes (default: 100000, about 100KB).") - # Add language parameter for multi-language support parser.add_argument("--language", default="english", help="Language for the generated tutorial (default: english)") + parser.add_argument("--skip-validation", action="store_true", help="Skip validation of LLM setup") + parser.add_argument("--verbose", "-v", action="store_true", help="Enable verbose output") args = parser.parse_args() + # Set verbose mode if requested + if args.verbose: + import logging + logging.basicConfig(level=logging.INFO) + + # Validate environment and setup unless skipped + if not args.skip_validation: + if not validate_setup(): + sys.exit(1) + # Get GitHub token from argument or environment variable if using repo github_token = None if args.repo: github_token = args.token or os.environ.get('GITHUB_TOKEN') if not github_token: print("Warning: No GitHub token provided. 
You might hit rate limits for public repositories.")
+            print("For a better experience, set the GITHUB_TOKEN environment variable or use the --token option.")
+
+    # Create output directory if it doesn't exist
+    os.makedirs(args.output, exist_ok=True)

     # Initialize the shared dictionary with inputs
     shared = {
@@ -72,13 +121,37 @@ def main():
     }

     # Display starting message with repository/directory and language
-    print(f"Starting tutorial generation for: {args.repo or args.dir} in {args.language.capitalize()} language")
-
-    # Create the flow instance
-    tutorial_flow = create_tutorial_flow()
-
-    # Run the flow
-    tutorial_flow.run(shared)
+    source_info = args.repo if args.repo else args.dir
+    print(f"Starting tutorial generation for: {source_info}")
+    print(f"Language: {args.language.capitalize()}")
+    print(f"Output directory: {os.path.abspath(args.output)}")
+    print(f"Maximum file size: {args.max_size} bytes")
+    print(f"Include patterns: {shared['include_patterns']}")
+    print(f"Exclude patterns: {shared['exclude_patterns']}")
+
+    try:
+        # Create the flow instance
+        tutorial_flow = create_tutorial_flow()
+
+        # Run the flow
+        tutorial_flow.run(shared)
+
+        # Show final success message with output location
+        if shared.get("final_output_dir"):
+            print(f"\nāœ… Tutorial generation complete!")
+            print(f"Output directory: {os.path.abspath(shared['final_output_dir'])}")
+            print(f"Main tutorial index: {os.path.join(os.path.abspath(shared['final_output_dir']), 'index.md')}")
+    except KeyboardInterrupt:
+        print("\n\nProcess interrupted by user. Exiting...")
+        sys.exit(1)
+    except Exception as e:
+        print(f"\nāŒ Error generating tutorial: {str(e)}")
+        if args.verbose:
+            import traceback
+            traceback.print_exc()
+        else:
+            print("Run with --verbose for more details")
+        sys.exit(1)

 if __name__ == "__main__":
     main()
diff --git a/requirements.txt b/requirements.txt
index 06253bc..6fd248c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,7 +1,13 @@
-pocketflow>=0.0.1
+# Core dependencies
+pocketflow>=0.0.1
 pyyaml>=6.0
-requests>=2.28.0
-gitpython>=3.1.0
-google-cloud-aiplatform>=1.25.0
-google-genai>=1.9.0
 python-dotenv>=1.0.0
+gitpython>=3.1.0
+
+# LLM providers - google-genai is used by default; uncomment the others if you switch providers in utils/call_llm.py
+google-genai>=1.9.0
+# openai>=1.3.0
+# anthropic>=0.8.0
+
+# HTTP requests
+requests>=2.28.0
diff --git a/utils/call_llm.py b/utils/call_llm.py
index 0d794b4..39d72f5 100644
--- a/utils/call_llm.py
+++ b/utils/call_llm.py
@@ -2,6 +2,7 @@
 import os
 import logging
 import json
+import sys
 from datetime import datetime

 # Configure logging
@@ -17,11 +18,26 @@
 file_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
 logger.addHandler(file_handler)

+# Add console handler for errors
+console_handler = logging.StreamHandler(sys.stderr)
+console_handler.setLevel(logging.ERROR)
+console_handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
+logger.addHandler(console_handler)
+
 # Simple cache configuration
-cache_file = "llm_cache.json"
+cache_file = os.getenv("CACHE_FILE", "llm_cache.json")
+use_cache_default = os.getenv("USE_CACHE", "true").lower() == "true"
+
+# Validate environment variables function
+def validate_env_for_provider(provider_name, required_vars):
+    missing = [var for var in required_vars if not os.getenv(var)]
+    if missing:
+        logger.error(f"{provider_name} configuration incomplete. 
Missing: {', '.join(missing)}") + return False + return True -# By default, we Google Gemini 2.5 pro, as it shows great performance for code understanding -def call_llm(prompt: str, use_cache: bool = True) -> str: +# By default, we use Google Gemini 2.5 pro, as it shows great performance for code understanding +def call_llm(prompt: str, use_cache: bool = use_cache_default) -> str: # Log the prompt logger.info(f"PROMPT: {prompt}") @@ -33,93 +49,229 @@ def call_llm(prompt: str, use_cache: bool = True) -> str: try: with open(cache_file, 'r') as f: cache = json.load(f) - except: - logger.warning(f"Failed to load cache, starting with empty cache") + except Exception as e: + logger.warning(f"Failed to load cache: {e}. Starting with empty cache") # Return from cache if exists if prompt in cache: - logger.info(f"RESPONSE: {cache[prompt]}") + logger.info(f"CACHE HIT: Using cached response") return cache[prompt] - # Call the LLM if not in cache or cache disabled - client = genai.Client( - vertexai=True, - # TODO: change to your own project id and location - project=os.getenv("GEMINI_PROJECT_ID", "your-project-id"), - location=os.getenv("GEMINI_LOCATION", "us-central1") - ) - # You can comment the previous line and use the AI Studio key instead: - # client = genai.Client( - # api_key=os.getenv("GEMINI_API_KEY", "your-api_key"), - # ) - model = os.getenv("GEMINI_MODEL", "gemini-2.5-pro-exp-03-25") - response = client.models.generate_content( - model=model, - contents=[prompt] - ) - response_text = response.text - - # Log the response - logger.info(f"RESPONSE: {response_text}") - - # Update cache if enabled - if use_cache: - # Load cache again to avoid overwrites - cache = {} - if os.path.exists(cache_file): + try: + # Determine which provider to use based on environment variables + using_vertex = bool(os.getenv("GEMINI_PROJECT_ID")) + using_ai_studio = bool(os.getenv("GEMINI_API_KEY")) + + # Call the LLM if not in cache or cache disabled + if using_vertex: + # Validate Vertex AI configuration + if not validate_env_for_provider("Vertex AI", ["GEMINI_PROJECT_ID", "GEMINI_LOCATION"]): + raise ValueError("Missing Vertex AI configuration. Set GEMINI_PROJECT_ID and GEMINI_LOCATION.") + + client = genai.Client( + vertexai=True, + project=os.getenv("GEMINI_PROJECT_ID"), + location=os.getenv("GEMINI_LOCATION", "us-central1") + ) + logger.info("Using Vertex AI for Gemini") + elif using_ai_studio: + # Validate AI Studio configuration + if not validate_env_for_provider("AI Studio", ["GEMINI_API_KEY"]): + raise ValueError("Missing AI Studio configuration. Set GEMINI_API_KEY.") + + client = genai.Client( + api_key=os.getenv("GEMINI_API_KEY"), + ) + logger.info("Using AI Studio for Gemini") + else: + raise ValueError( + "Google Gemini configuration not found. Please set either:\n" + "1. GEMINI_PROJECT_ID and GEMINI_LOCATION for Vertex AI, or\n" + "2. GEMINI_API_KEY for AI Studio\n" + "See SETUP.md for detailed instructions." + ) + + model = os.getenv("GEMINI_MODEL", "gemini-2.5-pro-exp-03-25") + logger.info(f"Using model: {model}") + + response = client.models.generate_content( + model=model, + contents=[prompt] + ) + response_text = response.text + + # Log the response + logger.info(f"RESPONSE: {response_text[:200]}... 
(truncated)") + + # Update cache if enabled + if use_cache: + # Load cache again to avoid overwrites + cache = {} + if os.path.exists(cache_file): + try: + with open(cache_file, 'r') as f: + cache = json.load(f) + except: + pass + + # Add to cache and save + cache[prompt] = response_text try: - with open(cache_file, 'r') as f: - cache = json.load(f) - except: - pass + with open(cache_file, 'w') as f: + json.dump(cache, f) + except Exception as e: + logger.error(f"Failed to save cache: {e}") - # Add to cache and save - cache[prompt] = response_text - try: - with open(cache_file, 'w') as f: - json.dump(cache, f) - except Exception as e: - logger.error(f"Failed to save cache: {e}") - - return response_text + return response_text + + except Exception as e: + logger.error(f"LLM call failed: {str(e)}") + print(f"\nError calling LLM: {str(e)}") + print("Please check your configuration and API keys. See SETUP.md for details.") + raise # # Use Anthropic Claude 3.7 Sonnet Extended Thinking -# def call_llm(prompt, use_cache: bool = True): -# from anthropic import Anthropic -# client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY", "your-api-key")) -# response = client.messages.create( -# model="claude-3-7-sonnet-20250219", -# max_tokens=21000, -# thinking={ -# "type": "enabled", -# "budget_tokens": 20000 -# }, -# messages=[ -# {"role": "user", "content": prompt} -# ] -# ) -# return response.content[1].text +# def call_llm(prompt, use_cache: bool = use_cache_default): +# try: +# from anthropic import Anthropic +# +# # Validate configuration +# if not validate_env_for_provider("Anthropic", ["ANTHROPIC_API_KEY"]): +# raise ValueError("Missing Anthropic API key. Set ANTHROPIC_API_KEY environment variable.") +# +# logger.info("Using Anthropic Claude") +# +# # Check cache if enabled +# if use_cache: +# cache = {} +# if os.path.exists(cache_file): +# try: +# with open(cache_file, 'r') as f: +# cache = json.load(f) +# except Exception as e: +# logger.warning(f"Failed to load cache: {e}") +# +# if prompt in cache: +# logger.info("CACHE HIT: Using cached response") +# return cache[prompt] +# +# client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY")) +# model = os.getenv("ANTHROPIC_MODEL", "claude-3-7-sonnet-20250219") +# logger.info(f"Using model: {model}") +# +# response = client.messages.create( +# model=model, +# max_tokens=21000, +# thinking={ +# "type": "enabled", +# "budget_tokens": 20000 +# }, +# messages=[ +# {"role": "user", "content": prompt} +# ] +# ) +# response_text = response.content[1].text +# +# # Log and cache response +# logger.info(f"RESPONSE: {response_text[:200]}... (truncated)") +# if use_cache: +# cache = {} +# if os.path.exists(cache_file): +# try: +# with open(cache_file, 'r') as f: +# cache = json.load(f) +# except: +# pass +# cache[prompt] = response_text +# try: +# with open(cache_file, 'w') as f: +# json.dump(cache, f) +# except Exception as e: +# logger.error(f"Failed to save cache: {e}") +# +# return response_text +# except Exception as e: +# logger.error(f"Anthropic LLM call failed: {str(e)}") +# print(f"\nError calling Anthropic LLM: {str(e)}") +# print("Please check your configuration and API key. 
See SETUP.md for details.") +# raise -# # Use OpenAI o1 -# def call_llm(prompt, use_cache: bool = True): -# from openai import OpenAI -# client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "your-api-key")) -# r = client.chat.completions.create( -# model="o1", -# messages=[{"role": "user", "content": prompt}], -# response_format={ -# "type": "text" -# }, -# reasoning_effort="medium", -# store=False -# ) -# return r.choices[0].message.content +# # Use OpenAI O1 +# def call_llm(prompt, use_cache: bool = use_cache_default): +# try: +# from openai import OpenAI +# +# # Validate configuration +# if not validate_env_for_provider("OpenAI", ["OPENAI_API_KEY"]): +# raise ValueError("Missing OpenAI API key. Set OPENAI_API_KEY environment variable.") +# +# logger.info("Using OpenAI") +# +# # Check cache if enabled +# if use_cache: +# cache = {} +# if os.path.exists(cache_file): +# try: +# with open(cache_file, 'r') as f: +# cache = json.load(f) +# except Exception as e: +# logger.warning(f"Failed to load cache: {e}") +# +# if prompt in cache: +# logger.info("CACHE HIT: Using cached response") +# return cache[prompt] +# +# client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) +# model = os.getenv("OPENAI_MODEL", "o1") +# logger.info(f"Using model: {model}") +# +# r = client.chat.completions.create( +# model=model, +# messages=[{"role": "user", "content": prompt}], +# response_format={ +# "type": "text" +# }, +# reasoning_effort=os.getenv("OPENAI_REASONING", "medium"), +# store=False +# ) +# response_text = r.choices[0].message.content +# +# # Log and cache response +# logger.info(f"RESPONSE: {response_text[:200]}... (truncated)") +# if use_cache: +# cache = {} +# if os.path.exists(cache_file): +# try: +# with open(cache_file, 'r') as f: +# cache = json.load(f) +# except: +# pass +# cache[prompt] = response_text +# try: +# with open(cache_file, 'w') as f: +# json.dump(cache, f) +# except Exception as e: +# logger.error(f"Failed to save cache: {e}") +# +# return response_text +# except Exception as e: +# logger.error(f"OpenAI LLM call failed: {str(e)}") +# print(f"\nError calling OpenAI LLM: {str(e)}") +# print("Please check your configuration and API key. See SETUP.md for details.") +# raise if __name__ == "__main__": test_prompt = "Hello, how are you?" - # First call - should hit the API - print("Making call...") - response1 = call_llm(test_prompt, use_cache=False) - print(f"Response: {response1}") - + try: + # First call - should hit the API + print("Testing LLM connection...") + response = call_llm(test_prompt, use_cache=False) + print(f"\nāœ… LLM test successful! Response: {response[:100]}...") + print("\nYour LLM setup is working correctly.") + except Exception as e: + print(f"\nāŒ LLM test failed: {e}") + print("\nPlease check your configuration and API keys.") + print("See SETUP.md for detailed setup instructions.") + sys.exit(1) +
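
SETUP.md above suggests keeping the environment variables in a `.env` file, and `main.py` calls `dotenv.load_dotenv()` at startup, so a file like the following in the project root is picked up automatically. This is only a sketch using the variable names that appear in this change; every value is a placeholder, and variables already exported in the shell take precedence because `load_dotenv()` does not override existing ones by default.

```bash
# .env - read by dotenv.load_dotenv() in main.py; all values are placeholders

# Google Gemini via AI Studio (or set GEMINI_PROJECT_ID / GEMINI_LOCATION for Vertex AI instead)
GEMINI_API_KEY="your-api-key"
# GEMINI_MODEL="gemini-2.5-pro-exp-03-25"   # optional; this is the default in utils/call_llm.py

# GitHub token to avoid API rate limits (optional but recommended)
GITHUB_TOKEN="your-github-token"

# Optional knobs read by utils/call_llm.py
# CACHE_FILE="llm_cache.json"   # default cache location
# USE_CACHE="true"              # set to "false" to disable the prompt cache
# LOG_DIR="logs"                # log directory; the name here is only an example
```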
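Since the detailed flag reference was dropped from the README in this change, a combined invocation using the options defined in `main.py`'s argument parser may help; the repository URL, project name, and patterns are placeholders, and the optional flags fall back to the defaults documented in `main.py` when omitted.

```bash
python main.py \
  --repo https://github.com/username/repo \
  --name my-project \
  --output ./output \
  --include "*.py" "*.js" \
  --exclude "tests/*" "docs/*" \
  --max-size 100000 \
  --language "english" \
  --verbose
```

`--dir /path/to/code` can be used in place of `--repo` (the two are mutually exclusive), and `--skip-validation` skips the LLM connectivity check that now runs before generation.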