AI Development Environment Setup
CPSC 436C: Cloud Computing | 2025 Winter Term 1
This guide helps you set up a local AI development environment so you can collaborate critically with AI models and compare their outputs.
🎯 Your Goal
By the end of this setup, you should be able to:
- Run AI models locally (not just web interfaces). Note: “locally” just means a machine you control.
- Compare responses across different models
- Document your AI interactions for course projects
- Critically evaluate AI outputs for technical accuracy
⚠️ Important Notes
- Time Investment: Plan 1-2 hours for initial setup
- Disk Space: You’ll need 5-10GB free space for models
- Internet: Initial model downloads can be large (1-4GB each)
- Help Available: TA office hours specifically for setup support
First, set up a Python environment. We recommend uv (modern and fast), but pip works too.
Using uv (Recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create project directory
mkdir ~/cpsc436c-ai
cd ~/cpsc436c-ai
# Initialize Python project
uv init
uv add litellm
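To confirm the environment works before moving on, you can run a quick import check. This is a minimal sketch; the filename check.py is just an example, and with uv you would run it as uv run python check.py:

# check.py - sanity check that litellm imports inside the uv-managed environment
import litellm

print("litellm imported successfully")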
Using pip (Traditional)
mkdir ~/cpsc436c-ai
cd ~/cpsc436c-ai
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install requirements
pip install litellm
python -c "import litellm; print('Success')"
Choose ONE of these options to run models locally:
Option A: Ollama
- Download from ollama.ai
- Install and start Ollama
- Download a model:
ollama pull llama3.2:3b
# Or a more capable model (larger download)
ollama pull llama3.1:8b
Option B: LM Studio
- Download from lmstudio.ai
- Install and launch LM Studio
- Browse and download a model (llama-3.2-3b-instruct is a good starting point)
- Start the local server in LM Studio (see the quick test sketch below)
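If you chose LM Studio, you can talk to its local server through litellm as well. This is a minimal sketch that assumes LM Studio's default OpenAI-compatible endpoint at http://localhost:1234/v1 and that llama-3.2-3b-instruct matches the model you actually loaded; adjust both if yours differ:

# lmstudio_test.py - query LM Studio's OpenAI-compatible local server via litellm
import litellm

response = litellm.completion(
    model="openai/llama-3.2-3b-instruct",  # "openai/" prefix routes to any OpenAI-compatible endpoint
    api_base="http://localhost:1234/v1",   # LM Studio's default server address (assumed)
    api_key="lm-studio",                   # LM Studio ignores the key, but one must be supplied
    messages=[{"role": "user", "content": "Explain serverless vs containers in 2 sentences"}],
)
print(response.choices[0].message.content)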
Set up the litellm proxy so you can compare different models:
cat > config.yaml << EOF
model_list:
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3.2:3b
      api_base: http://localhost:11434
  - model_name: gpt-5
    litellm_params:
      model: gpt-5
      # Add your OpenAI API key if you have one
  - model_name: claude-sonnet
    litellm_params:
      model: claude-sonnet-4-20250514
      # Add your Anthropic API key if you have one
EOF

# Start the proxy
litellm --config config.yaml
Open http://localhost:4000 in your browser to see the litellm interface.
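Once the proxy is running, anything that speaks the OpenAI API can use it. Here is a small sketch for comparing models through the proxy; it assumes the proxy is on its default port 4000, that you have not enabled proxy auth keys, and that the cloud models will only answer if you added the corresponding API keys to config.yaml:

# proxy_test.py - send the same prompt to two models through the litellm proxy
import litellm

for model_name in ["local-llama", "claude-sonnet"]:  # names defined in config.yaml's model_list
    response = litellm.completion(
        model=f"openai/{model_name}",      # the proxy exposes an OpenAI-compatible API
        api_base="http://localhost:4000",
        api_key="anything",                # placeholder; only checked if you enabled proxy keys
        messages=[{"role": "user", "content": "What is a cold start in serverless computing?"}],
    )
    print(f"--- {model_name} ---")
    print(response.choices[0].message.content)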
Create a simple test to verify everything works:
cat > test.py << EOF
import litellm

# Test local model
response = litellm.completion(
    model="ollama/llama3.2:3b",
    messages=[{"role": "user", "content": "Explain serverless vs containers in 2 sentences"}],
    api_base="http://localhost:11434"
)
print("Local model response:", response.choices[0].message.content)
EOF
# Run the test (if you set up with uv: uv run python test.py)
python test.py
For Thursday’s class, try this exercise (an optional logging sketch follows the list):
- Ask your local model: “What’s the best database for my project?”
- Note the generic response you get
- Ask: “I need a database for unpredictable traffic, AWS free tier, fast key-value lookups for a URL shortener. Compare DynamoDB vs RDS Aurora Serverless with cost and performance trade-offs.”
- Compare the quality of responses
- Write 2-3 sentences about what you learned
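If you want to capture both exchanges for your write-up, here is a small logging sketch. It assumes Ollama is running locally with llama3.2:3b pulled; the output filename ai_exercise_log.md is just an example:

# exercise_log.py - run the vague and specific prompts, save both exchanges for your write-up
import litellm

prompts = [
    "What's the best database for my project?",
    "I need a database for unpredictable traffic, AWS free tier, fast key-value lookups "
    "for a URL shortener. Compare DynamoDB vs RDS Aurora Serverless with cost and "
    "performance trade-offs.",
]

with open("ai_exercise_log.md", "w") as log:
    for prompt in prompts:
        response = litellm.completion(
            model="ollama/llama3.2:3b",
            api_base="http://localhost:11434",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        log.write(f"## Prompt\n{prompt}\n\n## Response\n{answer}\n\n")
        print(f"Saved response for: {prompt[:40]}...")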
🆘 Need Help?
- TA Office Hours: Dedicated AI setup support sessions
- Discord: #setup channel for setup questions
- Common Issues: Check pinned messages in Discord for solutions
- Alternative: If local setup fails, you can use web interfaces temporarily, but document what didn’t work
Remember: The goal is professional AI collaboration skills, not perfect technical setup. If you’re struggling, document the problems – that’s valuable learning too!