AI Development Environment Setup

CPSC 436C: Cloud Computing | 2025 Winter Term 1

This guide helps you set up a local AI development environment for critical collaboration and model comparison.

🎯 Your Goal

You should be able to:

  • Run AI models locally (not just web interfaces). Note: “locally” just means a machine you control.
  • Compare responses across different models
  • Document your AI interactions for course projects
  • Critically evaluate AI outputs for technical accuracy

⚠️ Important Notes

  • Time Investment: Plan 1-2 hours for initial setup
  • Disk Space: You’ll need 5-10GB free space for models
  • Internet: Initial model downloads can be large (1-4GB each)
  • Help Available: TA office hours specifically for setup support
1 Python Environment Setup

First, let’s set up a modern Python environment. We recommend uv, but traditional pip works too.

Using uv (Recommended)

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create project directory
mkdir ~/cpsc436c-ai
cd ~/cpsc436c-ai
# Initialize Python project
uv init
uv add litellm

Using pip (Traditional)

# Create project directory
mkdir ~/cpsc436c-ai
cd ~/cpsc436c-ai
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install requirements
pip install litellm
✅ Verification: Run python -c "import litellm; print('Success')"
2 Local AI Models Setup

Choose ONE of these options to run models locally:

Option A: Ollama

  1. Download from ollama.ai
  2. Install and start Ollama
  3. Download a model:
# Start with a smaller model for testing
ollama pull llama3.2:3b
# Or a more capable model (larger download)
ollama pull llama3.1:8b
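
To confirm the server is actually up before wiring anything else to it, you can query Ollama’s REST API with nothing but the standard library (a sketch; 11434 is Ollama’s default port, and /api/tags lists the models you’ve pulled):

```python
import json
import urllib.error
import urllib.request

def list_local_models(base_url="http://localhost:11434"):
    """Return the names of models the local Ollama server has, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None  # server not running or wrong port

models = list_local_models()
print("Ollama not reachable" if models is None else f"Available models: {models}")
```

If this prints the model you just pulled, the same REST interface that litellm uses in the next section is working.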

Option B: LM Studio

  1. Download from lmstudio.ai
  2. Install and launch LM Studio
  3. Browse and download a model (try llama-3.2-3b-instruct for starting)
  4. Start the local server in LM Studio
✅ Verification: You should be able to chat with your local model through the respective interface.
3 Model Comparison Setup

Set up litellm proxy to compare different models:

# Create a config file
cat > config.yaml << EOF
model_list:
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3.2:3b
      api_base: http://localhost:11434

  - model_name: gpt-5
    litellm_params:
      model: gpt-5
      # Add your OpenAI API key if you have one

  - model_name: claude-sonnet
    litellm_params:
      model: claude-sonnet-4-20250514
      # Add your Anthropic API key if you have one
EOF
# Start the proxy
litellm --config config.yaml
✅ Verification: Visit http://localhost:4000 to see the litellm interface.
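
To check the proxy from code rather than the browser, you can hit its OpenAI-compatible chat endpoint directly (a sketch using only the standard library; port 4000 and the local-llama name come from the config above):

```python
import json
import urllib.error
import urllib.request

def ask_proxy(prompt, model="local-llama", base_url="http://localhost:4000"):
    """Send one chat message through the litellm proxy; returns None if unreachable."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            body = json.load(resp)
        return body["choices"][0]["message"]["content"]
    except (urllib.error.URLError, OSError):
        return None

print(ask_proxy("Say hello in five words.") or "Proxy not reachable")
```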
4 Test Your Setup

Create a simple test to verify everything works:

# Create test.py
cat > test.py << EOF
import litellm

# Test local model
response = litellm.completion(
    model="ollama/llama3.2:3b",
    messages=[{"role": "user", "content": "Explain serverless vs containers in 2 sentences"}],
    api_base="http://localhost:11434"
)
print("Local model response:", response.choices[0].message.content)
EOF

# Run the test
python test.py
✅ Verification: You should get a reasonable 2-sentence explanation about serverless vs containers.
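
Once the single-model test passes, the same call generalizes into a small comparison harness (a sketch; compare_models is a name invented for this guide, and the example model entries should be replaced with whatever you actually configured):

```python
def compare_models(prompt, models, completion_fn=None):
    """Send one prompt to each configured model and collect the replies.

    `models` maps a display name to litellm parameters, e.g.
    {"local-llama": {"model": "ollama/llama3.2:3b",
                     "api_base": "http://localhost:11434"}}.
    """
    if completion_fn is None:
        import litellm  # imported lazily so the helper is easy to test offline
        completion_fn = litellm.completion
    results = {}
    for name, params in models.items():
        try:
            response = completion_fn(
                model=params["model"],
                messages=[{"role": "user", "content": prompt}],
                api_base=params.get("api_base"),
            )
            results[name] = response.choices[0].message.content
        except Exception as exc:  # keep going even if one model is unreachable
            results[name] = f"ERROR: {exc}"
    return results
```

Printing the returned dict side by side makes differences between models (or between a reachable and an unreachable one) obvious at a glance.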
5 Document Your First AI Collaboration

For Thursday’s class, try this exercise:

  • Ask your local model: “What’s the best database for my project?”
  • Note the generic response you get
  • Ask: “I need a database for unpredictable traffic, AWS free tier, fast key-value lookups for a URL shortener. Compare DynamoDB vs RDS Aurora Serverless with cost and performance trade-offs.”
  • Compare the quality of responses
  • Write 2-3 sentences about what you learned
✅ Verification: Bring your documented comparison to Thursday’s class.
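
One easy way to produce that documented comparison is a tiny logging helper (a sketch; log_comparison is a name invented here, and the commented litellm wiring assumes the Ollama model from earlier):

```python
def log_comparison(prompts, complete, path="comparison.md"):
    """Run each labeled prompt through `complete` and save the replies side by side.

    `complete` is any callable that maps a prompt string to a reply string.
    """
    with open(path, "w") as log:
        for label, prompt in prompts.items():
            log.write(f"## {label} prompt\n\n{complete(prompt)}\n\n")
    return path

# With litellm it might be wired up like this (assumes the Ollama model above):
# import litellm
# complete = lambda p: litellm.completion(
#     model="ollama/llama3.2:3b",
#     messages=[{"role": "user", "content": p}],
#     api_base="http://localhost:11434",
# ).choices[0].message.content
# log_comparison({"vague": "What's the best database for my project?"}, complete)
```

The resulting file gives you the vague-vs-specific responses in one place, ready for your 2-3 sentence reflection.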

🆘 Need Help?

  • TA Office Hours: Dedicated AI setup support sessions
  • Discord: #setup channel for setup questions
  • Common Issues: Check pinned messages in Discord for solutions
  • Alternative: If local setup fails, you can use web interfaces temporarily, but document what didn’t work

Remember: The goal is professional AI collaboration skills, not perfect technical setup. If you’re struggling, document the problems – that’s valuable learning too!