LM Studio & Ollama: Local LLM Setup Guide

Introduction

Running large language models (LLMs) locally is becoming increasingly popular among developers, AI enthusiasts, and privacy-conscious users. Tools like LM Studio and Ollama make it easy to install and run advanced models (such as LLaMA, Mistral, and Gemma) directly on your machine without cloud dependencies.

In this guide, you’ll learn how to install LM Studio and Ollama on Windows, macOS, and Linux, and how to set up your first model for local use.


1. What Are LM Studio and Ollama?

  • LM Studio:
    A user-friendly desktop application that allows you to download, run, and manage open-source language models locally. It provides a clean interface for prompt testing and model configuration.
  • Ollama:
    A command-line tool (with a simple UI) designed for efficient local model serving. Ollama lets you run LLMs with minimal setup and integrate them into your development workflows.

Both tools support popular models like LLaMA 3, Mistral, Phi-3, and Gemma.


2. Prerequisites

Before installation, ensure you have:

  • A computer with at least 8 GB RAM (16 GB+ recommended for larger models)
  • Stable internet connection for downloading models
  • Administrator privileges (for installation)
  • Optional: GPU support for improved performance
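As a rough sizing guide, Ollama's documentation suggests at least 8 GB of RAM for 7B models and 16 GB for 13B models. A quick way to check total memory before picking a model (Linux shown; on macOS use `sysctl -n hw.memsize`, on Windows check Task Manager):

```shell
# Print total system memory on Linux so you can match it against
# the model size you plan to download:
grep MemTotal /proc/meminfo
```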

3. Installing LM Studio

a. On macOS

  1. Visit https://lmstudio.ai
  2. Download the macOS .dmg file
  3. Drag LM Studio to the Applications folder
  4. Launch the app and allow permissions if prompted
  5. (Optional) Enable GPU acceleration via settings

b. On Windows

  1. Download the .exe installer from LM Studio’s website
  2. Run the installer and follow the wizard
  3. Open LM Studio from the Start Menu
  4. Configure your model directory and enable GPU support if available

c. On Linux (Experimental)

  1. Download the .AppImage file
  2. Run the following in a terminal:
     chmod +x LM-Studio.AppImage
     ./LM-Studio.AppImage
  3. Allow dependencies to install when prompted.

👉 Note: Some distributions may require additional libraries.
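The most common missing piece is FUSE, which AppImages use to mount themselves at launch. A quick check (the `libfuse2` package name below is the usual fix on Debian/Ubuntu-based distros, but it varies elsewhere):

```shell
# AppImages use FUSE to self-mount. Check whether it is available:
ls /dev/fuse 2>/dev/null && echo "FUSE available" || echo "FUSE not available"
# On Debian/Ubuntu-based distros, install the usual missing package with:
# sudo apt install libfuse2
```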


4. Installing Ollama

a. On macOS

curl -fsSL https://ollama.com/install.sh | sh

After installation, verify:

ollama --version

b. On Windows

  1. Download the Windows installer from https://ollama.com
  2. Follow the setup wizard
  3. Open Command Prompt or PowerShell and run:
     ollama run llama3

c. On Linux

curl -fsSL https://ollama.com/install.sh | sh

To verify:

ollama list

Ollama automatically runs a background service after installation.
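You can confirm the service is up by hitting its default local port, 11434; a running instance replies with a short status message (the fallback echo below is just for illustration):

```shell
# Ollama's server listens on localhost:11434 by default.
# A healthy instance responds with "Ollama is running".
curl -s http://localhost:11434 || echo "Ollama service not reachable"
```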


5. Downloading and Running Your First Model

LM Studio

  • Open LM Studio
  • Go to “Explore Models”
  • Select a model (e.g., LLaMA 3, Mistral)
  • Click Download and then Launch Chat

Ollama

ollama pull mistral
ollama run mistral

You’ll get an interactive prompt where you can start chatting with the model locally.
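Beyond the interactive session, `ollama run` also accepts a one-shot prompt, which is handy in shell scripts. A small sketch, guarded so it degrades gracefully on machines where Ollama isn't installed yet:

```shell
# One-shot prompt instead of the interactive REPL; the guard keeps
# the script from failing when the ollama binary is absent.
if command -v ollama >/dev/null 2>&1; then
  ollama run mistral "Explain in one sentence why local LLMs matter."
else
  echo "ollama not found - install it first"
fi
```

`ollama list` shows the models you have downloaded, and `ollama rm <model>` removes one to reclaim disk space.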


6. Tips for Better Performance

  • Use GPU if your system supports it.
  • Close unused apps to free up memory.
  • Use smaller models if you have less RAM.
  • Regularly update LM Studio and Ollama for performance improvements.

7. Common Troubleshooting

  • Installation failed: run the installer as administrator and check your internet connection.
  • Model won’t load: check RAM/GPU availability, or try a smaller model.
  • Port conflict (Ollama): kill the background service or change the port.
  • Linux errors: install missing libraries or use the AppImage version.
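For the port-conflict case specifically, Ollama honors the OLLAMA_HOST environment variable, so you can move the server off the default port before restarting it (11500 below is an arbitrary free port, not a required value):

```shell
# Move Ollama off the default 11434 port; 11500 is an arbitrary choice.
export OLLAMA_HOST=127.0.0.1:11500
# Restart the server so it picks up the new address:
# ollama serve
echo "Ollama will listen on $OLLAMA_HOST"
```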

8. Next Steps

Once your local LLM is running:

  • Integrate it into VS Code or other IDEs
  • Use Ollama’s API to build AI applications
  • Explore fine-tuning and model configuration
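As a sketch of the API route, here is a request to Ollama's local REST endpoint (/api/generate on the default port 11434). The curl line is commented out because it needs a running server and a pulled model; the prompt text is just an example:

```shell
# Build a request body for Ollama's /api/generate endpoint.
# "stream": false asks for one JSON response instead of streamed chunks.
PAYLOAD='{"model": "mistral", "prompt": "Why run LLMs locally?", "stream": false}'
# Send it against a running Ollama instance:
# curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
echo "$PAYLOAD"
```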

📚 Bonus Tip: LM Studio provides a GUI, while Ollama is more suited for developers who prefer CLI and API integrations.


Conclusion

Installing LM Studio and Ollama allows anyone to run local LLMs securely and efficiently on their own hardware. Whether you’re a developer building AI apps or just experimenting, these tools provide an excellent starting point.

🚀 Pro Tip: Combine both — use LM Studio for testing and Ollama for production or integration with other apps.
