Dyad is a free, local, open-source app builder that lets you create AI-powered apps without writing code. It's a privacy-friendly alternative to platforms like Lovable, v0, Bolt, and Replit, designed to run entirely on your computer with no lock-in or vendor dependency. With built-in Supabase integration, support for any AI model (including local ones via Ollama), and seamless connection to your existing tools, Dyad makes it easy to launch full-stack apps quickly. It is built for makers who want control, speed, and creative freedom.
Resources
- Website: https://www.dyad.sh/
- GitHub: https://github.com/dyad-sh/dyad
Step-by-Step Process to Set Up Dyad + Ollama
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides affordable Virtual Machines at scale, compliant with GDPR, SOC 2, and ISO 27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option in the Dashboard, and click the Create GPU Node button to deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x H100 SXM GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Ollama on an NVIDIA CUDA Virtual Machine. CUDA, NVIDIA's proprietary parallel computing platform, provides the GPU support needed to run Ollama on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the provided SSH command (using either the proxy SSH or direct SSH details).
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to start the Ollama server so it accepts connections on all network interfaces:
OLLAMA_HOST=0.0.0.0:11434 ollama serve
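Before moving on, it can help to confirm the server responds. A minimal sketch, assuming the default port 11434 (the /api/version endpoint is part of Ollama's HTTP API):

```shell
# Probe the Ollama HTTP API; falls back to a "down" status instead of failing,
# so the check is safe to run even before the server has started.
if curl -fsS --max-time 3 http://localhost:11434/api/version >/dev/null 2>&1; then
  STATUS="up"
else
  STATUS="down"
fi
echo "Ollama server is $STATUS"
```

If the status is "down", re-run the serve command in another terminal and check that nothing else is bound to port 11434.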
Step 10: Pull the GPT OSS 120B Model
Run the following command to pull the GPT OSS 120B Model:
ollama pull gpt-oss:120b
Wait for the download and extraction to finish until you see "success".
Step 11: Verify Downloaded Models
After pulling the GPT-OSS model, you can check that it has been successfully downloaded and is available on your system.
Just run:
ollama list
You should see output like this:
NAME            ID              SIZE    MODIFIED
gpt-oss:120b    735371f916a9    65 GB   50 seconds ago
Step 12: Set Up SSH Port Forwarding (For Remote Models Like Ollama on a GPU VM)
If you’re running a model like Ollama on a remote GPU Virtual Machine (e.g. via NodeShift, AWS, or your own server), you’ll need to port forward the Ollama server to your local machine so Dyad can connect to it.
Here’s how to do it:
Example (Mac/Linux Terminal):
ssh -L 11434:localhost:11434 root@<your-vm-ip> -p <your-ssh-port>
Once connected, your local machine will treat http://localhost:11434 as if Ollama is running locally.
- Replace <your-vm-ip> with your VM's IP address
- Replace <your-ssh-port> with the custom port (e.g. 19257)
On Windows:
Use a tool like PuTTY or ssh from WSL/PowerShell with similar port forwarding.
If you’re running large language models (like GPT-OSS 120b) on a remote GPU Virtual Machine, you’ll want Dyad on your local machine to talk to that remote Ollama instance.
But since the model is running on the VM — not on your laptop — we need to bridge the gap.
That’s where SSH port forwarding comes in.
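Once the tunnel from the example above is open, you can sanity-check it from your laptop. A minimal sketch, assuming the default Ollama port; it prints the remote server's model list on success and a short notice otherwise:

```shell
# Query the tunneled Ollama server's model list through the local forward.
# /api/tags is Ollama's model-listing endpoint; 11434 is the default port.
RESULT=$(curl -fsS --max-time 3 http://localhost:11434/api/tags 2>/dev/null \
  || echo "tunnel not reachable -- is the ssh -L session still open?")
echo "$RESULT"
```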
Why use a GPU VM?
Large models require serious compute power. Your laptop might struggle or overheat trying to run them. So we spin up a GPU-powered VM in the cloud — it gives us:
- Faster responses
- Support for large models (7B, 13B, even 120B!)
- More RAM + VRAM for smoother inference
Step 13: Download Dyad
To get started with Dyad, you’ll need to download the installer from the official website:
- Open your web browser (Google Chrome, Safari, Firefox, or Edge).
- In the search bar, type “Dyad app” and press Enter.
- From the search results, click the link to the official Dyad website (the official domain is dyad.sh).
- On the homepage, locate the “Download Dyad” button at the top right or center of the page.
- Select the correct version for your operating system:
- macOS (Apple Silicon or Intel)
- Windows
- Linux (if available)
- Click the button to start the download. The file will automatically save to your computer’s default download folder.
- Once the download is complete, you’re ready to move on to installation.
Tip: Dyad is free, open-source, and works without vendor lock-in. It supports building full-stack AI apps with Supabase integration and can connect with popular models like Gemini, GPT, and Claude.
Step 14: Set Up Dyad for the First Time
Once Dyad is installed and launched, you’ll see a setup screen that helps you prepare your environment for building apps. Follow these steps carefully:
- Install Node.js (App Runtime)
- Dyad requires Node.js to run your applications locally.
- If Node.js is already installed on your machine, Dyad will detect it automatically and mark this step as complete (green check).
- If not, you’ll be prompted to download and install Node.js. Simply follow the link provided, install the latest LTS version, and restart Dyad.
- Setup AI Model Access
- To generate and run apps, Dyad needs access to AI providers. You can connect one or multiple providers:
- Google Gemini – Click “Setup Google Gemini API Key” to use Gemini for free. You’ll be redirected to create or retrieve your API key, then paste it back into Dyad.
- Other AI Providers – If you want more options, click “Setup other AI providers.” Dyad supports OpenAI, Anthropic, OpenRouter, and more. Enter the corresponding API keys in the fields provided.
- Import or Start a New App
- Once setup is complete, you can either:
- Click “Import App” to load an existing Dyad project.
- Or, type your idea directly in the “Ask Dyad to build…” box. For example, enter “Build a To-Do List App” or “Build a Recipe Finder App.”
- Choose from Starter Templates (Optional)
- Dyad also provides quick templates such as To-Do List App, Virtual Avatar Builder, Recipe Finder & Meal Planner, AI Image Generator, or 3D Portfolio Viewer.
- Select one to quickly spin up a project and start experimenting.
Tip: You can always switch between models (Auto/Pro) based on your needs and API access. Auto uses free/available models, while Pro unlocks premium capabilities.
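The first item in this step depends on Node.js being present. You can check from a terminal whether a suitable runtime is installed; a sketch (Dyad's own detection may work differently):

```shell
# Report the installed Node.js version, or note its absence with a hint.
if command -v node >/dev/null 2>&1; then
  NODE_STATUS="node $(node --version) found"
else
  NODE_STATUS="Node.js not found -- install the LTS release from nodejs.org"
fi
echo "$NODE_STATUS"
```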
Step 15: Configure AI Providers in Dyad
To enable Dyad to build and run apps, you need to connect it with one or more AI providers. This allows Dyad to generate code using different models.
- Open Settings → AI → Model Providers
- On the left sidebar, click Settings, then select AI > Model Providers.
- You’ll see a list of supported providers: OpenAI, Anthropic, Google (Gemini), OpenRouter, Dyad, and an option to add a custom provider.
- Choose Your Provider
- Google (Gemini) – Offers a free tier. Click Setup and follow the link to get your API key. Paste it into the input field in Dyad.
- OpenAI – If you have an API key, click Setup, then paste your key to enable GPT models.
- Anthropic – Enter your Claude API key if you use Anthropic.
- OpenRouter – Supports multiple models with a free tier. Setup is similar — retrieve your key from OpenRouter and paste it.
- Dyad – If you prefer, you can set up Dyad’s native model.
- Custom Provider – Advanced users can connect any LLM endpoint by clicking Add custom provider and entering endpoint details + API key.
- Enable Telemetry (Optional)
- Telemetry is enabled by default to anonymously record usage data and improve Dyad. You can toggle it ON or OFF based on your preference.
- Enable Native Git (Optional)
- Under Experiments, you can enable Native Git for faster version control. This requires installing Git on your system if not already installed.
- Save & Verify
- Once you enter API keys, Dyad will validate them.
- If successful, the status will change from “Needs Setup” to Active.
- You’re now ready to start building apps with your chosen AI models.
Tip: You can set up multiple providers and switch between them depending on which model you want to use for a project.
Step 16: Add a Custom AI Provider
If you want Dyad to use a language model that isn’t listed (e.g., a self-hosted model, private API, or enterprise endpoint), you can configure it as a Custom Provider.
- Click “Add Custom Provider”
- In the AI Providers section of the Settings menu, select Add Custom Provider.
- A setup form will appear (like in the screenshot).
- Fill Out Provider Details
- Provider ID – A unique identifier without spaces (e.g., my-provider).
- Display Name – The friendly name you want to appear in Dyad's interface (e.g., My Enterprise LLM).
- API Base URL – The root URL of the model's API (e.g., https://api.example.com/v1).
- Environment Variable (Optional) – If you want Dyad to reference a stored API key, enter its environment variable name here (e.g., MY_PROVIDER_API_KEY).
- Authentication
- Make sure the API key or token required by the provider is properly stored in your system’s environment variables.
- If not using environment variables, Dyad may prompt you to input the key directly when connecting.
- Save the Provider
- Once all fields are complete, click Add Provider.
- The provider will appear alongside OpenAI, Anthropic, Google, and others in your Model Providers list.
- Test the Connection
- After adding, Dyad will validate the provider by making a test API call.
Tip: This feature is powerful if you’re hosting open-source models locally, using private APIs like vLLM, or experimenting with custom endpoints. It gives you full flexibility without vendor lock-in.
Step 17: Connect Dyad with Ollama
Now that you’ve filled out the Add Custom Provider form for Ollama:
- Enter Provider Details
- Provider ID: ollama
- Display Name: ollama (or any friendly name you prefer).
- API Base URL: http://localhost:11434/v1
- This points Dyad to the local Ollama server that runs on port 11434.
- Save the Provider
- Click Add Provider to save the configuration.
- You should now see Ollama listed as an active provider in your Dyad AI Providers panel.
- Run Ollama Locally
- Make sure Ollama is running on your machine. Start the Ollama server by opening a terminal and running:
ollama serve
- This ensures Dyad can connect to the Ollama API at localhost:11434.
- Test the Connection
- In Dyad, try generating a simple app idea (e.g., “Build a To-Do List app”).
- If the connection is successful, Dyad will use Ollama to generate the project code.
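Under the hood, Dyad talks to Ollama through its OpenAI-compatible /v1 endpoint. You can reproduce an equivalent call with curl to debug the connection independently of Dyad; a sketch, where the model name must match one you have actually pulled:

```shell
# Send a minimal chat-completion request to Ollama's OpenAI-compatible API.
# The fallback keeps the command from failing hard if the server is down.
RESPONSE=$(curl -sS --max-time 10 http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-oss:120b", "messages": [{"role": "user", "content": "Say hello"}]}' \
  2>/dev/null || echo '{"error": "server not reachable"}')
echo "$RESPONSE"
```

A JSON body with a `choices` array means Dyad will be able to reach the model too; an error body points at the server or tunnel rather than Dyad's configuration.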
Step 18: Add Ollama Models in Dyad (and verify)
Now that the ollama provider shows Setup Complete, make the actual models available to Dyad.
- Make sure Ollama is running
ollama serve
- Register a model in Dyad
- In Settings → AI → Model Providers → ollama → Models, click Add Custom Model.
- Fill in:
- Model ID: the exact Ollama model name (e.g., llama3:8b).
- Display Name: anything friendly (e.g., Llama 3 (8B)).
- Context Window: optional (set if you know it; otherwise leave blank).
- Max Output Tokens: optional (e.g., 1024).
- Save. Repeat for any other Ollama models you want exposed.
Step 19: Add and Register a Custom Model in Dyad
- Fill Out the Model Details
- Model ID: gpt-oss:120b
- This must exactly match the model name available in your Ollama installation.
- Name: gpt-oss (this is the display name that will appear in Dyad).
- Description (Optional): You can write something like "Open-source GPT OSS 120B model via Ollama".
- Max Output Tokens (Optional): e.g., 4096 (or adjust based on model capability).
- Context Window (Optional): e.g., 8192.
- Save the Model
- Click Add Model.
- The model will now appear under Models in the Ollama provider section.
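Before switching to Dyad's builder, it can be worth confirming the model answers from the command line. A sketch that guards against a missing ollama binary, so it is safe to paste on any machine:

```shell
# One-shot prompt through the Ollama CLI; skips cleanly if ollama isn't installed.
if command -v ollama >/dev/null 2>&1; then
  OUT=$(ollama run gpt-oss:120b "Reply with one word: ready")
else
  OUT="ollama CLI not found on this machine"
fi
echo "$OUT"
```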
Step 20: Build your first Dyad app with gpt-oss (Ollama)
Now that gpt-oss:120b shows up under Models and Ollama is Setup Complete, let’s generate an app.
Step 21: Select ollama → gpt-oss in the Builder and generate
- Open the model picker
- In the build screen (the bar above “Ask Dyad to build…”), click the Model dropdown.
- Choose the local provider
- Navigate to Local models → ollama (or directly ollama in the list).
- Pick your model
- Select gpt-oss (the one you registered as gpt-oss:120b).
- Optional: switch Auto → Pro if you want Dyad to always use your chosen model without auto-switching.
- Set generation options (optional)
- Click the small settings/gear near the prompt bar:
- Max output tokens: 2048–4096 (for long code generations).
- Temperature: 0.2–0.5 for reliable code; raise for creativity.
- Context window / system prompt: leave default unless you need custom guardrails.
- Prompt Dyad to build
- In “Ask Dyad to build…”, paste a concrete request, e.g.:
Build a Newsletter Creator:
- Tech stack: React + Vite + Tailwind
- Features: editor with markdown preview, save drafts to localStorage, export to HTML/Markdown, simple dark UI, keyboard shortcuts
- Include README with setup & run steps
- Hit Send (paper-plane). Review the plan → Accept.
Run and iterate
- When scaffolding completes, click Run (or open the terminal) and follow the start script (usually npm install && npm run dev).
- Iterate with follow-up prompts: “add image upload”, “add tags & search”, “deploy-ready build script”, etc.
If the model dropdown doesn’t show ollama/gpt-oss:
- Ensure ollama serve is running and the model exists (ollama list).
- Recheck the base URL http://localhost:11434/v1 in Settings → AI → Model Providers → ollama.
- If using a remote VM, use http://<VM-IP>:11434/v1 or tunnel via SSH: ssh -L 11434:localhost:11434 user@VM.
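When debugging the dropdown, the /v1/models endpoint shows exactly which model IDs the OpenAI-compatible base URL exposes; the Model ID you registered in Dyad must appear here verbatim. A sketch, assuming the default local port:

```shell
# List model IDs exposed by the OpenAI-compatible endpoint that Dyad queries.
MODELS=$(curl -fsS --max-time 3 http://localhost:11434/v1/models 2>/dev/null \
  || echo "endpoint not reachable -- check ollama serve and your tunnel")
echo "$MODELS"
```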
In this video, I walk through the entire process of setting up and using Dyad with Ollama as the custom AI provider. Starting from downloading and installing Dyad, I show how to configure Node.js, connect API providers, and register a custom model (gpt-oss:120b) served by Ollama. The video captures each step clearly: adding the API base URL, activating Ollama, registering the model in Dyad, and finally selecting it from the model picker. To demonstrate the workflow, I use Dyad's builder interface to generate a project, including an AI Image Generator app, showing how prompts translate into scaffolded code in real time. By the end, viewers can see a complete pipeline: from local model setup → integration in Dyad → running their first functional AI app without vendor lock-in.
Conclusion
Dyad makes building AI-powered apps simple, fast, and completely under your control. By combining it with Ollama on a GPU-powered VM, you unlock the ability to run powerful open-source models locally or remotely—without vendor lock-in. Whether you’re a developer, a tinkerer, or someone exploring no-code AI tools, Dyad gives you the flexibility to prototype, build, and scale apps in minutes. With this setup, you now have a private, efficient, and future-proof way to turn your ideas into fully functional apps.