In the world of open-source AI, very few models come close to rivaling the intellectual firepower of proprietary giants. Until now. Introducing Qwen3-235B-A22B-Thinking-2507, a frontier model in the realm of thinking-capable language models. Engineered by Alibaba Cloud, this 235B-parameter Mixture-of-Experts behemoth, of which only 22B parameters are active per inference, excels at high-level reasoning, mathematical problem solving, scientific logic, and advanced coding tasks. With its massive 256K-token context length, this model is built not just for chat, but for deep, extended reasoning across large documents and long chains of logic. From dominating benchmarks like AIME25 and HMMT25 to outperforming Claude Opus in reasoning-heavy scenarios, Qwen Thinking isn't just another LLM; it's a state-of-the-art brain built for intellectual rigor. And the best part? You can now run it locally.
Let’s walk through how to install and harness this thinking model right from your own machine.
Prerequisites
The minimum system requirements for running this model are:
- GPU: 1x RTX 4090 or 1x RTX A6000
- Storage: 200 GB (recommended)
- VRAM: at least 24 GB
- Anaconda installed
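To put these numbers in perspective, it helps to estimate what the weights alone require. The back-of-the-envelope sketch below is plain Python; the parameter counts come straight from the model name, and everything else is an approximation that ignores activations and KV cache. It shows why a single-GPU setup depends on quantization and CPU/disk offloading rather than holding the full checkpoint in VRAM:

# Rough memory footprint of Qwen3-235B-A22B at different precisions.
# Parameter counts come from the model name: 235B total, 22B active per token.
# Note: all 235B expert weights must live somewhere (VRAM, RAM, or disk),
# even though only ~22B of them participate in each forward pass.
TOTAL_PARAMS = 235e9
ACTIVE_PARAMS = 22e9

def gib(params: float, bytes_per_param: float) -> float:
    """Convert a parameter count to GiB at a given precision."""
    return params * bytes_per_param / 1024**3

for label, bpp in [("BF16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"{label}: full weights ~{gib(TOTAL_PARAMS, bpp):,.0f} GiB, "
          f"active ~{gib(ACTIVE_PARAMS, bpp):,.0f} GiB")

Even at 4-bit precision the full weights come to roughly 110 GiB, so expect heavy offloading to system RAM and disk on any single-GPU machine.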
Step-by-step process to install and run Qwen Thinking
For the purpose of this tutorial, we'll use a GPU-powered virtual machine from NodeShift, since it provides high-compute virtual machines at very affordable cost and meets GDPR, SOC 2, and ISO 27001 compliance requirements. It also offers an intuitive and user-friendly interface, making it easier for beginners to get started with cloud deployments. However, feel free to use any cloud provider of your choice and follow the same steps for the rest of the tutorial.
Step 1: Setting up a NodeShift Account
Visit app.nodeshift.com and create an account by filling in basic details, or continue signing up with your Google/GitHub account.
If you already have an account, log in straight to your dashboard.
Step 2: Create a GPU Node
After accessing your account, you should see a dashboard (see image). Now:
- Navigate to the menu on the left side.
- Click on the GPU Nodes option.
- Click on Start to begin creating your very first GPU node.
These GPU nodes are GPU-powered virtual machines by NodeShift. They are highly customizable and let you control different environment configurations, from GPUs (ranging from H100s to A100s) to CPUs, RAM, and storage, according to your needs.
Step 3: Selecting configuration for GPU (model, region, storage)
- For this tutorial, we'll be using a 1x RTX A6000 GPU; however, you can choose any GPU that meets the prerequisites.
- Similarly, we’ll opt for 200GB storage by sliding the bar. You can also select the region where you want your GPU to reside from the available ones.
Step 4: Choose GPU Configuration and Authentication method
1. After selecting your required configuration options, you'll see the available GPU nodes in your region that match (or come very close to) your configuration. In our case, we'll choose a 1x RTX A6000 48GB GPU node with 64 vCPUs/63GB RAM/200GB SSD.
2. Next, you'll need to select an authentication method. Two methods are available: Password and SSH Key. We recommend using SSH keys, as they are the more secure option. To create one, head over to our official documentation.
Step 5: Choose an Image
The final step is to choose an image for the VM, which in our case is Nvidia CUDA.
That’s it! You are now ready to deploy the node. Finalize the configuration summary, and if it looks good, click Create to deploy the node.
Step 6: Connect to active Compute Node using SSH
- As soon as you create the node, it will be deployed within a few seconds to a minute. Once deployed, you will see the status Running in green, meaning the compute node is ready to use!
- Once your GPU shows this status, navigate to the three dots on the right, click on Connect with SSH, and copy the SSH details that appear.
Once you've copied the details, follow the steps below to connect to the running GPU VM via SSH:
1. Open your terminal, paste the SSH command, and run it.
2. In some cases, your terminal may ask for your consent before connecting. Enter 'yes'.
3. A prompt will request a password. Type the SSH password, and you should be connected.
Output:
Next, if you want to check the GPU details, run the following command in the terminal:
nvidia-smi
Step 7: Set up the project environment with dependencies
1. Create a virtual environment using Anaconda.
conda create -n qwen-thinking python=3.11 -y && conda activate qwen-thinking
Output:
2. Once you’re inside the environment, install dependencies.
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install --upgrade transformers accelerate einops
Output:
3. Install the remaining Python packages. (PyTorch, Transformers, Accelerate, and einops are already installed from the previous step; note that Qwen3 support requires transformers >= 4.51.0, so avoid pinning an older release such as transformers==4.47.0.)
pip install timm pillow
pip install huggingface_hub
pip install sentencepiece bitsandbytes protobuf numpy
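Before moving on, it's worth sanity-checking the environment. Here is a minimal check (assuming the installs above succeeded) that confirms PyTorch can see the GPU and that the installed transformers version is recent enough for Qwen3; run it with python from inside the activated conda environment:

# Quick environment sanity check.
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)  # Qwen3 needs >= 4.51.0
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Report the GPU name and total VRAM to compare against the prerequisites.
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GiB")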
4. Install and run jupyter notebook.
conda install -c conda-forge --override-channels notebook -y
conda install -c conda-forge --override-channels ipywidgets -y
jupyter notebook --allow-root
5. If you're on a remote machine (e.g., a NodeShift GPU node), you'll need to set up SSH port forwarding in order to access the Jupyter Notebook session in your local browser.
Run the following command in your local terminal after replacing:
- <YOUR_SERVER_PORT> with the port allotted to your remote server (for the NodeShift server, you can find it in the deployed GPU details on the dashboard).
- <PATH_TO_SSH_KEY> with the path to the location where your SSH key is stored.
- <YOUR_SERVER_IP> with the IP address of your remote server.
ssh -L 8888:localhost:8888 -p <YOUR_SERVER_PORT> -i <PATH_TO_SSH_KEY> root@<YOUR_SERVER_IP>
Output:
After this, copy the URL shown in your remote server's terminal and paste it into your local browser to access the Jupyter Notebook session.
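If the page doesn't load, you can verify the tunnel from your local machine with a few lines of standard-library Python (this assumes you forwarded port 8888 exactly as above):

# Check that the SSH tunnel is forwarding Jupyter to localhost:8888.
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:8888", timeout=5) as resp:
        print("Tunnel is up; Jupyter responded with HTTP", resp.status)
except OSError as exc:
    print("Tunnel not reachable:", exc)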
Step 8: Download and Run the model
1. Open a Python notebook inside Jupyter.
2. Download the model checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B-Thinking-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse the thinking content
try:
    # rindex finding 151668 (the </think> token id)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
Output:
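A note on memory: on a single 48 GB GPU, the full checkpoint cannot fit, so device_map="auto" will offload most of the weights to CPU RAM and disk, which works but is slow. If you'd rather trade some output quality for a much smaller footprint, here is a minimal sketch of 4-bit loading with bitsandbytes (installed in Step 7); the generation and thinking-content parsing code above works unchanged:

# Load the model with 4-bit quantization via bitsandbytes to shrink the
# weight footprint to roughly 0.5 bytes per parameter (plus some overhead).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "Qwen/Qwen3-235B-A22B-Thinking-2507"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # do matmuls in bf16 for stability
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # still spills to CPU/disk if VRAM runs out
)

Even quantized, the weights are on the order of 110 GB, so some offloading remains unavoidable on this class of hardware; for faster, production-grade serving, a multi-GPU deployment (e.g., with vLLM or SGLang) is the more realistic path.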
Conclusion
Installing Qwen3-235B-A22B-Thinking-2507 locally isn't just a technical feat; it's a gateway to unlocking one of the most advanced open-source reasoning models available today. In this guide, we explored what makes this model a powerhouse for logical reasoning, coding, and long-context understanding, and how its "thinking mode" elevates it far beyond conventional LLMs. NodeShift Cloud played a pivotal role by simplifying the deployment process and offering the compute muscle and flexible infrastructure needed to run such a massive model seamlessly. Whether you're experimenting, building, or benchmarking, NodeShift makes it easier than ever to bring cutting-edge AI capabilities right to your fingertips.