From a1689783faf8271d59c8863328e317d2bcf7cd7c Mon Sep 17 00:00:00 2001
From: Shashankgupta581993 <80823787+Shashankgupta581993@users.noreply.github.com>
Date: Tue, 3 Mar 2026 09:57:57 +0530
Subject: [PATCH] Fixing typo

---
 getting-started/build_with_llama_4.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/getting-started/build_with_llama_4.ipynb b/getting-started/build_with_llama_4.ipynb
index 30d7aaa9f..231820bda 100644
--- a/getting-started/build_with_llama_4.ipynb
+++ b/getting-started/build_with_llama_4.ipynb
@@ -52,7 +52,7 @@
     "* Maverick which has 17B x 128 Experts MoE \n",
     "\n",
     "Long context window : \n",
-    "-- If you want to used this model on Single GPU with 10M context = Need to use an INT4-quantized version of Llama 4 Scout on 1xH100 GPU \n",
+    "-- If you want to use this model on Single GPU with 10M context = Need to use an INT4-quantized version of Llama 4 Scout on 1xH100 GPU \n",
     "\n",
     "\n",
     "Hardware requirements without quantized models :\n",