diff --git a/getting-started/build_with_llama_4.ipynb b/getting-started/build_with_llama_4.ipynb
index 30d7aaa9f..231820bda 100644
--- a/getting-started/build_with_llama_4.ipynb
+++ b/getting-started/build_with_llama_4.ipynb
@@ -52,7 +52,7 @@
 "* Maverick which has 17B x 128 Experts MoE \n",
 "\n",
 "Long context window : \n",
-"-- If you want to used this model on Single GPU with 10M context = Need to use an INT4-quantized version of Llama 4 Scout on 1xH100 GPU \n",
+"-- If you want to use this model on Single GPU with 10M context = Need to use an INT4-quantized version of Llama 4 Scout on 1xH100 GPU \n",
 "\n",
 "\n",
 "Hardware requirements without quantized models :\n",