
Commit 42ff8ed (parent: b5e6aeb)

fix(docs): correct image path (#4268)

* fix(docs): correct image path for e2e_flow_part1.png in pretraining tutorial
* fix image path in finetuning.rst
* fix image path in serving.rst
* fix image path in sparsity.rst
* Fix image path for loss curves in pretraining.rst

4 files changed: 6 additions & 6 deletions


docs/source/contributing/sparsity.rst

Lines changed: 2 additions & 2 deletions
@@ -53,7 +53,7 @@ If you can get your dense matrix into a **2:4 sparse format**, we can speed up m
 This also allows users with existing sparse weights in a dense format to take advantage of our fast sparse kernels. We anticipate many users to come up with their own custom frontend masking solution or to use another third party solution, as this is an active area of research.


-.. image:: ../static/pruning_ecosystem_diagram.png
+.. image:: ../../static/pruning_ecosystem_diagram.png
    :alt: pruning_flow


@@ -109,7 +109,7 @@ In order to avoid confusion, we generally try to use sparsity to refer to tensor
 Roughly, the flow for achieving a more performant pruned model looks like this:


-.. image:: ../static/pruning_flow.png
+.. image:: ../../static/pruning_flow.png
    :alt: flow


docs/source/eager_tutorials/finetuning.rst

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ techniques integrated into our partner frameworks. This is part 2 of 3
 such tutorials showcasing this end-to-end flow, focusing on the
 fine-tuning step.

-.. image:: ../static/e2e_flow_part2.png
+.. image:: ../../static/e2e_flow_part2.png

 Fine-tuning is an important step for adapting your pre-trained model
 to more domain-specific data. In this tutorial, we demonstrate 3 model

docs/source/eager_tutorials/pretraining.rst

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ techniques integrated into our partner frameworks. This is part 1 of 3
 such tutorials showcasing this end-to-end flow, focusing on the
 pre-training step.

-.. image:: ../static/e2e_flow_part1.png
+.. image:: ../../static/e2e_flow_part1.png

 Pre-training with float8 using torchao can provide `up to 1.5x speedups <https://pytorch.org/blog/training-using-float8-fsdp2/>`__ on 512 GPU clusters,
 and up to `1.34-1.43x speedups <https://pytorch.org/blog/accelerating-large-scale-training-and-convergence-with-pytorch-float8-rowwise-on-crusoe-2k-h200s/>`__ on 2K H200 clusters with the latest `torchao.float8` rowwise recipe.
@@ -116,7 +116,7 @@ This is because rowwise scaling using a more granular scaling factor (per row, i

 Below you can see the loss curves comparing bfloat16, float8 tensorwise, and float8 rowwise training for training Llama3-8B on 8xH100 GPUs:

-.. image:: ../static/fp8-loss-curves.png
+.. image:: ../../static/fp8-loss-curves.png
    :alt: Loss curves for training Llama3-8B on 8xH100s with torchtitan using bfloat16, float8 tensorwise, and float8 rowwise training.


docs/source/eager_tutorials/serving.rst

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@

 TorchAO provides an end-to-end pre-training, fine-tuning, and serving model optimization flow by leveraging our quantization and sparsity techniques integrated into our partner frameworks. This is part 3 of 3 such tutorials showcasing this end-to-end flow, focusing on the serving step.

-.. image:: ../static/e2e_flow_part3.png
+.. image:: ../../static/e2e_flow_part3.png

 This tutorial demonstrates how to perform post-training quantization and deploy models for inference using torchao as the underlying optimization engine, seamlessly integrated through HuggingFace Transformers, vLLM, and ExecuTorch.
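The fix is the same in all four files: Sphinx resolves a relative path in an `.. image::` directive against the directory of the document that contains it, so documents two levels below the docs root (e.g. under `docs/source/contributing/` or `docs/source/eager_tutorials/`) need `../../static/` rather than `../static/` to reach `docs/static/`. A minimal sketch of a pre-build check that could catch such broken references (the function names here are hypothetical, not part of torchao or Sphinx):

```python
import re
from pathlib import Path

# Sphinx resolves a relative image path against the directory of the
# .rst document containing the directive, not the project root, so the
# nesting depth of the document determines how many ../ are needed.
IMAGE_DIRECTIVE = re.compile(r"^\s*\.\.\s+image::\s+(\S+)", re.MULTILINE)

def resolve_image_refs(rst_text, doc_dir):
    """Return (reference, resolved path) pairs for every image
    directive in rst_text, resolving the way Sphinx would."""
    return [(m.group(1), (Path(doc_dir) / m.group(1)).resolve())
            for m in IMAGE_DIRECTIVE.finditer(rst_text)]

def broken_image_refs(rst_file):
    """Return the image references in rst_file whose resolved target
    does not exist on disk."""
    rst_file = Path(rst_file)
    return [ref for ref, target in
            resolve_image_refs(rst_file.read_text(), rst_file.parent)
            if not target.is_file()]
```

Running `broken_image_refs` over every file matched by `docs/source/**/*.rst` before a docs build would flag references like the four `../static/...` paths this commit corrects.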
