Merged
20 commits
- `488e4ea` Rewrite vision section: new how-tos, migrated references, runtime fixes (shannonbradshaw, Apr 14, 2026)
- `c88531b` Split landing into Overview page; prose editing pass; registry linking (shannonbradshaw, Apr 14, 2026)
- `8ab59e6` Update add-resource UI flow: Configuration block, Add component (shannonbradshaw, Apr 14, 2026)
- `5c462b3` Verify UI instructions against current app code (shannonbradshaw, Apr 15, 2026)
- `f5f75c4` Fix missing first Add component click across vision how-tos (shannonbradshaw, Apr 15, 2026)
- `c37b94f` Fix UI labels in alert-on-detections steps (shannonbradshaw, Apr 15, 2026)
- `ad356ac` Add scripts/check-ui-labels.py for known-wrong UI strings (shannonbradshaw, Apr 15, 2026)
- `81b44d0` Systematic flow-doc verification pass on vision section (shannonbradshaw, Apr 15, 2026)
- `aa06fa1` Apply writing playbook to vision section (shannonbradshaw, Apr 15, 2026)
- `deeb752` Reorganize vision section into subsections (shannonbradshaw, Apr 15, 2026)
- `f31626d` Update cross-section links after vision restructure (shannonbradshaw, Apr 15, 2026)
- `65f6239` Address reviewer feedback on vision section (shannonbradshaw, Apr 15, 2026)
- `53c45df` Add Builder tab to configure.md step 2; split available-models page (shannonbradshaw, Apr 15, 2026)
- `11e3a22` Clarify one-API/multi-models framing; rename how-it-works title (shannonbradshaw, Apr 15, 2026)
- `2afdbd3` Turn available-models page into a registry guide; cut widgets (shannonbradshaw, Apr 15, 2026)
- `2df4111` Merge upstream/new-docs-site into review/vision (shannonbradshaw, Apr 15, 2026)
- `cfcefe5` check-ui-labels: skip files inherited unchanged during a merge (shannonbradshaw, Apr 15, 2026)
- `8bf4ab7` Fix htmltest link failures after merge (shannonbradshaw, Apr 15, 2026)
- `16bd169` Merge upstream/new-docs-site (#4938) into review/vision (shannonbradshaw, Apr 15, 2026)
- `b9a9ab9` Merge upstream/new-docs-site (#4939 deploy fix) into review/vision (shannonbradshaw, Apr 15, 2026)
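Commit `ad356ac` adds a `scripts/check-ui-labels.py` lint for UI strings the docs should no longer use. The script itself is not shown in this diff; the following is a hypothetical sketch of what such a checker might look like, with the label map inferred from the commit messages (the add-resource menu item was renamed from **Component or service** to **Configuration block**):

```python
"""Flag documentation lines that use UI labels the app no longer shows.

Hypothetical sketch, not the actual scripts/check-ui-labels.py from this PR;
the KNOWN_WRONG mapping below is an assumption based on the commit messages.
"""
import pathlib

# Known-wrong label -> label the docs should use instead (assumed mapping).
KNOWN_WRONG = {
    "**Component or service**": "**Configuration block**",
}


def check_file(path: pathlib.Path) -> list[str]:
    """Return one error message per stale UI label found in the file."""
    errors = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for wrong, right in KNOWN_WRONG.items():
            if wrong in line:
                errors.append(f"{path}:{lineno}: replace {wrong} with {right}")
    return errors


def check_tree(root: str = "docs") -> list[str]:
    """Check every markdown file under the given directory."""
    errors = []
    for md in sorted(pathlib.Path(root).rglob("*.md")):
        errors.extend(check_file(md))
    return errors
```

In CI, a wrapper would run `check_tree()` and exit nonzero when the list is non-empty, which matches how the later commit `cfcefe5` could then skip files inherited unchanged during a merge.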
4 changes: 2 additions & 2 deletions docs/monitor/alert.md
@@ -46,7 +46,7 @@ To monitor machine health metrics like CPU usage, memory, and temperature, add a
{{< tabs >}}
{{% tab name="Linux" %}}

-On your machine's **CONFIGURE** page, click the **+** icon next to your machine part and select **Component or service**.
+On your machine's **CONFIGURE** page, click the **+** icon next to your machine part and select **Configuration block**.
Search for and add the `hwmonitor:cpu_monitor` model from the [`sbc-hwmonitor`](https://app.viam.com/module/rinzlerlabs/sbc-hwmonitor) module.

You can add additional sensors for memory, temperature, and other metrics.
@@ -325,5 +325,5 @@ viam machines part delete-trigger --part <part-name-or-id> --name <trigger-name>
## Other alert types

- For alerts based on data sync events (not tied to machine health), see [Trigger on data events](/data/trigger-on-data/).
-- For alerts when an ML model detects specific objects or classifications, see [Alert on detections](/vision/alert-on-detections/).
+- For alerts when an ML model detects specific objects or classifications, see [Alert on detections](/vision/object-detection/alert-on-detections/).
- For full trigger configuration reference, see [Trigger configuration](/reference/triggers/).
4 changes: 2 additions & 2 deletions docs/monitor/overview.md
@@ -33,9 +33,9 @@ Triggers send email or webhook notifications when specific events occur on your
- **Machine status**: alert when a machine part comes online or goes offline.
- **Log levels**: alert when error, warning, or info logs appear on a machine.

-You configure notification frequency to control how often alerts fire, which helps prevent alert fatigue as your fleet grows. Notifications can go to specific email addresses, all machine owners, or a webhook endpoint that integrates with services like PagerDuty, Twilio, or Zapier.
+You configure the alert frequency to control how often alerts fire, which helps prevent alert fatigue as your fleet grows. Notifications can go to specific email addresses, all machine owners, or a webhook endpoint that integrates with services like PagerDuty, Twilio, or Zapier.

-For alerts based on data sync events, see [Trigger on data events](/data/trigger-on-data/). For alerts based on ML model detections, see [Alert on detections](/vision/alert-on-detections/).
+For alerts based on data sync events, see [Trigger on data events](/data/trigger-on-data/). For alerts based on ML model detections, see [Alert on detections](/vision/object-detection/alert-on-detections/).

See [Set up alerts](/monitor/alert/).

4 changes: 2 additions & 2 deletions docs/operate/hello-world/building.md
@@ -306,7 +306,7 @@ We recommend consulting the docs for each service that might be helpful for your
Then configure the service and test it.

- [**Data management**](/data/capture-sync/capture-and-sync-data/): Capture, store, and sync data
-- [**Vision**](/vision/detect/#using-a-vision-service) and [**ML model**](/vision/configure/): Detect objects, classify images, or track movement in camera streams
+- [**Vision**](/vision/object-detection/detect/#using-a-vision-service) and [**ML model**](/vision/configure/): Detect objects, classify images, or track movement in camera streams
- [**Motion**](/motion-planning/) and [**Frame system**](/motion-planning/frame-system/): Plan and execute complex movements
- [**Navigation**](/operate/reference/services/navigation/): Help machines move autonomously
- [**SLAM (Simultaneous Localization and Mapping)**](/operate/reference/services/slam/): Create maps of surroundings and locate machines within those maps
@@ -317,7 +317,7 @@ If you cannot find suitable services, skip the service for now.
You can search all the available services in the Viam web UI when adding resources.

**Wood sanding project:** To find a suitable vision service, you'd look through the available vision services.
-There is a [`color_detector` vision service](/operate/reference/services/vision/color_detector/) which you could use to detect the pencil color on wood.
+There is a [`color_detector` vision service](/reference/services/vision/color_detector/) which you could use to detect the pencil color on wood.
You could also look for or create a machine learning model that recognizes drawings on wood.

The motion service is built in.
2 changes: 1 addition & 1 deletion docs/operate/reference/components/camera/_index.md
@@ -108,7 +108,7 @@ For general configuration, development, and usage info, see:
You can also use the camera component with the following services:

- [Data management service](/data/capture-sync/capture-and-sync-data/): To capture and sync the camera's data
-- [Vision service](/operate/reference/services/vision/): To use computer vision to interpret the camera stream
+- [Vision service](/reference/services/vision/): To use computer vision to interpret the camera stream
- [SLAM service](/operate/reference/services/slam/): For mapping (with a depth camera)

{{% hiddencontent %}}
10 changes: 5 additions & 5 deletions docs/operate/reference/components/camera/transform.md
@@ -20,7 +20,7 @@ The transformations are applied in the order they are written in the `pipeline`.
{{% tab name="Config Builder" %}}

Navigate to the **CONFIGURE** tab of your machine's page.
-Click the **+** icon next to your machine part in the left-hand menu and select **Component or service**.
+Click the **+** icon next to your machine part in the left-hand menu and select **Configuration block**.
Select the `camera` type, then select the `transform` model.
Enter a name or use the suggested name for your camera and click **Create**.

@@ -119,7 +119,7 @@ The following are the transformation objects available for the `pipeline`:

### Classifications

-Classifications overlay text from the `GetClassifications` method of the [vision service](/operate/reference/services/vision/) onto the image.
+Classifications overlay text from the `GetClassifications` method of the [vision service](/reference/services/vision/) onto the image.

```json {class="line-numbers linkable-line-numbers"}
{
  ...
}
```
@@ -140,7 +140,7 @@

**Attributes:**

-- `classifier_name`: The name of the classifier in the [vision service](/operate/reference/services/vision/).
+- `classifier_name`: The name of the classifier in the [vision service](/reference/services/vision/).
- `confidence_threshold`: The threshold above which to display classifications.
- `max_classifications`: _Optional_. The maximum number of classifications to display on the camera stream at any given time. Default: `1`.
- `valid_labels`: _Optional_. An array of labels that you want to see detections for on the camera stream. If not specified, all labels from the classifier are used.
@@ -244,7 +244,7 @@ Use the formula `(X / <image width>, Y / <image height>)`.
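The formula above converts a pixel coordinate to the relative coordinates the transform expects. As a tiny helper (the function name is illustrative, not part of Viam's API):

```python
def normalize_point(x: float, y: float, width: int, height: int) -> tuple:
    """Apply (X / <image width>, Y / <image height>) to a pixel coordinate."""
    return (x / width, y / height)

# For example, the center of a 640x480 image:
# normalize_point(320, 240, 640, 480) returns (0.5, 0.5)
```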

### Detections

-The Detections transform takes the input image and overlays the detections from a given detector configured within the [vision service](/operate/reference/services/vision/).
+The Detections transform takes the input image and overlays the detections from a given detector configured within the [vision service](/reference/services/vision/).

```json {class="line-numbers linkable-line-numbers"}
{
  ...
}
```
@@ -264,7 +264,7 @@

**Attributes:**

-- `detector_name`: The name of the detector configured in the [vision service](/operate/reference/services/vision/).
+- `detector_name`: The name of the detector configured in the [vision service](/reference/services/vision/).
- `confidence_threshold`: Specify to only display detections above the specified threshold (decimal between 0 and 1).
- `valid_labels`: _Optional_. An array of labels that you want to see detections for on the camera stream. If not specified, all labels from the classifier are used.
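The `confidence_threshold` and `valid_labels` attributes above amount to a simple filter over a detector's output. A minimal sketch of that behavior (the `Detection` shape and field names here are assumptions for illustration, not Viam's actual type):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    class_name: str    # label assigned by the detector (assumed field name)
    confidence: float  # decimal between 0 and 1

def filter_detections(detections, confidence_threshold: float,
                      valid_labels: Optional[list] = None):
    """Keep detections at or above the threshold, optionally restricted
    to valid_labels; if valid_labels is None, all labels pass, matching
    the documented default."""
    return [
        d for d in detections
        if d.confidence >= confidence_threshold
        and (valid_labels is None or d.class_name in valid_labels)
    ]
```

With a threshold of `0.5` and `valid_labels=["cat"]`, a `("dog", 0.9)` detection is dropped even though it clears the threshold.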

4 changes: 2 additions & 2 deletions docs/operate/reference/services/frame-system/_index.md
@@ -20,7 +20,7 @@ updated: "2024-10-18"
# SMEs: Peter L, Gautham, Bijan
---

-The frame system is the basis for some of Viam's other services, like [motion](/operate/reference/services/motion/) and [vision](/operate/reference/services/vision/).
+The frame system is the basis for some of Viam's other services, like [motion](/operate/reference/services/motion/) and [vision](/reference/services/vision/).
It stores the required contextual information to use the position and orientation readings returned by some components.

It is a mostly static system for storing the "reference frame" of each component of a machine within a coordinate system configured by the user.
@@ -194,7 +194,7 @@ _Additional transforms_ exist to help the frame system determine the location of
### Example of additional transforms

Imagine you are using a wall-mounted [camera](/operate/reference/components/camera/) to find objects near your arm.
-You can use the [vision service](/operate/reference/services/vision/) with the camera to detect objects and provide the poses of the objects with respect to the camera's reference frame.
+You can use the [vision service](/reference/services/vision/) with the camera to detect objects and provide the poses of the objects with respect to the camera's reference frame.
The camera is fixed with respect to the `world` reference frame.

If the camera finds an apple or an orange, you can command the arm to move to the detected fruit's location by providing an additional transform that contains the detected pose of the fruit with respect to the camera that performed the detection.
4 changes: 2 additions & 2 deletions docs/operate/reference/services/navigation/_index.md
@@ -49,7 +49,7 @@ Then, configure the service:
{{% tab name="Config Builder" %}}

Navigate to the **CONFIGURE** tab of your machine's page.
-Click the **+** icon next to your machine part in the left-hand menu and select **Component or service**.
+Click the **+** icon next to your machine part in the left-hand menu and select **Configuration block**.
Select the `navigation` type.
Enter a name or use the suggested name for your service and click **Create**.

@@ -169,7 +169,7 @@ The following attributes are available for `Navigation` services:

The [frame system service](/operate/reference/services/frame-system/) is an internally managed and mostly static system for storing the reference frame of each component of a machine within a coordinate system configured by the user.

-It stores the required contextual information for Viam's services like [Motion](/operate/reference/services/motion/) and [Vision](/operate/reference/services/vision/) to use the position and orientation readings returned by components like [movement sensors](/operate/reference/components/movement-sensor/).
+It stores the required contextual information for Viam's services like [Motion](/operate/reference/services/motion/) and [Vision](/reference/services/vision/) to use the position and orientation readings returned by components like [movement sensors](/operate/reference/components/movement-sensor/).

{{% /alert %}}

8 changes: 0 additions & 8 deletions docs/operate/reference/services/vision/_index.md

This file was deleted.
