RV1126B Model Conversion Toolkit Test DONE. Do you think RV1126B will be a good consideration for the main control of our next-gen AI Camera? #64
jennazhuxj started this conversation in Show and tell
Replies: 1 comment
- Do you have an image for the RV1126B?
- Yes! Rockchip is a serious candidate for the main controller of our next-generation AI camera, and we just tested its latest chip, the RV1126B, for which public material is still scarce.
As you may know, in the hands-on practice of "how to make an AI camera", the most challenging part is model conversion. After all, getting a model trained in PyTorch/TensorFlow to run in real time on an RV1126B chip is like forcing an algorithm accustomed to five-star hotels to live in a cramped embedded space, where precision collapses and inference delays can be brutal lessons.
Our real-world tests have provided the answer: with the RKNN-Toolkit, we measured an inference latency of 31.4 ms per frame for the YOLOv5s model on the RV1126B, comfortably sustaining 30 fps video streaming. Even more impressively, the toolkit directly supports the full range of models from MobileNetV2 to YOLOv11, and can be adapted to advanced tasks like pose estimation and image segmentation.
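For readers who want to try the conversion step themselves, here is a minimal sketch of the RKNN-Toolkit2 flow for turning an ONNX export of YOLOv5s into an RKNN model. The file names, normalization values, and the `rv1126b` target-platform string are illustrative assumptions, not taken from our test setup; check your toolkit version for the exact platform identifier. The small FPS helper just sanity-checks the latency figure above.

```python
# Hedged sketch: converting an ONNX YOLOv5s export to RKNN format with
# rknn-toolkit2 (run on a Linux x86_64 host). Paths, mean/std values, and
# the "rv1126b" platform string are assumptions for illustration.

def fps_from_latency_ms(latency_ms: float) -> float:
    """Frames per second implied by a single-frame inference latency."""
    return 1000.0 / latency_ms

def convert_yolov5s_to_rknn(onnx_path="yolov5s.onnx",
                            rknn_path="yolov5s.rknn",
                            calib_dataset="dataset.txt"):
    """Convert an ONNX model to an RKNN model for deployment on the NPU."""
    from rknn.api import RKNN  # pip install rknn-toolkit2

    rknn = RKNN(verbose=True)
    # Per-channel mean/std for 0-255 RGB input; match your training pipeline.
    rknn.config(mean_values=[[0, 0, 0]],
                std_values=[[255, 255, 255]],
                target_platform="rv1126b")
    if rknn.load_onnx(model=onnx_path) != 0:
        raise RuntimeError("load_onnx failed")
    # INT8 quantization needs a calibration image list (one path per line).
    if rknn.build(do_quantization=True, dataset=calib_dataset) != 0:
        raise RuntimeError("build failed")
    if rknn.export_rknn(rknn_path) != 0:
        raise RuntimeError("export_rknn failed")
    rknn.release()

if __name__ == "__main__":
    # 31.4 ms per frame implies ~31.8 fps, comfortably above a 30 fps stream.
    print(f"{fps_from_latency_ms(31.4):.1f} fps")
```

The quantization dataset is just a text file listing a few dozen representative images; it is what keeps INT8 precision from collapsing on an embedded NPU.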
We are preparing a deeper, more comprehensive test of the RV1126B and more on how to build an AI camera. Check the details and watch for updates on our Hackaday log: https://hackaday.io/project/202943-peek-under-the-hood-how-to-build-an-ai-camera.