AI chat sidebar: Add Ethical AI Rating badge to EU notice#15614
Open
pedropintosilva wants to merge 2 commits into main from
Conversation
Inspired by Nextcloud's Ethical AI Rating, a small colored dot next to the disclosure line tells users how "open" the configured AI service is. Hovering anywhere on the notice reveals the score and the factors used to compute it.

Rating scale: A (best) to D (poor), with U for unknown combinations.

Placed next to the EU AI Act notice rather than in the header because both elements relate to where the generated content comes from.

Ratings reflect public licensing info as of this commit's date:

- OpenAI GPT / o-series: proprietary (openai.com/policies)
- Anthropic Claude: proprietary (anthropic.com/legal)
- Google Gemini: proprietary (ai.google.dev/terms)
- Mistral: mixed - 7B/Mixtral Apache 2.0, Large/Medium proprietary (mistral.ai/technology/#models)
- Meta Llama: open weights, custom Llama Community License (llama.meta.com/llama-downloads)
- Alibaba Qwen: Apache 2.0 for recent versions (huggingface.co/Qwen)
- Ai2 OLMo: fully open incl. training data (allenai.org/olmo)
- EleutherAI Pythia: fully open incl. training data (github.com/EleutherAI/pythia)

What could be better:

The rating is currently derived from a hard-coded model-name regex table (gpt/claude/gemini -> D, mistral -> C, llama/qwen -> B, olmo/pythia -> A). A proper approach would use the provider and not just the model, but that requires sending the provider URL to the client, which we don't have at the moment. It would also need a proper license check and self-hosted detection(?). Once that is done, we can use the commented-out tooltip.

Signed-off-by: Pedro Pinto Silva <pedro.silva@collabora.com>
Change-Id: Ia978589642cd4883699a3ca82a2f7afcac168591
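The hard-coded lookup described above could be sketched roughly like this. This is a TypeScript sketch only: the names `MODEL_RATINGS` and `getEthicalRating()` appear in the PR, but the exact table shape and patterns here are assumptions, not the actual client code.

```typescript
// Sketch of the hard-coded model-name regex table (illustrative only;
// the real MODEL_RATINGS table in the client may differ).
const MODEL_RATINGS: Array<{ pattern: RegExp; rating: 'A' | 'B' | 'C' | 'D' }> = [
	{ pattern: /gpt|claude|gemini|^o\d/i, rating: 'D' }, // proprietary models
	{ pattern: /mistral|mixtral/i, rating: 'C' },        // mixed licensing
	{ pattern: /llama|qwen/i, rating: 'B' },             // open weights
	{ pattern: /olmo|pythia/i, rating: 'A' },            // fully open incl. training data
];

function getEthicalRating(modelName: string): string {
	// First matching pattern wins; order therefore matters
	// (proprietary patterns are checked before open ones).
	for (const entry of MODEL_RATINGS) {
		if (entry.pattern.test(modelName)) return entry.rating;
	}
	return 'U'; // unknown combination
}
```

This also illustrates the limitation called out above: the table only sees the model name, so a self-hosted Llama and a cloud-hosted Llama get the same rating.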
Force-pushed from 06e9ff6 to 94a22c6
pedropintosilva (Contributor, Author):

@Rash419 please review it and feel free to add commits to this PR.
Force-pushed from 8023d65 to 827081d
…vider

Move the Ethical AI Rating computation from a client-side regex table to the server, where both the model name and provider URL are available. This enables provider-aware ratings: the same open model gets a better rating when self-hosted than when accessed through a cloud provider.

Rating logic:

- A (green, Best): open model + self-hosted or custom domain
- B (orange, Good): open model + known cloud provider
- C (red, Poor): proprietary model (gpt-, claude-, gemini-, o-series)
- U (gray, Unknown): no model configured

Known cloud providers: api.openai.com, api.anthropic.com, generativelanguage.googleapis.com, api.mistral.ai, openrouter.ai, api.deepseek.com, api.together.xyz, api.fireworks.ai. Everything else is assumed self-hosted, since custom domains typically point to the user's own infrastructure.

The server sends the rating as aiEthicalRating alongside aiModelName. The client-side MODEL_RATINGS regex table and getEthicalRating() are removed in favor of reading the server-provided value.

Signed-off-by: Rashesh Padia <rashesh.padia@collabora.com>
Change-Id: I67ba60865ed1d3d29cf2985defedb6201c70b710
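The provider-aware decision described in this commit message could be sketched as follows. This is an illustrative TypeScript sketch under the stated rules only: the host list and rating letters come from the commit message, while the function name `computeEthicalRating` and all other details are hypothetical, not the actual server implementation.

```typescript
// Illustrative sketch of the server-side, provider-aware rating.
// Host list taken from the commit message above.
const KNOWN_CLOUD_HOSTS = new Set([
	'api.openai.com',
	'api.anthropic.com',
	'generativelanguage.googleapis.com',
	'api.mistral.ai',
	'openrouter.ai',
	'api.deepseek.com',
	'api.together.xyz',
	'api.fireworks.ai',
]);

// Proprietary model-name prefixes per the commit message
// (gpt-, claude-, gemini-, o-series).
const PROPRIETARY_MODEL = /^(gpt-|claude-|gemini-|o\d)/i;

function computeEthicalRating(modelName: string, providerUrl: string): string {
	if (!modelName) return 'U'; // no model configured
	if (PROPRIETARY_MODEL.test(modelName)) return 'C'; // proprietary model
	// Open model: the rating depends on where it is hosted.
	const host = new URL(providerUrl).hostname;
	if (KNOWN_CLOUD_HOSTS.has(host)) return 'B'; // known cloud provider
	return 'A'; // everything else is assumed self-hosted / custom domain
}
```

For example, under these rules an open model reached via openrouter.ai would rate B, while the same model served from a custom domain would rate A, matching the "self-hosted gets a better rating" behavior described above.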
Force-pushed from 827081d to 7edee0e
pedropintosilva (Contributor, Author):

Thanks @Rash419, here is something I have noticed:

Let's keep the discussion in https://gerrit.collaboraoffice.com/c/online/+/1646/1
Inspired by Nextcloud's Ethical AI Rating, a small colored dot next to
the disclosure line tells users how "open" the configured AI service
is. Hovering anywhere on the notice reveals the score and the factors
used to compute it.
Rating scale: A (best) to D (poor), with U for unknown combinations.
Placed next to the EU AI Act notice rather than in the header because
both elements relate to where the generated content comes from.
What could be better:
The rating is currently derived from a hard-coded model-name regex
table (gpt/claude/gemini -> D, mistral -> C, llama/qwen -> B, olmo/pythia
-> A). A proper approach would use the provider and not just the model,
but that requires sending the provider URL to the client, which we
don't have at the moment.
Later on we could also consider using icons (akin to what we have for
lc_serverauditerror.svg, lc_serverauditok.svg, etc.).
Signed-off-by: Pedro Pinto Silva <pedro.silva@collabora.com>
Change-Id: Ia978589642cd4883699a3ca82a2f7afcac168591