Conversation
Fixes openwisp#274 (cherry picked from commit 4a6ae39)
Fixes openwisp#274 (cherry picked from commit d02db9c)
Fixes openwisp#274 (cherry picked from commit 0aa3c6d)
Fixes openwisp#274 (cherry picked from commit cfc9b58)
Fixes openwisp#274 (cherry picked from commit 61300a6)
Fixes openwisp#274 (cherry picked from commit bff44f1)
Fixes openwisp#274 (cherry picked from commit 3859434)
Applied openwisp-qa-format only. No functional changes.
📝 Walkthrough

This pull request introduces multi-database backend support by adding InfluxDB 2.x alongside the existing InfluxDB 1.x implementation. Changes include a complete InfluxDB 2.x client module with Flux query templates, conditional backend selection logic in configuration and services, removal of hardcoded query exports from the common database abstraction layer (delegating to backend-specific implementations), a Docker Compose service for InfluxDB 2.x, updated test infrastructure to support both backends, and related task/utility refactoring to handle backend-specific operations. No breaking changes to public APIs; instead, internal implementations are redistributed to backend-specific modules.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application Layer
    participant Abstraction as Abstraction Layer<br/>(timeseries_db)
    participant Config as Configuration<br/>(BACKEND selection)
    participant InfluxDB1 as InfluxDB 1.x<br/>Client
    participant InfluxDB2 as InfluxDB 2.x<br/>Client
    App->>Config: Check TIMESERIES_DB[BACKEND]
    alt Backend is influxdb
        Config->>Abstraction: Load InfluxDB 1.x client
        Abstraction->>InfluxDB1: Initialize
        App->>Abstraction: read(key, fields, tags)
        Abstraction->>InfluxDB1: query() with InfluxQL
        InfluxDB1-->>Abstraction: Points data
        Abstraction-->>App: Parsed results
    else Backend is influxdb2
        Config->>Abstraction: Load InfluxDB 2.x client
        Abstraction->>InfluxDB2: Initialize
        App->>Abstraction: read(key, fields, tags)
        Abstraction->>InfluxDB2: execute_query() with Flux
        InfluxDB2-->>Abstraction: Records data
        Abstraction-->>App: Parsed results
    end
```
```mermaid
sequenceDiagram
    participant Chart as Chart/Metric
    participant Abstraction as Abstraction Layer<br/>(timeseries_db)
    participant Backend as Backend Client
    participant Query as Query Templates<br/>(backend-specific)
    Chart->>Abstraction: get_query(chart_type, params, ...)
    Abstraction->>Backend: get_query(...) [dispatched]
    Backend->>Query: Load chart_query[chart_type]
    Query-->>Backend: Query template
    Backend->>Backend: Substitute placeholders<br/>(time, measurement, fields, tags)
    Backend-->>Abstraction: Finalized query
    Abstraction-->>Chart: Query string
    Chart->>Abstraction: execute query
    Abstraction->>Backend: execute_query() or query()
    Backend-->>Abstraction: Result set
    Abstraction-->>Chart: Formatted traces & summary
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Caution: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.
❌ Failed checks (1 error, 2 warnings)
Dependency Conflict During Installation

Hello @Baboux98,
There is a dependency conflict between … Fix: Please check the …
Actionable comments posted: 36
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
openwisp_monitoring/db/backends/influxdb/client.py (1)
536-570: ⚠️ Potential issue | 🟠 Major — Avoid reusing a mutated params dict for the second top-fields query.
`get_query()` mutates `params` through `_clean_params()`. When `get_fields=False`, the second `get_query()` receives already-cleaned values, which can corrupt `{end_date}` and tag filters.
🐛 Proposed fix

```diff
 q = self.get_query(
     query=q,
-    params=params,
+    params=params.copy(),
     chart_type=chart_type,
     group_map=group_map,
@@
 query = self.get_query(
     query=query,
-    params=params,
+    params=params.copy(),
     chart_type=chart_type,
     group_map=group_map,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openwisp_monitoring/db/backends/influxdb/client.py` around lines 536 - 570, The code reuses and passes the same params dict into get_query twice, but get_query mutates params via _clean_params(), corrupting values for the second/top-fields query (affecting {end_date} and tag filters); to fix, pass a fresh copy of params to each get_query call (e.g., dict(params) or params.copy() / copy.deepcopy(params) as appropriate) when building the summary query (fields=["SUM(*)"]) and again when building the second query with top_fields so the original params are not mutated; update calls to get_query in this block to use copied params instead of the original params variable.

openwisp_monitoring/monitoring/tests/__init__.py (1)
302-328: ⚠️ Potential issue | 🔴 Critical — Debug artifact and duplicate `super().setUpClass()` call.
Two issues in `setUpClass`:

- Line 304 contains a leftover `# import pdb; pdb.set_trace()` debug comment — please remove before merging.
- `super().setUpClass()` is invoked twice (lines 322 and 328). This re-runs the parent's class-level setup (organization fixtures, database setup, etc.), which at best does redundant work and at worst can cause duplicate fixture creation or ordering issues. Only one call should remain — likely before chart/metric registration so that tests see registered metrics against an initialized parent.

🔧 Proposed fix

```diff
 @classmethod
 def setUpClass(cls):
-    # import pdb; pdb.set_trace()
     # By default timeseries_db.db shall connect to the database
     # defined in settings when apps are loaded. We don't want that while testing
     if "NAME" in cls.TIMESERIES_DB:
         # InfluxDB 1.8 configuration
         timeseries_db.db_name = cls.TEST_DB
         del timeseries_db.db
         del timeseries_db.dbs
     else:
         # InfluxDB 2.0 configuration
         timeseries_db.bucket = cls.TEST_BUCKET
         timeseries_db.org = cls.TEST_ORG
         timeseries_db.token = cls.TEST_TOKEN
-    # Create the test database or bucket
     timeseries_db.create_database()
-
-    # Rest of the setup code...
     super().setUpClass()
     for key, value in metrics.items():
         register_metric(key, value)
     for key, value in charts.items():
         register_chart(key, value)
     cache.clear()
-    super().setUpClass()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openwisp_monitoring/monitoring/tests/__init__.py` around lines 302 - 328, Remove the leftover debug comment "# import pdb; pdb.set_trace()" from setUpClass and eliminate the duplicate super().setUpClass() call so the parent setup runs only once; call super().setUpClass() once before registering metrics/charts (i.e., ensure setUpClass on the class invokes super().setUpClass() prior to the register_metric/register_chart loops and cache.clear), leaving the timeseries_db configuration and timeseries_db.create_database() logic intact.

openwisp_monitoring/monitoring/tasks.py (2)
100-107: ⚠️ Potential issue | 🟠 Major — `_timeseries_batch_write` still assumes InfluxDB 1.x.
This function was not updated alongside `_timeseries_write`: it accesses `timeseries_db.use_udp` unconditionally, which will raise `AttributeError` when the InfluxDB 2.x client (which has no `use_udp` attribute) is active. Use `getattr(timeseries_db, "use_udp", False)` for symmetry with the updated `_timeseries_write`.
🔧 Proposed fix

```diff
-    if timeseries_db.use_udp:
+    if getattr(timeseries_db, "use_udp", False):
         timeseries_batch_write(data=data)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openwisp_monitoring/monitoring/tasks.py` around lines 100 - 107, The function _timeseries_batch_write still accesses timeseries_db.use_udp directly which will raise AttributeError for InfluxDB 2.x clients; change the check to use getattr(timeseries_db, "use_udp", False) (same pattern as _timeseries_write) so UDP mode is only assumed when present, and keep the existing behavior: if UDP write use timeseries_batch_write(data) synchronously, otherwise convert each item["metric"] to item["metric"].pk and call timeseries_batch_write.delay(data=data). Ensure references to timeseries_db, _timeseries_batch_write, and timeseries_batch_write (and its .delay) are updated accordingly.
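The `getattr` guard recommended above can be sketched with plain Python; the two client classes below are minimal hypothetical stand-ins, not the real backend clients:

```python
class InfluxDB1Client:
    """Stand-in for the 1.x client, which exposes use_udp."""
    use_udp = True

class InfluxDB2Client:
    """Stand-in for the 2.x client, which has no use_udp attribute."""
    pass

def should_write_over_udp(timeseries_db):
    # getattr with a default never raises AttributeError, so the 2.x
    # client safely falls through to the non-UDP (Celery) path.
    return getattr(timeseries_db, "use_udp", False)

assert should_write_over_udp(InfluxDB1Client()) is True
assert should_write_over_udp(InfluxDB2Client()) is False
```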
127-138: 🛠️ Refactor suggestion | 🟠 Major — Use the configured backend to pick the migration, not a separate env var.
The rest of the codebase derives the active backend from `settings.TIMESERIES_DATABASE["BACKEND"]` (see `delete_timeseries` at line 112). Using `os.environ["USE_INFLUXDB2"]` here creates a second source of truth that can drift from the Django setting (e.g., a deployment that sets `TIMESERIES_DATABASE` directly without the env var, or vice versa) and runs the wrong migration. Branch on the backend name instead.
🔧 Proposed fix

```diff
-    if os.environ.get("USE_INFLUXDB2", "False") == "True":
+    backend = settings.TIMESERIES_DATABASE["BACKEND"]
+    if backend == "openwisp_monitoring.db.backends.influxdb2":
         influxdb2_migration.migrate_influxdb_structure()
     else:
         influxdb_migration.migrate_influxdb_structure()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openwisp_monitoring/monitoring/tasks.py` around lines 127 - 138, The migrate_timeseries_database function currently branches on the USE_INFLUXDB2 env var; change it to read the active backend from Django settings.TIMESERIES_DATABASE["BACKEND"] (falling back to a sensible default like "influxdb" if missing) and use that value to decide whether to call influxdb2_migration.migrate_influxdb_structure() or influxdb_migration.migrate_influxdb_structure(); update the branching logic in migrate_timeseries_database (referencing the function name) to mirror how delete_timeseries derives the backend from settings to ensure a single source of truth.

openwisp_monitoring/device/base/models.py (1)
29-37: ⚠️ Potential issue | 🔴 Critical — Missing `check_settings` import causes `NameError` in `get_active_metrics`.
The import of `check_settings` is missing from the file, but `get_active_metrics` at line 396 references `check_settings.CHECK_LIST`. Calling `AbstractDeviceMonitoring.get_active_metrics()` will raise `NameError: name 'check_settings' is not defined` on the first call. Subsequent calls will not error due to the `_active_metrics` cache. Either restore the import or migrate the reference to the appropriate replacement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openwisp_monitoring/device/base/models.py` around lines 29 - 37, The get_active_metrics method on AbstractDeviceMonitoring references check_settings.CHECK_LIST but check_settings is not imported, causing a NameError; fix by adding the missing import for check_settings at the top of the module (so check_settings.CHECK_LIST resolves in AbstractDeviceMonitoring.get_active_metrics), or if the project renamed it, update the reference in get_active_metrics to the new symbol. Ensure the import matches the package structure and use the exact symbol name check_settings so AbstractDeviceMonitoring.get_active_metrics and any caching of _active_metrics work without error.

openwisp_monitoring/monitoring/base/models.py (1)
856-912: ⚠️ Potential issue | 🔴 Critical — Three conflicting definitions of `_round` in `AbstractChart`.
`AbstractChart` now defines `_round` three times (lines 856–868, 881–886, and the pre-existing `@staticmethod` at 906–912). Only the last definition in class-body order survives — everything else is dead code that confuses readers and invites subtle bugs on any future edit (e.g., someone fixes the instance-method version expecting it to run).
Additionally, the first `_round` (856–868) uses `round(float_value, decimal_places)` while the staticmethod (906–912) increases precision for small values; those are not equivalent behaviors, so whoever wrote the new ones likely didn't realize the staticmethod was still shadowing them.
Recommended resolution: decide on the intended rounding semantics, keep exactly one `_round` (staticmethod on the class or standalone helper), and delete the two dead versions. The same applies to `_safe_round` (881-886 depends on the dead `_round`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openwisp_monitoring/monitoring/base/models.py` around lines 856 - 912, AbstractChart defines _round three times (two instance methods and one `@staticmethod`), leaving only the last staticmethod active and making the other two and any code depending on them (_safe_round) dead/confusing; remove the duplicate instance-method definitions and keep a single canonical _round (choose either the staticmethod implementation at _round (staticmethod) or the intended instance-method behavior), update _safe_round to call that single _round implementation (e.g., self._round or AbstractChart._round if static) and delete the extra obsolete _round definitions so there is exactly one rounding function and no shadowed dead code.
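The shadowing described above is plain Python semantics: when a class body binds the same name several times, only the last binding survives as the class attribute. A minimal illustration (a hypothetical `Chart` class, not the real `AbstractChart`):

```python
class Chart:
    def _round(self, value):
        # First definition: dead code, rebound below.
        return round(value, 1)

    def _round(self, value):
        # Second definition: also dead.
        return round(value, 2)

    @staticmethod
    def _round(value):
        # Last definition in class-body order wins at class-creation time.
        return round(value, 3)

# Only the final (staticmethod) definition is bound to the class.
assert Chart._round(3.14159) == 3.142
assert Chart()._round(3.14159) == 3.142
```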
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docker-compose.yml`:
- Around line 37-42: Add a deterministic admin token by setting
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN in the influxdb2 service environment and wire
that same token into your app config (TIMESERIES_DATABASE["TOKEN"]) so the stack
is reproducible; also update the monitoring service to depend on influxdb2
(replace depends_on: influxdb) and add an environment variable (e.g.,
INFLUXDB_TOKEN or TIMESERIES_DATABASE_TOKEN) populated with the same admin token
so monitoring can authenticate against InfluxDB 2.x.
In `@openwisp_monitoring/db/backends/__init__.py`:
- Around line 35-44: The backend detection is fragile because it uses `"2" in
TIMESERIES_DB["BACKEND"]`; change it to an explicit backend-name match: extract
the backend basename (e.g. backend_name =
TIMESERIES_DB["BACKEND"].split(".")[-1]) and compare it to the exact InfluxDB 2
backend identifier (e.g. 'influxdb_v2' or the concrete module basename your
project uses) instead of substring matching, then run the TOKEN/ORG/BUCKET
checks only when that equality matches; otherwise run the InfluxDB 1.x
USER/PASSWORD/NAME checks.
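The explicit-basename comparison suggested above might look like this (a sketch; it assumes `influxdb2` is the module basename the project uses for the 2.x backend):

```python
TIMESERIES_DB = {"BACKEND": "openwisp_monitoring.db.backends.influxdb2"}

def get_backend_name(config):
    # Compare the exact module basename instead of substring-matching "2",
    # which would also match unrelated names like "influxdb20".
    return config["BACKEND"].split(".")[-1]

backend_name = get_backend_name(TIMESERIES_DB)
if backend_name == "influxdb2":
    required_keys = ("TOKEN", "ORG", "BUCKET")
else:
    required_keys = ("USER", "PASSWORD", "NAME")

assert backend_name == "influxdb2"
assert required_keys == ("TOKEN", "ORG", "BUCKET")
```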
In `@openwisp_monitoring/db/backends/influxdb/client.py`:
- Around line 349-350: The get_list_query function currently accepts **kwargs
but doesn't pass them to self.query, so callers' options like database, params,
or expected_response_code are ignored; update get_list_query (in
openwisp_monitoring.db.backends.influxdb.client.Client) to call
self.query(query, precision=precision, **kwargs) so all keyword args are
forwarded to the query method and behave as intended.
- Around line 572-576: default_chart_query currently builds a WHERE clause but
never incorporates the {end_date} fragment prepared by _clean_params, so
explicit end-date filters are ignored; modify default_chart_query (the function
named default_chart_query) to include the {end_date} placeholder in the returned
query string (e.g., append or insert {end_date} into the WHERE clause alongside
or before the tags fragment) while preserving the existing tag handling for
content_type and object_id so queries produced will include both tag filters and
the end_date produced by _clean_params.
In `@openwisp_monitoring/db/backends/influxdb2/client.py`:
- Around line 510-511: Remove the stray TODO comment and the commented-out call
to bucket_api.find_bucket_by_name("openwisp") in the InfluxDB client module;
either delete these lines entirely or replace them with a short comment
referencing a created issue/PR number that tracks the remaining work (e.g., "See
issue `#1234`"), so no unfinished TODOs remain in the codebase and the intent is
tracked.
- Around line 53-65: The constructor __init__ builds self.url with a hardcoded
"http://" and creates InfluxDBClient, write_api and query_api eagerly without a
timeout or scheme handling and may leak resources; update __init__ to honor a
configurable scheme (e.g. TIMESERIES_DB["SCHEME"] or accept url if passed), pass
a configurable timeout into InfluxDBClient (expose a timeout param/default from
settings), and avoid eagerly creating write_api/query_api per instance if
connection reuse/caching is expected—either lazy-initialize them on first use or
ensure close() in close() properly cleans up these APIs; reference the __init__
method, self.url, TIMESERIES_DB, InfluxDBClient, write_api, query_api and
close() when making these changes.
- Around line 392-410: The _fields method currently references an undefined
attribute self._fields_regex and may leave selected_fields uninitialized,
causing AttributeError/UnboundLocalError; fix by either (A) implementing
Flux-compatible field-parsing: add a class attribute _fields_regex (or compute
it) and adapt __transform_field and _get_top_fields to Flux syntax so _fields
can parse queries correctly, ensuring selected_fields is always set before use,
or (B) remove/replace usages of _fields/_get_top_fields across the InfluxDB2
backend and the call sites (e.g., where _fields is invoked) so Flux path doesn't
rely on legacy InfluxQL helpers; update references to the unique symbols
_fields, _fields_regex, __transform_field, and _get_top_fields accordingly.
- Around line 419-450: The _get_top_fields method signature in client.py must
accept the caller's default_query kwarg to avoid a TypeError; update the
_get_top_fields signature to include default_query=None (or default_query) and
either use it when query is None or explicitly ignore it, ensuring existing
behavior remains unchanged; keep the rest of the parameters (query, params,
chart_type, group_map, number, time, timezone) and leave the Flux query
construction in _get_top_fields (function name: _get_top_fields) as-is, so
callers like AbstractChart.get_top_fields that pass default_query work without
changes.
- Around line 120-175: Replace all bare print(...) calls in methods write (the
block that writes a single Point), batch_write, query, and _parse_query_result
with appropriate logger calls (use logger.debug for trace/status messages like
"Writing point..." / "Successfully wrote point..." / "Executing query" /
"Parsing query result" / "Parsed result" and logger.error for error messages).
In the except blocks, narrow the exception catch to the client-specific
exception (use self.client_error or the specific Influx client exception class)
instead of plain Exception, log the error with logger.error including the
exception details, and re-raise the exception where the caller should handle
failures (do not silently swallow). Ensure all references to prints in these
methods are replaced and logging levels follow the guidance above.
- Around line 66-74: The no-op methods create_database, drop_database and
create_or_alter_retention_policy break callers/tests that expect bucket
creation/deletion and retention updates; change create_database to call
create_bucket(self.bucket) (creating the bucket when missing), change
drop_database to call drop_bucket(self.bucket) (deleting the bucket), and
implement create_or_alter_retention_policy to call the bucket retention update
routine (e.g., the existing create_bucket/update_bucket method or API call that
sets retention) and surface/log errors so callers/tests see failures instead of
silent no-ops; ensure you reuse the existing create_bucket/drop_bucket/update
logic and preserve the 1.x contract for callers.
- Around line 452-466: default_chart_query currently treats tags as iterable and
uses double-brace placeholders, causing TypeError and incorrect formatting;
update default_chart_query to first normalize/guard the tags param (e.g., if not
isinstance(tags, dict): tags = {}) so membership checks like 'end_date' in tags
work safely, then switch the query placeholders from double-brace "{{...}}" to
single-brace "{...}" to match the codebase's .format() usage, and keep the
conditional block that adds content_type/object_id filters only when tags
contains those keys; finally, confirm this change aligns with
_clean_params/get_query formatting expectations invoked by
AbstractChart._default_query.
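The normalization the prompt above asks for can be sketched as follows: guard against non-dict `tags` and keep single-brace placeholders for a later `.format()` pass (the Flux fragment is illustrative, not the project's actual template):

```python
def default_chart_query(tags):
    # Guard: callers may pass None or a list; membership checks like
    # "content_type" in tags only behave correctly on a dict.
    if not isinstance(tags, dict):
        tags = {}
    query = (
        'from(bucket: "{bucket}") |> range(start: {time}) '
        '|> filter(fn: (r) => r["_measurement"] == "{key}")'
    )
    # Add tag filters only when the caller actually provided them.
    if "content_type" in tags and "object_id" in tags:
        query += (
            ' |> filter(fn: (r) => r["content_type"] == "{content_type}"'
            ' and r["object_id"] == "{object_id}")'
        )
    return query

q = default_chart_query(None)          # no TypeError on non-dict input
assert "{bucket}" in q and "content_type" not in q
q2 = default_chart_query({"content_type": "x", "object_id": "y"})
assert '"{object_id}"' in q2
```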
- Around line 111-140: The write() method currently catches all exceptions,
prints messages, and swallows errors; replace print() calls with
logger.debug/info for success and logger.error for failures, and in the except
block wrap/raise a TimeseriesWriteException (preserving the original exception
as context) so callers can see failures and trigger retries; apply the same
pattern to batch_write() (which should use logger instead of print and raise
TimeseriesWriteException when write_api.write(bucket=self.bucket, org=self.org,
record=...) fails), ensuring you reference the write_api.write call in both
write and batch_write and re-raise the error after logging.
- Around line 126-136: The batch_write method incorrectly calls
Point.tag(**tags) and .field(**values) which the InfluxDB client API does not
support; update batch_write in
openwisp_monitoring/db/backends/influxdb2/client.py (the function batch_write
that uses _get_timestamp and builds Point objects) to iterate over
data.get("tags", {}) and call point.tag(key, value) for each pair, and similarly
iterate over data.get("values", {}) and call point.field(field, value) for each
pair before appending the Point; also handle empty/missing dicts by skipping the
loops so points are still created with timestamp/name.
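The per-pair `tag()`/`field()` calls can be illustrated with a minimal stand-in for `influxdb_client.Point` (the real class takes one key/value pair per call and chains the same way):

```python
class Point:
    """Minimal stand-in for influxdb_client.Point: tag() and field()
    each accept a single key/value pair, not **kwargs."""
    def __init__(self, name):
        self.name, self.tags, self.fields = name, {}, {}
    def tag(self, key, value):
        self.tags[key] = value
        return self
    def field(self, key, value):
        self.fields[key] = value
        return self

def build_point(data):
    point = Point(data["name"])
    # Iterate pairs instead of Point.tag(**tags) / Point.field(**values),
    # which the client API does not support; empty/missing dicts just
    # skip the loops, so the point keeps its name (and timestamp).
    for key, value in (data.get("tags") or {}).items():
        point.tag(key, value)
    for key, value in (data.get("values") or {}).items():
        point.field(key, value)
    return point

p = build_point({"name": "ping", "tags": {"object_id": "42"}, "values": {"rtt": 1.5}})
assert p.tags == {"object_id": "42"} and p.fields == {"rtt": 1.5}
```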
- Around line 294-306: The _is_aggregate method in
openwisp_monitoring.db.backends.influxdb2.client.py doesn't detect Flux
aggregateWindow patterns so aggregate queries like aggregateWindow(fn: ...) or
|> mean(...) are missed; update _is_aggregate(q) to also check for substrings
like "AGGREGATEWINDOW(" and "AGGREGATEWINDOW(fn:" and patterns where an
aggregate follows the pipe operator (e.g., "|> %s(" or "|> %s |" for each word
in _AGGREGATE) in addition to the existing checks, add unit tests covering Flux
queries using aggregateWindow and pipe-invoked aggregates, and review the
_FORBIDDEN list (specifically "drop") to ensure drop(columns: ...) is not
treated as a forbidden use in valid Flux queries.
In `@openwisp_monitoring/db/backends/influxdb2/queries.py`:
- Line 1: Add a short module-level comment at the top of the file documenting
that the constants (e.g., chart_query) are Flux query templates and specifying
the placeholder contract: which placeholders are expected ({bucket}, {key},
{time}, {end_date}) and how tag filters should be formatted and injected;
briefly note that these are templates (not final queries) and where in the
codebase they are populated so future readers understand why the template
complexity is acceptable and how to safely substitute values.
In `@openwisp_monitoring/db/backends/influxdb2/tests.py`:
- Around line 153-177: The failing tests are asserting InfluxQL fragments but
this backend emits Flux pipelines; update tests test_get_custom_query and
test_get_query_fields_function to validate Flux syntax instead of SQL-like
strings: use the chart helper to build c and call c.get_query (and
client._is_aggregate for aggregation checks) and assert presence of Flux
pipeline elements (e.g., "from(", "|> range(", "|> filter(" or the correct field
mapping in Flux) or remove the obsolete tests; also ensure test_is_aggregate_*
use client._is_aggregate against Flux queries rather than expecting InfluxQL
semantics.
- Around line 108-125: The tests call buckets_api but
DatabaseClient.create_database and .drop_database are currently no-ops;
implement these methods to either delegate to the existing
create_bucket/drop_bucket logic or call
influxdb_client.InfluxDBClient().buckets_api() to find and create/delete the
bucket (use the same semantics as create_bucket and drop_bucket), ensuring
create_database calls buckets_api().find_bucket_by_name("mybucket") and
create_bucket when missing, and drop_database calls
find_bucket_by_name("mybucket") and delete_bucket when present; update the
DatabaseClient class methods create_database and drop_database accordingly.
- Around line 26-31: Replace the real InfluxDB token in the test DatabaseClient
initialization with a non‑sensitive placeholder (e.g., set the token parameter
to "test-token") to avoid committing credentials; update the token argument in
the DatabaseClient(...) call in
openwisp_monitoring/db/backends/influxdb2/tests.py and ensure any other tests
use the same placeholder instead of the real-looking token.
- Around line 47-59: The test test_write asserts the wrong measurement: it
writes "test_write" but checks call_args["record"]["measurement"] == "ping",
causing a failing assertion; update the assertion in test_write to expect
"test_write" (or change the write call to use "ping") so the expected
measurement matches the actual one, and ensure you adjust the assertion
referencing mock_influxdb_client.return_value.write_api.write call_args
accordingly; also consider the earlier DatabaseClient.write swallowing
exceptions which may hide failures when running the real path.
- Around line 137-140: The test expectation is missing the trailing 'Z' produced
by DatabaseClient._get_timestamp; update the test in test_get_timestamp to
expect "2023-01-01T12:00:00.000000Z" (or alternatively remove the 'Z' from
DatabaseClient._get_timestamp if ISO UTC design changed) so the expected value
matches the actual output of DatabaseClient._get_timestamp in client.py.
- Around line 259-274: Rename the test method ping_write_microseconds_precision
to start with test_ (e.g., test_ping_write_microseconds_precision) so unittest
discovers and runs it, and update the two expected timestamp assertions for
mock_write_api.write (call_args_1["record"]["time"] and
call_args_2["record"]["time"]) to include the trailing "Z" (e.g.,
"2020-07-31T22:05:47.235142Z" and "2020-07-31T22:05:47.235152Z") to match the
ISO8601 UTC format produced by the code under test.
In `@openwisp_monitoring/device/base/models.py`:
- Line 374: The condition checking self.status == "" never matches because
status is a StatusField constrained to STATUS choices; update the condition in
the block that clears management IP (the code guarded by
app_settings.AUTO_CLEAR_MANAGEMENT_IP) to use the correct offline status check
(restore the previous self.status == "critical" or otherwise test for the
offline/status value used to indicate an offline device from STATUS), and update
the preceding comment to accurately describe "when device is offline"; ensure
this change affects the same clearing logic that resets the management_ip field.
In `@openwisp_monitoring/device/settings.py`:
- Line 59: The constant AUTO_CLEAR_MANAGEMENT_IP is currently initialized with
get_settings_value("AUTO_CLEAR_MANAGEMENT_IP", False) but the docs state the
default is True; change the default value to True (i.e., use
get_settings_value("AUTO_CLEAR_MANAGEMENT_IP", True)) in settings.py so runtime
behavior matches README.rst and docs/user/settings.rst, and run tests/QA to
confirm no downstream breakage; alternatively, if you intentionally changed the
default, update README.rst and docs/user/settings.rst to document the breaking
change and include upgrade notes referencing AUTO_CLEAR_MANAGEMENT_IP and
get_settings_value so users know the new default.
In `@openwisp_monitoring/device/utils.py`:
- Around line 24-27: The _manage_retention_policy function currently ignores its
name and duration and always calls timeseries_db.create_bucket, which breaks
InfluxDB 1.x; update it to dispatch based on backend capability by using hasattr
on the timeseries_db to detect v2 vs v1: if timeseries_db has create_bucket
(InfluxDB 2.x) call create_bucket and pass retention rules derived from the
duration argument, otherwise (InfluxDB 1.x) call
create_or_alter_retention_policy(name, duration) so SHORT_RP and DEFAULT_RP use
their respective durations; reference the function _manage_retention_policy and
the methods create_bucket and create_or_alter_retention_policy when making
changes.
In `@openwisp_monitoring/monitoring/base/models.py`:
- Around line 8-9: Remove the duplicate import of dateutil.parser by keeping
only the aliased form (from dateutil.parser import parse as parse_date), update
the _parse_and_format_time function to call parse_date(...) instead of parse,
and delete the original from dateutil.parser import parse line so there is no
namespace collision with other tests or modules.
In `@openwisp_monitoring/monitoring/migrations/__init__.py`:
- Line 1: Remove the unrelated asyncio.log import from migrations/__init__.py
and restore model validation by calling chart.full_clean() (or
instance.full_clean() where the Chart instance is created) before saving
migration-created Chart objects so the Chart.clean() validation runs and invalid
configs/queries are rejected; ensure you call full_clean() prior to chart.save()
in the migration code path that constructs Chart instances.
In
`@openwisp_monitoring/monitoring/migrations/influxdb2/influxdb2_alter_structure_0006.py`:
- Around line 15-53: The requires_migration() check is inverted and too narrow:
update EXCLUDED_MEASUREMENTS to match the 1.x migration (remove measurements
that we intend to migrate, e.g., ensure "wifi_clients" and "traffic" are NOT in
EXCLUDED_MEASUREMENTS so migrate_wifi_clients() and migrate_traffic_data() will
be detected) and change the probe query range in requires_migration() from
"range(start: -1h)" to "range(start: -30d)" (or the same 30d window used by the
migration) so older data is detected; verify get_influxdb_client() is used
unchanged and run requires_migration() to confirm it returns True when target
measurements exist.
- Around line 34-41: The get_influxdb_client function currently contains
hardcoded bucket/org/token/url literals; replace that with the application's
TIMESERIES_DB configuration by importing settings (django.conf.settings) and
either pass settings.TIMESERIES_DB into DatabaseClient or call DatabaseClient()
so it uses its own settings-based loading instead of hardcoded values; remove
the literal secrets from the file and ensure the exposed InfluxDB token is
rotated outside the repo.
- Around line 70-88: The migrate_influxdb_data function is passing an invalid
keyword write_options=SYNCHRONOUS to write_api.write and is hardcoding
org/bucket; change the signature of migrate_influxdb_data to accept the
DatabaseClient instance (e.g., client) in addition to
query_api/write_api/read_query/measurement/tags, remove the write_options kwarg
from the write_api.write(...) call, and replace bucket="mybucket" and
org="myorg" with client.bucket and client.org respectively; leave write_options
usage only where the write_api is constructed (not in migrate_influxdb_data).
In `@openwisp_monitoring/monitoring/tasks.py`:
- Around line 63-80: The _timeseries_write implementation assigns func/args in
the UDP and fallback branches but never invokes them and incorrectly uses
hasattr(timeseries_db, "write") as the discriminator; update _timeseries_write
so: if hasattr(timeseries_db, "use_udp") and timeseries_db.use_udp, call the
local timeseries_write synchronously with the prepared args (use metric.pk
conversion as already done), elif hasattr(timeseries_db, "write") and
callable(timeseries_db.write) call timeseries_db.write(name, values, **kwargs)
and then call _metric_post_write(name, values, metric, check_threshold_kwargs,
**kwargs), else enqueue the delayed task by calling timeseries_write.delay with
the prepared args; refer to symbols _timeseries_write, timeseries_db,
timeseries_write, timeseries_write.delay and _metric_post_write when making the
changes.
- Around line 10-15: The module-level unconditional import of InfluxDB2Client
causes an unnecessary hard dependency; move the backend-specific import logic
into delete_timeseries (or better, use the already-instantiated timeseries_db)
so only the configured client is imported/used at runtime: inside
delete_timeseries check the configured backend or inspect timeseries_db's
class/type and then import/openly reference InfluxDB2Client or InfluxDB1Client
there (instead of the top-level imports), or simply call the appropriate delete
method on the existing timeseries_db instance to avoid importing influxdb_client
at module import time.
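The lazy-selection idea can be illustrated as follows (the backend paths match the settings strings used in this PR; the returned strings merely stand in for the real imports, which would happen inside the function body):

```python
INFLUXDB1 = "openwisp_monitoring.db.backends.influxdb"
INFLUXDB2 = "openwisp_monitoring.db.backends.influxdb2"


def select_backend_module(timeseries_database):
    # Decide inside the task which client module to use, so influxdb_client
    # is never imported at module import time.
    backend = timeseries_database.get("BACKEND", INFLUXDB1)
    if backend == INFLUXDB2:
        # the real task would do the import here, e.g.:
        # from openwisp_monitoring.db.backends.influxdb2.client import InfluxDB2Client
        return "InfluxDB2Client"
    return "InfluxDB1Client"
```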
In `@openwisp_monitoring/monitoring/tests/__init__.py`:
- Around line 282-300: The class/module currently dereferences
TIMESERIES_DB['BUCKET'], ['ORG'], and ['TOKEN'] at import time which will
KeyError for InfluxDB 1.x; update the module to only access these keys inside
the INFLUXDB_BACKEND == "openwisp_monitoring.db.backends.influxdb2" branch (or
use TIMESERIES_DB.get('BUCKET'), .get('ORG'), .get('TOKEN') with safe defaults)
and ensure TEST_BUCKET, TEST_ORG and TEST_TOKEN are assigned either inside that
2.x branch or via safe .get() calls so that the INFLUXDB_BACKEND ==
"openwisp_monitoring.db.backends.influxdb" path (and the else path) does not
attempt to read missing keys; keep ORIGINAL_DB and TEST_DB assignments as-is but
move/guard any 2.x-only key usage accordingly.
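The guarded-access pattern can be sketched as a small helper (names mirror the test module; the helper itself is illustrative):

```python
INFLUXDB2_BACKEND = "openwisp_monitoring.db.backends.influxdb2"


def resolve_test_constants(timeseries_db):
    # 2.x-only keys are read unconditionally only inside the 2.x branch;
    # the 1.x path uses .get() so missing keys never raise KeyError.
    if timeseries_db.get("BACKEND") == INFLUXDB2_BACKEND:
        return timeseries_db["BUCKET"], timeseries_db["ORG"], timeseries_db["TOKEN"]
    return (
        timeseries_db.get("BUCKET"),
        timeseries_db.get("ORG"),
        timeseries_db.get("TOKEN"),
    )
```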
In `@openwisp_monitoring/monitoring/tests/test_configuration.py`:
- Around line 38-47: The InfluxDB2 Flux template in the "influxdb2" entry uses
the "{measurement}" placeholder but the test registers metrics with a runtime
key named key (and field_name) so rendering will fail; update the Flux string in
the "influxdb2" mapping to use the runtime "{key}" placeholder for the
measurement (replace "{measurement}" with "{key}") so the test metric that
defines key and field_name renders correctly (look for the "influxdb2" tuple and
the filter using _measurement).
In `@requirements.txt`:
- Line 5: The dependency entry "openwisp-utils[rest] @
https://github.com/openwisp/openwisp-utils/tarball/master" is pinned to master
and must be aligned with the stable test branch; change the package spec to pin
the same stable version used in requirements-test.txt (e.g., replace the
"master" tarball URL with the corresponding "1.3" tag or a direct version spec
"openwisp-utils[rest]==1.3") so main requirements and requirements-test are
consistent.
In `@setup.py`:
- Around line 45-57: Update the package classifiers in setup.py to reflect the
actual project maturity and supported Python versions: change "Development
Status :: 3 - Alpha" to "Development Status :: 5 - Production/Stable" and
replace the Python classifiers "Programming Language :: Python :: 3.6" through
"3.10" with the CI-supported range "Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11", "Programming Language :: Python ::
3.12", and "Programming Language :: Python :: 3.13" in the classifiers list used
in the setup() metadata so the metadata accurately matches the repository CI
matrix.
In `@tests/openwisp2/settings.py`:
- Around line 42-49: The INFLUXDB_2x_DATABASE dict currently hardcodes sensitive
values; update it so TOKEN (and preferably ORG and BUCKET) are read from
environment variables using os.getenv with non-secret placeholder defaults
(e.g., os.getenv("INFLUXDB_TOKEN", "change-me-token"), os.getenv("INFLUXDB_ORG",
"myorg"), os.getenv("INFLUXDB_BUCKET", "mybucket")) instead of literal strings;
modify the TOKEN, ORG, and BUCKET entries in INFLUXDB_2x_DATABASE to use these
os.getenv calls so credentials are configurable per environment.
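A possible shape for the settings block (the environment variable names follow the review's suggestion and are assumptions, not established conventions):

```python
import os

# Placeholder defaults are intentionally non-secret; real credentials are
# injected through the environment per deployment.
INFLUXDB_2x_DATABASE = {
    "BACKEND": "openwisp_monitoring.db.backends.influxdb2",
    "TOKEN": os.getenv("INFLUXDB_TOKEN", "change-me-token"),
    "ORG": os.getenv("INFLUXDB_ORG", "myorg"),
    "BUCKET": os.getenv("INFLUXDB_BUCKET", "mybucket"),
}
```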
---
Outside diff comments:
In `@openwisp_monitoring/db/backends/influxdb/client.py`:
- Around line 536-570: The code reuses and passes the same params dict into
get_query twice, but get_query mutates params via _clean_params(), corrupting
values for the second/top-fields query (affecting {end_date} and tag filters);
to fix, pass a fresh copy of params to each get_query call (e.g., dict(params)
or params.copy() / copy.deepcopy(params) as appropriate) when building the
summary query (fields=["SUM(*)"]) and again when building the second query with
top_fields so the original params are not mutated; update calls to get_query in
this block to use copied params instead of the original params variable.
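A tiny demonstration of why the copy matters (this `get_query` is a mutating stand-in for the real client method, whose `_clean_params()` rewrites the dict):

```python
def get_query(fields, params):
    # simulates _clean_params() mutating the dict it receives
    params["end_date"] = None
    return f"SELECT {', '.join(fields)} ... {params}"


params = {"end_date": "2024-06-01", "object_id": "abc"}
# Pass a fresh copy to each call so the second query still sees the
# original end_date and tag filters.
summary_query = get_query(["SUM(*)"], dict(params))
top_query = get_query(["loadavg_15"], dict(params))
```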
In `@openwisp_monitoring/device/base/models.py`:
- Around line 29-37: The get_active_metrics method on AbstractDeviceMonitoring
references check_settings.CHECK_LIST but check_settings is not imported, causing
a NameError; fix by adding the missing import for check_settings at the top of
the module (so check_settings.CHECK_LIST resolves in
AbstractDeviceMonitoring.get_active_metrics), or if the project renamed it,
update the reference in get_active_metrics to the new symbol. Ensure the import
matches the package structure and use the exact symbol name check_settings so
AbstractDeviceMonitoring.get_active_metrics and any caching of _active_metrics
work without error.
In `@openwisp_monitoring/monitoring/base/models.py`:
- Around line 856-912: AbstractChart defines _round three times (two instance
methods and one `@staticmethod`), leaving only the last staticmethod active and
making the other two and any code depending on them (_safe_round)
dead/confusing; remove the duplicate instance-method definitions and keep a
single canonical _round (choose either the staticmethod implementation at _round
(staticmethod) or the intended instance-method behavior), update _safe_round to
call that single _round implementation (e.g., self._round or
AbstractChart._round if static) and delete the extra obsolete _round definitions
so there is exactly one rounding function and no shadowed dead code.
In `@openwisp_monitoring/monitoring/tasks.py`:
- Around line 100-107: The function _timeseries_batch_write still accesses
timeseries_db.use_udp directly which will raise AttributeError for InfluxDB 2.x
clients; change the check to use getattr(timeseries_db, "use_udp", False) (same
pattern as _timeseries_write) so UDP mode is only assumed when present, and keep
the existing behavior: if UDP write use timeseries_batch_write(data)
synchronously, otherwise convert each item["metric"] to item["metric"].pk and
call timeseries_batch_write.delay(data=data). Ensure references to
timeseries_db, _timeseries_batch_write, and timeseries_batch_write (and its
.delay) are updated accordingly.
- Around line 127-138: The migrate_timeseries_database function currently
branches on the USE_INFLUXDB2 env var; change it to read the active backend from
Django settings.TIMESERIES_DATABASE["BACKEND"] (falling back to a sensible
default like "influxdb" if missing) and use that value to decide whether to call
influxdb2_migration.migrate_influxdb_structure() or
influxdb_migration.migrate_influxdb_structure(); update the branching logic in
migrate_timeseries_database (referencing the function name) to mirror how
delete_timeseries derives the backend from settings to ensure a single source of
truth.
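A sketch of the settings-driven branch (the returned strings stand in for the actual migration calls):

```python
def choose_migration_module(timeseries_database):
    # Same source of truth as delete_timeseries: the configured backend,
    # not a USE_INFLUXDB2 environment variable.
    backend = (timeseries_database or {}).get(
        "BACKEND", "openwisp_monitoring.db.backends.influxdb"
    )
    if backend == "openwisp_monitoring.db.backends.influxdb2":
        return "influxdb2_migration.migrate_influxdb_structure"
    return "influxdb_migration.migrate_influxdb_structure"
```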
In `@openwisp_monitoring/monitoring/tests/__init__.py`:
- Around line 302-328: Remove the leftover debug comment "# import pdb;
pdb.set_trace()" from setUpClass and eliminate the duplicate
super().setUpClass() call so the parent setup runs only once; call
super().setUpClass() once before registering metrics/charts (i.e., ensure
setUpClass on the class invokes super().setUpClass() prior to the
register_metric/register_chart loops and cache.clear), leaving the timeseries_db
configuration and timeseries_db.create_database() logic intact.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3be0716a-01d4-47a0-adea-11bd2a6a87f6
📒 Files selected for processing (27)
- Dockerfile
- README.rst
- docker-compose.yml
- openwisp_monitoring/db/__init__.py
- openwisp_monitoring/db/backends/__init__.py
- openwisp_monitoring/db/backends/influxdb/client.py
- openwisp_monitoring/db/backends/influxdb/queries.py
- openwisp_monitoring/db/backends/influxdb2/__init__.py
- openwisp_monitoring/db/backends/influxdb2/client.py
- openwisp_monitoring/db/backends/influxdb2/queries.py
- openwisp_monitoring/db/backends/influxdb2/tests.py
- openwisp_monitoring/device/base/models.py
- openwisp_monitoring/device/settings.py
- openwisp_monitoring/device/utils.py
- openwisp_monitoring/monitoring/base/models.py
- openwisp_monitoring/monitoring/migrations/__init__.py
- openwisp_monitoring/monitoring/migrations/influxdb/__init__.py
- openwisp_monitoring/monitoring/migrations/influxdb2/__init__.py
- openwisp_monitoring/monitoring/migrations/influxdb2/influxdb2_alter_structure_0006.py
- openwisp_monitoring/monitoring/tasks.py
- openwisp_monitoring/monitoring/tests/__init__.py
- openwisp_monitoring/monitoring/tests/test_configuration.py
- requirements.txt
- setup.py
- tests/__init__.py
- tests/docker-entrypoint.sh
- tests/openwisp2/settings.py
💤 Files with no reviewable changes (1)
- openwisp_monitoring/db/backends/influxdb/queries.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: Python==3.12 | django~=5.2.0
- GitHub Check: Python==3.13 | django~=5.2.0
- GitHub Check: Python==3.13 | django~=5.1.0
- GitHub Check: Python==3.12 | django~=5.1.0
- GitHub Check: Python==3.10 | django~=5.2.0
- GitHub Check: Python==3.11 | django~=5.1.0
- GitHub Check: Python==3.12 | django~=4.2.0
- GitHub Check: Python==3.11 | django~=5.2.0
- GitHub Check: Python==3.11 | django~=4.2.0
- GitHub Check: Python==3.10 | django~=4.2.0
- GitHub Check: Python==3.10 | django~=5.1.0
🧰 Additional context used
📓 Path-based instructions (1)
{**/*.sh,**/*.{js,ts,tsx,jsx},**/*.py,**/*.rb,**/*.go,**/*.java,**/*.cs,**/*.cpp,**/*.c}
📄 CodeRabbit inference engine (Custom checks)
Cryptic or non-obvious code (regex, complex bash commands, or hard-to-read code) must include a concise comment explaining why it is needed and why the complexity is acceptable
Files:
openwisp_monitoring/monitoring/tests/test_configuration.py, openwisp_monitoring/device/settings.py, openwisp_monitoring/monitoring/migrations/__init__.py, openwisp_monitoring/device/utils.py, openwisp_monitoring/device/base/models.py, openwisp_monitoring/db/__init__.py, openwisp_monitoring/db/backends/__init__.py, openwisp_monitoring/monitoring/tasks.py, setup.py, openwisp_monitoring/monitoring/migrations/influxdb2/influxdb2_alter_structure_0006.py, openwisp_monitoring/db/backends/influxdb2/tests.py, openwisp_monitoring/db/backends/influxdb2/queries.py, openwisp_monitoring/db/backends/influxdb/client.py, openwisp_monitoring/monitoring/tests/__init__.py, tests/openwisp2/settings.py, openwisp_monitoring/db/backends/influxdb2/client.py, openwisp_monitoring/monitoring/base/models.py
🧠 Learnings (6)
📚 Learning: 2026-02-21T01:03:37.822Z
Learnt from: nemesifier
Repo: openwisp/openwisp-monitoring PR: 738
File: openwisp_monitoring/tests/test_selenium.py:827-859
Timestamp: 2026-02-21T01:03:37.822Z
Learning: In Selenium tests (e.g., in openwisp_monitoring/tests/test_selenium.py and similar test files), when testing JS animations on dashboards or elements driven by JavaScript (such as real-time location updates), insert a short sleep (e.g., sleep(0.3)) before WebDriverWait assertions to allow animations to complete and reduce flakiness. Note: use this as a targeted workaround and prefer explicit waits or animation-end checks where possible to avoid relying on fixed delays.
Applied to files:
openwisp_monitoring/monitoring/tests/test_configuration.py, openwisp_monitoring/monitoring/tests/__init__.py
📚 Learning: 2026-02-21T18:44:28.852Z
Learnt from: dee077
Repo: openwisp/openwisp-monitoring PR: 738
File: openwisp_monitoring/device/api/views.py:263-281
Timestamp: 2026-02-21T18:44:28.852Z
Learning: In openwisp-monitoring, MonitoringIndoorCoordinatesList inherits organization scoping from the parent IndoorCoordinatesList (from openwisp-controller), which uses FilterByParentManaged mixin and filters by location_id in get_queryset(). The child class only overrides the queryset attribute to add monitoring-specific select_related fields; this pattern is safe as long as get_queryset() from the parent is not bypassed. During reviews, verify that MonitoringIndoorCoordinatesList continues to rely on the parent's get_queryset() and that any added select_related fields in the child do not alter the parent's filtering logic.
Applied to files:
openwisp_monitoring/monitoring/tests/test_configuration.py, openwisp_monitoring/device/settings.py, openwisp_monitoring/monitoring/migrations/__init__.py, openwisp_monitoring/device/utils.py, openwisp_monitoring/device/base/models.py, openwisp_monitoring/db/__init__.py, openwisp_monitoring/db/backends/__init__.py, openwisp_monitoring/monitoring/tasks.py, openwisp_monitoring/monitoring/migrations/influxdb2/influxdb2_alter_structure_0006.py, openwisp_monitoring/db/backends/influxdb2/tests.py, openwisp_monitoring/db/backends/influxdb2/queries.py, openwisp_monitoring/db/backends/influxdb/client.py, openwisp_monitoring/monitoring/tests/__init__.py, openwisp_monitoring/db/backends/influxdb2/client.py, openwisp_monitoring/monitoring/base/models.py
📚 Learning: 2026-02-25T18:42:08.825Z
Learnt from: dee077
Repo: openwisp/openwisp-monitoring PR: 738
File: openwisp_monitoring/tests/test_selenium.py:309-317
Timestamp: 2026-02-25T18:42:08.825Z
Learning: In Selenium tests that use ChannelsLiveServerTestCase (e.g., openwisp_monitoring/tests/test_selenium.py), override settings to configure CHANNEL_LAYERS with channels_redis.core.RedisChannelLayer instead of InMemoryChannelLayer. This is required because the live server runs in a separate process from the test process, and InMemoryChannelLayer is per-process only and cannot handle cross-process WebSocket broadcasting needed for real-time location update tests.
Applied to files:
openwisp_monitoring/monitoring/tests/test_configuration.py, openwisp_monitoring/monitoring/tests/__init__.py, tests/openwisp2/settings.py
📚 Learning: 2026-03-14T18:39:04.626Z
Learnt from: UltraBot05
Repo: openwisp/openwisp-monitoring PR: 766
File: openwisp_monitoring/utils.py:59-68
Timestamp: 2026-03-14T18:39:04.626Z
Learning: In this repository (openwisp/openwisp-monitoring), the project targets Python 3.10–3.13 as defined by CI. Do not flag backports.zoneinfo as a missing dependency; zoneinfo is a built-in module in all supported Python versions. When reviewing Python code, assume zoneinfo is available and avoid suggesting installation of backports.zoneinfo. If a file imports zoneinfo or uses it for time zone handling, treat it as standard library usage compatible with the supported CI matrix.
Applied to files:
openwisp_monitoring/monitoring/tests/test_configuration.py, openwisp_monitoring/device/settings.py, openwisp_monitoring/monitoring/migrations/__init__.py, openwisp_monitoring/device/utils.py, openwisp_monitoring/device/base/models.py, openwisp_monitoring/db/__init__.py, openwisp_monitoring/db/backends/__init__.py, openwisp_monitoring/monitoring/tasks.py, setup.py, openwisp_monitoring/monitoring/migrations/influxdb2/influxdb2_alter_structure_0006.py, openwisp_monitoring/db/backends/influxdb2/tests.py, openwisp_monitoring/db/backends/influxdb2/queries.py, openwisp_monitoring/db/backends/influxdb/client.py, openwisp_monitoring/monitoring/tests/__init__.py, tests/openwisp2/settings.py, openwisp_monitoring/db/backends/influxdb2/client.py, openwisp_monitoring/monitoring/base/models.py
📚 Learning: 2026-03-14T18:39:04.626Z
Learnt from: UltraBot05
Repo: openwisp/openwisp-monitoring PR: 766
File: openwisp_monitoring/utils.py:59-68
Timestamp: 2026-03-14T18:39:04.626Z
Learning: In the openwisp-monitoring project, targets are Linux-based environments. Do not flag a Windows-specific tzdata package as a missing dependency in code reviews for Python files (e.g., openwisp_monitoring/utils.py). If a platform-specific dependency is truly required, document the exception in review guidelines and ensure CI/packaging checks enforce platform constraints rather than manual review.
Applied to files:
openwisp_monitoring/monitoring/tests/test_configuration.py, openwisp_monitoring/device/settings.py, openwisp_monitoring/monitoring/migrations/__init__.py, openwisp_monitoring/device/utils.py, openwisp_monitoring/device/base/models.py, openwisp_monitoring/db/__init__.py, openwisp_monitoring/db/backends/__init__.py, openwisp_monitoring/monitoring/tasks.py, setup.py, openwisp_monitoring/monitoring/migrations/influxdb2/influxdb2_alter_structure_0006.py, openwisp_monitoring/db/backends/influxdb2/tests.py, openwisp_monitoring/db/backends/influxdb2/queries.py, openwisp_monitoring/db/backends/influxdb/client.py, openwisp_monitoring/monitoring/tests/__init__.py, tests/openwisp2/settings.py, openwisp_monitoring/db/backends/influxdb2/client.py, openwisp_monitoring/monitoring/base/models.py
📚 Learning: 2026-03-14T18:39:18.872Z
Learnt from: UltraBot05
Repo: openwisp/openwisp-monitoring PR: 766
File: openwisp_monitoring/utils.py:59-68
Timestamp: 2026-03-14T18:39:18.872Z
Learning: The openwisp-monitoring project targets Python 3.10–3.13 exclusively (as defined by the CI matrix). `backports.zoneinfo` is not required and should not be flagged as a missing dependency. The `zoneinfo` module is available as a built-in standard library in all supported Python versions.
Applied to files:
Dockerfile, requirements.txt
🪛 Betterleaks (1.1.2)
openwisp_monitoring/monitoring/migrations/influxdb2/influxdb2_alter_structure_0006.py
[high] 38-38: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
openwisp_monitoring/db/backends/influxdb2/tests.py
[high] 29-29: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
tests/openwisp2/settings.py
[high] 44-44: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🪛 Checkov (3.2.519)
Dockerfile
[low] 4-8: Ensure that APT isn't used
(CKV_DOCKER_9)
docker-compose.yml
[low] 39-40: Base64 High Entropy String
(CKV_SECRET_6)
🪛 Hadolint (2.14.0)
Dockerfile
[warning] 4-4: Do not use apt as it is meant to be an end-user tool, use apt-get or apt-cache instead
(DL3027)
[warning] 11-11: Pin versions in pip. Instead of pip install <package> use pip install <package>==<version> or pip install --requirement <requirements file>
(DL3013)
[warning] 11-11: Avoid use of cache directory with pip. Use pip install --no-cache-dir <package>
(DL3042)
🔇 Additional comments (5)
requirements.txt (1)
6-6: No issues with the `influxdb-client~=1.21.0` constraint. Version 1.21.0 supports Python 3.7+, which covers the project's target range of 3.10–3.13. The codebase uses standard InfluxDB 2.x client APIs (write_api, query_api, buckets_api, delete_api, Point, WritePrecision) that are fully supported in this version. The CI pipeline tests this constraint against Python 3.10–3.13, confirming compatibility.
openwisp_monitoring/db/backends/influxdb2/queries.py (1)
1-272: These Flux templates are unused by the InfluxDB2 backend and do not require the changes described. The InfluxDB2 client's `get_query()` method (lines 328–390 in `openwisp_monitoring/db/backends/influxdb2/client.py`) completely ignores the `query` parameter. Instead, it:

- Dynamically builds Flux queries from scratch
- Uses `self.bucket` (configured via `TIMESERIES_DB["BUCKET"]`), not hardcoded `"mybucket"`
- Extracts measurement from `params.get("key")`, not a `{measurement}` placeholder
- Properly handles `None` for `end_date` by converting it to `"now()"` (line 352)

The templates in `queries.py` are imported and stored in chart configurations but are never formatted or executed by the InfluxDB2 backend. The backend generates queries directly without using these templates.

> Likely an incorrect or invalid review comment.

openwisp_monitoring/db/__init__.py (1)
5-5: No internal code issues with removing `default_chart_query` and `device_data_query` from `__all__`. Verification found no remaining imports of these names directly from `openwisp_monitoring.db` within the repository. All internal usages already access these as instance methods (e.g., `timeseries_db.default_chart_query(tags)`), so the `__all__` reduction is safe. Consider documenting this as a breaking change in release notes for downstream users who may have imported these directly.

openwisp_monitoring/monitoring/base/models.py (2)
870-874: Clarify x-axis label formatting in `_parse_and_format_time`. The method correctly drops seconds from x-axis labels using `strftime("%Y-%m-%d %H:%M")`. However, note that this format is applied only to the x-axis display labels; the actual metric values in `traces` retain their full precision. Minute-level precision on chart x-axis labels is standard practice for readability. If compatibility with 1.x behavior is a concern, verify that earlier versions similarly formatted x-axis labels without seconds rather than exposing seconds for chart display.
843-853: This behavior is intentional — summary values are correctly nullified for non-aggregate queries. The test `test_read_summary_not_aggregate()` explicitly validates that charts with non-aggregate queries (like the default `SELECT {field_name} FROM {key}...`) return `None` for summary values. This is by design: only aggregate queries (containing `MEAN()`, `SUM()`, etc.) produce summary statistics.

While the naming of `validate_query()` could be clearer (it conflates validation with aggregation checking), the current implementation is correct and well-tested. No changes needed.
```yaml
DOCKER_INFLUXDB_INIT_MODE: setup
DOCKER_INFLUXDB_INIT_USERNAME: myuser
DOCKER_INFLUXDB_INIT_PASSWORD: mypassword
DOCKER_INFLUXDB_INIT_ORG: myorg
DOCKER_INFLUXDB_INIT_BUCKET: mybucket
DOCKER_INFLUXDB_INIT_RETENTION: 1w
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify whether the Compose/app config already wires an InfluxDB 2.x token.
# Expected: a deterministic token is configured for both the influxdb2 service and the app.
rg -n -C3 'DOCKER_INFLUXDB_INIT_ADMIN_TOKEN|INFLUX.*TOKEN|TIMESERIES_DB.*TOKEN'
```

Repository: openwisp/openwisp-monitoring
Length of output: 2295

🏁 Script executed:

```shell
# Find and inspect the docker-compose.yml file
fd -name "docker-compose.yml" -o -name "docker-compose*.yml" | head -5
```

Repository: openwisp/openwisp-monitoring
Length of output: 302

🏁 Script executed:

```shell
# Check the full influxdb2 service configuration
rg -n "influxdb2" -A 20 --type yaml
```

Repository: openwisp/openwisp-monitoring
Length of output: 1266

🏁 Script executed:

```shell
# Find Django settings file to see TIMESERIES_DB configuration
fd -name "settings.py" | head -5
```

Repository: openwisp/openwisp-monitoring
Length of output: 302

🏁 Script executed:

```shell
# Find settings files
find . -name "settings.py" -o -name "settings_*.py" | grep -v node_modules
```

Repository: openwisp/openwisp-monitoring
Length of output: 261

🏁 Script executed:

```shell
# Check monitoring service and full docker-compose structure
rg -n "monitoring:" -A 30 docker-compose.yml
```

Repository: openwisp/openwisp-monitoring
Length of output: 858

🏁 Script executed:

```shell
# Search for TIMESERIES_DB configuration in settings
rg "TIMESERIES_DB\s*=" -B 2 -A 10
```

Repository: openwisp/openwisp-monitoring
Length of output: 2795

🏁 Script executed:

```shell
# Get full monitoring service environment configuration
rg -n "monitoring:" -A 50 docker-compose.yml | head -60
```

Repository: openwisp/openwisp-monitoring
Length of output: 1465

🏁 Script executed:

```shell
# Check if there are docs about InfluxDB 2.x setup
fd README -o -name "*.md" | xargs grep -l "influxdb\|InfluxDB" 2>/dev/null | head -3
```

Repository: openwisp/openwisp-monitoring
Length of output: 302

🏁 Script executed:

```shell
# Look for any environment variable setup for TIMESERIES_DATABASE
rg "TIMESERIES_DATABASE|DOCKER_INFLUXDB" docker-compose.yml
```

Repository: openwisp/openwisp-monitoring
Length of output: 325

🏁 Script executed:

```shell
# Check git log for context on why influxdb2 was added
git log --oneline docker-compose.yml | head -5
```

Repository: openwisp/openwisp-monitoring
Length of output: 163

🏁 Script executed:

```shell
# Search for any documentation about InfluxDB 2.x setup
find . -name "*.md" -type f | xargs grep -l "influxdb2\|InfluxDB 2" 2>/dev/null
```

Repository: openwisp/openwisp-monitoring
Length of output: 56

🏁 Script executed:

```shell
# Check if there's any environment setup or test configuration for InfluxDB 2.x
rg "influxdb2|InfluxDB 2" tests/ --type py | head -10
```

Repository: openwisp/openwisp-monitoring
Length of output: 206

🏁 Script executed:

```shell
# Get full TIMESERIES_DATABASE configuration from test settings
rg "TIMESERIES_DATABASE|For InfluxDB 2.x" tests/openwisp2/settings.py -A 10 -B 2
```

Repository: openwisp/openwisp-monitoring
Length of output: 960
Add deterministic InfluxDB 2.x admin token to the Compose configuration.
The influxdb2 service is missing DOCKER_INFLUXDB_INIT_ADMIN_TOKEN. Without it, InfluxDB 2.x generates a random token on first run, making the stack non-reproducible. The application requires a deterministic token in its TIMESERIES_DATABASE["TOKEN"] configuration to authenticate.
Additionally, the monitoring service currently depends on influxdb (1.x) and has no environment configuration to use InfluxDB 2.x. If switching to InfluxDB 2.x, update the monitoring service to depend on influxdb2 and wire the token via environment variable.
Minimal fix for influxdb2 service:

```diff
  DOCKER_INFLUXDB_INIT_PASSWORD: mypassword
+ DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: mytoken
  DOCKER_INFLUXDB_INIT_ORG: myorg
  DOCKER_INFLUXDB_INIT_BUCKET: mybucket
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 DOCKER_INFLUXDB_INIT_MODE: setup
 DOCKER_INFLUXDB_INIT_USERNAME: myuser
 DOCKER_INFLUXDB_INIT_PASSWORD: mypassword
+DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: mytoken
 DOCKER_INFLUXDB_INIT_ORG: myorg
 DOCKER_INFLUXDB_INIT_BUCKET: mybucket
 DOCKER_INFLUXDB_INIT_RETENTION: 1w
```
🧰 Tools
🪛 Checkov (3.2.519)
[low] 39-40: Base64 High Entropy String
(CKV_SECRET_6)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docker-compose.yml` around lines 37 - 42, Add a deterministic admin token by
setting DOCKER_INFLUXDB_INIT_ADMIN_TOKEN in the influxdb2 service environment
and wire that same token into your app config (TIMESERIES_DATABASE["TOKEN"]) so
the stack is reproducible; also update the monitoring service to depend on
influxdb2 (replace depends_on: influxdb) and add an environment variable (e.g.,
INFLUXDB_TOKEN or TIMESERIES_DATABASE_TOKEN) populated with the same admin token
so monitoring can authenticate against InfluxDB 2.x.
```python
if "BACKEND" in TIMESERIES_DB and "2" in TIMESERIES_DB["BACKEND"]:
    # InfluxDB 2.x specific checks
    assert "TOKEN" in TIMESERIES_DB, "TOKEN"
    assert "ORG" in TIMESERIES_DB, "ORG"
    assert "BUCKET" in TIMESERIES_DB, "BUCKET"
else:
    # InfluxDB 1.x specific checks
    assert "USER" in TIMESERIES_DB, "USER"
    assert "PASSWORD" in TIMESERIES_DB, "PASSWORD"
    assert "NAME" in TIMESERIES_DB, "NAME"
```
Fragile backend detection via substring match.
"2" in TIMESERIES_DB["BACKEND"] will match any backend path containing the character "2" (e.g., a custom ...backends.timescale2, or a path that merely happens to contain a 2), routing it through the InfluxDB 2.x validation branch and demanding TOKEN/ORG/BUCKET. Match on the backend name explicitly.
🔧 Proposed fix
```diff
-if "BACKEND" in TIMESERIES_DB and "2" in TIMESERIES_DB["BACKEND"]:
+if TIMESERIES_DB["BACKEND"] == "openwisp_monitoring.db.backends.influxdb2":
     # InfluxDB 2.x specific checks
     assert "TOKEN" in TIMESERIES_DB, "TOKEN"
     assert "ORG" in TIMESERIES_DB, "ORG"
     assert "BUCKET" in TIMESERIES_DB, "BUCKET"
 else:
     # InfluxDB 1.x specific checks
     assert "USER" in TIMESERIES_DB, "USER"
     assert "PASSWORD" in TIMESERIES_DB, "PASSWORD"
     assert "NAME" in TIMESERIES_DB, "NAME"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@openwisp_monitoring/db/backends/__init__.py` around lines 35 - 44, The
backend detection is fragile because it uses `"2" in TIMESERIES_DB["BACKEND"]`;
change it to an explicit backend-name match: extract the backend basename (e.g.
backend_name = TIMESERIES_DB["BACKEND"].split(".")[-1]) and compare it to the
exact InfluxDB 2 backend identifier (e.g. 'influxdb_v2' or the concrete module
basename your project uses) instead of substring matching, then run the
TOKEN/ORG/BUCKET checks only when that equality matches; otherwise run the
InfluxDB 1.x USER/PASSWORD/NAME checks.
```python
def get_list_query(self, query, precision="s", **kwargs):
    result = self.query(query, precision=precision)
```
Forward get_list_query keyword arguments to query.
The new **kwargs parameter is currently ignored, so callers that pass database, params, or expected_response_code get the default query behavior instead.
🐛 Proposed fix

```diff
 def get_list_query(self, query, precision="s", **kwargs):
-    result = self.query(query, precision=precision)
+    result = self.query(query, precision=precision, **kwargs)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@openwisp_monitoring/db/backends/influxdb/client.py` around lines 349 - 350,
The get_list_query function currently accepts **kwargs but doesn't pass them to
self.query, so callers' options like database, params, or expected_response_code
are ignored; update get_list_query (in
openwisp_monitoring.db.backends.influxdb.client.Client) to call
self.query(query, precision=precision, **kwargs) so all keyword args are
forwarded to the query method and behave as intended.
```python
def default_chart_query(self, tags):
    q = "SELECT {field_name} FROM {key} WHERE time >= '{time}'"
    if tags:
        q += " AND content_type = '{content_type}' AND object_id = '{object_id}'"
    return q
```
Preserve end-date filtering in default chart queries.
_clean_params() prepares an {end_date} fragment, but default_chart_query() never includes it, so default InfluxDB 1.x chart queries ignore explicit end dates.
🐛 Proposed fix

```diff
 def default_chart_query(self, tags):
-    q = "SELECT {field_name} FROM {key} WHERE time >= '{time}'"
+    q = "SELECT {field_name} FROM {key} WHERE time >= '{time}' {end_date}"
     if tags:
         q += " AND content_type = '{content_type}' AND object_id = '{object_id}'"
     return q
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@openwisp_monitoring/db/backends/influxdb/client.py` around lines 572 - 576,
default_chart_query currently builds a WHERE clause but never incorporates the
{end_date} fragment prepared by _clean_params, so explicit end-date filters are
ignored; modify default_chart_query (the function named default_chart_query) to
include the {end_date} placeholder in the returned query string (e.g., append or
insert {end_date} into the WHERE clause alongside or before the tags fragment)
while preserving the existing tag handling for content_type and object_id so
queries produced will include both tag filters and the end_date produced by
_clean_params.
```python
def __init__(self, bucket=None, org=None, token=None, url=None):
    self.bucket = bucket or TIMESERIES_DB["BUCKET"]
    self.org = org or TIMESERIES_DB["ORG"]
    self.token = token or TIMESERIES_DB["TOKEN"]
    self.url = url or f'http://{TIMESERIES_DB["HOST"]}:{TIMESERIES_DB["PORT"]}'
    self.client = InfluxDBClient(url=self.url, token=self.token, org=self.org)
    self.write_api = self.client.write_api(write_options=SYNCHRONOUS)
    self.query_api = self.client.query_api()
    self.forbidden_pattern = re.compile(
        r"\b(" + "|".join(self._FORBIDDEN) + r")\b", re.IGNORECASE
    )
    self.client_error = InfluxDBClientError
```
🧹 Nitpick | 🔵 Trivial
Constructing InfluxDBClient in __init__ without timeout or https handling.
A few operational concerns on the constructor:
- `self.url = ... f'http://{TIMESERIES_DB["HOST"]}:{TIMESERIES_DB["PORT"]}'` hardcodes `http://`. Production deployments typically need https; consider honoring a scheme from settings.
- No `timeout` argument — the default `InfluxDBClient` timeout of 10s may be too long (or too short) depending on the request thread context (this client is invoked from Celery tasks and request threads).
- `self.write_api = self.client.write_api(write_options=SYNCHRONOUS)` and `self.query_api = self.client.query_api()` are eagerly created per instance; make sure the lifecycle (and `close()` at lines 506-507) is managed appropriately under the caching/connection-reuse pattern used elsewhere.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@openwisp_monitoring/db/backends/influxdb2/client.py` around lines 53 - 65,
The constructor __init__ builds self.url with a hardcoded "http://" and creates
InfluxDBClient, write_api and query_api eagerly without a timeout or scheme
handling and may leak resources; update __init__ to honor a configurable scheme
(e.g. TIMESERIES_DB["SCHEME"] or accept url if passed), pass a configurable
timeout into InfluxDBClient (expose a timeout param/default from settings), and
avoid eagerly creating write_api/query_api per instance if connection
reuse/caching is expected—either lazy-initialize them on first use or ensure
close() in close() properly cleans up these APIs; reference the __init__ method,
self.url, TIMESERIES_DB, InfluxDBClient, write_api, query_api and close() when
making these changes.
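A sketch of what the suggested constructor changes could look like. The `SCHEME` and `TIMEOUT` settings keys are assumptions (they do not exist in the current `TIMESERIES_DB` schema), and a stand-in object replaces the real `InfluxDBClient` so the shape of the lazy-init pattern is the focus:

```python
# Illustrative settings; SCHEME and TIMEOUT are hypothetical keys,
# not part of the current TIMESERIES_DB schema.
TIMESERIES_DB = {
    "HOST": "localhost",
    "PORT": "8086",
    "SCHEME": "https",
    "TIMEOUT": 5000,  # influxdb-client expects milliseconds
}


class DatabaseClient:
    def __init__(self, url=None):
        scheme = TIMESERIES_DB.get("SCHEME", "http")
        self.url = url or f'{scheme}://{TIMESERIES_DB["HOST"]}:{TIMESERIES_DB["PORT"]}'
        self.timeout = TIMESERIES_DB.get("TIMEOUT", 10_000)
        # Real code would pass url/token/org/timeout to InfluxDBClient here.
        self._write_api = None  # deferred until first use

    @property
    def write_api(self):
        # Lazy creation keeps cached, re-used client instances cheap until
        # a write actually happens; close() then only needs to clean up
        # whatever was actually created.
        if self._write_api is None:
            self._write_api = object()  # stand-in for client.write_api(...)
        return self._write_api


client = DatabaseClient()
print(client.url)
```

The same lazy pattern would apply to `query_api`, and `close()` should skip APIs that were never instantiated.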
INFLUXDB_BACKEND = TIMESERIES_DB.get("BACKEND")
TIMESERIES_DB = getattr(settings, "TIMESERIES_DATABASE", None)
TEST_DB = f"{TIMESERIES_DB['NAME']}" if "NAME" in TIMESERIES_DB else "test_db"
TEST_BUCKET = f"{TIMESERIES_DB['BUCKET']}"
TEST_ORG = f"{TIMESERIES_DB['ORG']}"
TEST_TOKEN = f"{TIMESERIES_DB['TOKEN']}"

if INFLUXDB_BACKEND == "openwisp_monitoring.db.backends.influxdb":
    # InfluxDB 1.x configuration
    ORIGINAL_DB = TIMESERIES_DB["NAME"]
    TEST_DB = f"{ORIGINAL_DB}"
elif INFLUXDB_BACKEND == "openwisp_monitoring.db.backends.influxdb2":
    # InfluxDB 2.x configuration
    ORG_BUCKET = f"{TIMESERIES_DB['ORG']}/{TIMESERIES_DB['BUCKET']}"
    ORIGINAL_DB = ORG_BUCKET
    TEST_DB = f"{ORG_BUCKET}"
else:
    ORIGINAL_DB = None
    TEST_DB = None
Class-body dict accesses will raise KeyError under InfluxDB 1.x.
Lines 285–287 unconditionally evaluate TIMESERIES_DB['BUCKET'], TIMESERIES_DB['ORG'], and TIMESERIES_DB['TOKEN'], but these keys only exist in INFLUXDB_2x_DATABASE. Since this code runs at class-definition time (on module import), running any test under the default InfluxDB 1.x configuration will raise KeyError before the backend-specific branch below is even reached, breaking the entire test module.
Move these into the branch that actually needs them, or use .get().
🔧 Proposed fix
 class TestMonitoringMixin(TestOrganizationMixin):
     INFLUXDB_BACKEND = TIMESERIES_DB.get("BACKEND")
     TIMESERIES_DB = getattr(settings, "TIMESERIES_DATABASE", None)
-    TEST_DB = f"{TIMESERIES_DB['NAME']}" if "NAME" in TIMESERIES_DB else "test_db"
-    TEST_BUCKET = f"{TIMESERIES_DB['BUCKET']}"
-    TEST_ORG = f"{TIMESERIES_DB['ORG']}"
-    TEST_TOKEN = f"{TIMESERIES_DB['TOKEN']}"
-
     if INFLUXDB_BACKEND == "openwisp_monitoring.db.backends.influxdb":
-        # InfluxDB 1.x configuration
         ORIGINAL_DB = TIMESERIES_DB["NAME"]
-        TEST_DB = f"{ORIGINAL_DB}"
+        TEST_DB = ORIGINAL_DB
     elif INFLUXDB_BACKEND == "openwisp_monitoring.db.backends.influxdb2":
-        # InfluxDB 2.x configuration
-        ORG_BUCKET = f"{TIMESERIES_DB['ORG']}/{TIMESERIES_DB['BUCKET']}"
-        ORIGINAL_DB = ORG_BUCKET
-        TEST_DB = f"{ORG_BUCKET}"
+        TEST_BUCKET = TIMESERIES_DB["BUCKET"]
+        TEST_ORG = TIMESERIES_DB["ORG"]
+        TEST_TOKEN = TIMESERIES_DB["TOKEN"]
+        ORIGINAL_DB = f"{TEST_ORG}/{TEST_BUCKET}"
+        TEST_DB = ORIGINAL_DB
     else:
         ORIGINAL_DB = None
         TEST_DB = None

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@openwisp_monitoring/monitoring/tests/__init__.py` around lines 282 - 300, The
class/module currently dereferences TIMESERIES_DB['BUCKET'], ['ORG'], and
['TOKEN'] at import time which will KeyError for InfluxDB 1.x; update the module
to only access these keys inside the INFLUXDB_BACKEND ==
"openwisp_monitoring.db.backends.influxdb2" branch (or use
TIMESERIES_DB.get('BUCKET'), .get('ORG'), .get('TOKEN') with safe defaults) and
ensure TEST_BUCKET, TEST_ORG and TEST_TOKEN are assigned either inside that 2.x
branch or via safe .get() calls so that the INFLUXDB_BACKEND ==
"openwisp_monitoring.db.backends.influxdb" path (and the else path) does not
attempt to read missing keys; keep ORIGINAL_DB and TEST_DB assignments as-is but
move/guard any 2.x-only key usage accordingly.
| "influxdb2": ( | ||
| 'from(bucket: "mybucket")' | ||
| " |> range(start: {time}, stop: {end_date})" | ||
| ' |> filter(fn: (r) => r._measurement == "{measurement}")' | ||
| ' |> filter(fn: (r) => r._field == "{field_name}")' | ||
| ' |> filter(fn: (r) => r.content_type == "{content_type}")' | ||
| ' |> filter(fn: (r) => r.object_id == "{object_id}")' | ||
| " |> sum()" | ||
| ' |> yield(name: "histogram")' | ||
| ), |
Use the runtime key placeholder for the Flux measurement.
This test metric defines key and field_name, but not measurement; rendering the InfluxDB 2.x query for this registered metric will fail with a missing placeholder.
🐛 Proposed fix
- ' |> filter(fn: (r) => r._measurement == "{measurement}")'
+ ' |> filter(fn: (r) => r._measurement == "{key}")'

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "influxdb2": ( | |
| 'from(bucket: "mybucket")' | |
| " |> range(start: {time}, stop: {end_date})" | |
| ' |> filter(fn: (r) => r._measurement == "{measurement}")' | |
| ' |> filter(fn: (r) => r._field == "{field_name}")' | |
| ' |> filter(fn: (r) => r.content_type == "{content_type}")' | |
| ' |> filter(fn: (r) => r.object_id == "{object_id}")' | |
| " |> sum()" | |
| ' |> yield(name: "histogram")' | |
| ), | |
| "influxdb2": ( | |
| 'from(bucket: "mybucket")' | |
| " |> range(start: {time}, stop: {end_date})" | |
| ' |> filter(fn: (r) => r._measurement == "{key}")' | |
| ' |> filter(fn: (r) => r._field == "{field_name}")' | |
| ' |> filter(fn: (r) => r.content_type == "{content_type}")' | |
| ' |> filter(fn: (r) => r.object_id == "{object_id}")' | |
| " |> sum()" | |
| ' |> yield(name: "histogram")' | |
| ), |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@openwisp_monitoring/monitoring/tests/test_configuration.py` around lines 38 -
47, The InfluxDB2 Flux template in the "influxdb2" entry uses the
"{measurement}" placeholder but the test registers metrics with a runtime key
named key (and field_name) so rendering will fail; update the Flux string in the
"influxdb2" mapping to use the runtime "{key}" placeholder for the measurement
(replace "{measurement}" with "{key}") so the test metric that defines key and
field_name renders correctly (look for the "influxdb2" tuple and the filter
using _measurement).
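The failure mode is ordinary `str.format` behavior: any placeholder missing from the render context raises `KeyError` at query-build time. A minimal reproduction, with the template trimmed to the relevant filters:

```python
# Flux template trimmed to the two filters that matter here.
flux_template = (
    'from(bucket: "mybucket")'
    ' |> filter(fn: (r) => r._measurement == "{measurement}")'
    ' |> filter(fn: (r) => r._field == "{field_name}")'
)

# Render context as the test metric defines it: "key" and "field_name" only.
params = {"key": "test_histogram", "field_name": "value"}

try:
    flux_template.format(**params)
    missing = None
except KeyError as exc:
    missing = exc.args[0]  # the unresolved placeholder name

# Switching the template to the runtime {key} placeholder renders cleanly.
fixed_template = flux_template.replace("{measurement}", "{key}")
rendered = fixed_template.format(**params)
print(missing)
print(rendered)
```

This is why the committable suggestion above swaps `{measurement}` for `{key}` rather than adding a `measurement` entry to the metric configuration.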
influxdb~=5.3.2
django-nested-admin~=4.1.6
python-dateutil>=2.7.0,<3.0.0
openwisp-utils[rest] @ https://github.com/openwisp/openwisp-utils/tarball/master
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify moving-branch URL dependencies.
# Expected: no runtime requirement points at master/main.
rg -n -C2 '@ https://github\.com/.*/(tarball|archive/refs/heads)/(master|main)|/tarball/(master|main)' requirements*.txt

Repository: openwisp/openwisp-monitoring
Length of output: 309
🏁 Script executed:
#!/bin/bash
# Check if there are comments in requirements.txt explaining the master branch dependency
head -20 requirements.txt

Repository: openwisp/openwisp-monitoring
Length of output: 349
🏁 Script executed:
#!/bin/bash
# Check git log for requirements.txt to see when master pin was added and why
git log --oneline -n 20 requirements.txt

Repository: openwisp/openwisp-monitoring
Length of output: 163
🏁 Script executed:
#!/bin/bash
# Check if openwisp-utils releases are documented or available
# Look for any documentation mentioning version compatibility
find . -type f \( -name "*.md" -o -name "*.rst" -o -name "*.txt" \) -exec grep -l "openwisp-utils" {} \; 2>/dev/null | head -10

Repository: openwisp/openwisp-monitoring
Length of output: 152
🏁 Script executed:
#!/bin/bash
# Check README for openwisp-utils context
head -50 README.rst

Repository: openwisp/openwisp-monitoring
Length of output: 2175
🏁 Script executed:
#!/bin/bash
# Check requirements-test.txt for additional constraints
cat requirements-test.txt

Repository: openwisp/openwisp-monitoring
Length of output: 256
🏁 Script executed:
#!/bin/bash
# Check if there's a setup.py or pyproject.toml that might have version constraints
git ls-files | grep -E '(setup\.py|setup\.cfg|pyproject\.toml|tox\.ini|\.github)' | head -20

Repository: openwisp/openwisp-monitoring
Length of output: 452
🏁 Script executed:
#!/bin/bash
# Check setup.py for dependency constraints
cat setup.py

Repository: openwisp/openwisp-monitoring
Length of output: 2063
🏁 Script executed:
#!/bin/bash
# Check pyproject.toml for dependency constraints
cat pyproject.toml

Repository: openwisp/openwisp-monitoring
Length of output: 593
🏁 Script executed:
#!/bin/bash
# Check CI workflow to understand dependency resolution strategy
head -100 .github/workflows/ci.yml

Repository: openwisp/openwisp-monitoring
Length of output: 2607
Pin openwisp-utils[rest] to match the stable branch used in test requirements.
The main requirements.txt pins to master while requirements-test.txt pins to 1.3. This inconsistency causes non-reproducible development builds: a future upstream change to master can break the development environment without any change to this repository. The test suite itself runs against the stable 1.3 branch, making the master pin in the main requirements problematic.
🔒 Proposed fix direction
-openwisp-utils[rest] @ https://github.com/openwisp/openwisp-utils/tarball/master
+openwisp-utils[rest] @ https://github.com/openwisp/openwisp-utils/archive/refs/heads/1.3.tar.gz

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- openwisp-utils[rest] @ https://github.com/openwisp/openwisp-utils/tarball/master
+ openwisp-utils[rest] @ https://github.com/openwisp/openwisp-utils/archive/refs/heads/1.3.tar.gz
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements.txt` at line 5, The dependency entry "openwisp-utils[rest] @
https://github.com/openwisp/openwisp-utils/tarball/master" is pinned to master
and must be aligned with the stable test branch; change the package spec to pin
the same stable version used in requirements-test.txt (e.g., replace the
"master" tarball URL with the corresponding "1.3" tag or a direct version spec
"openwisp-utils[rest]==1.3") so main requirements and requirements-test are
consistent.
| "Development Status :: 3 - Alpha", | ||
| "Environment :: Web Environment", | ||
| "Topic :: Internet :: WWW/HTTP", | ||
| "Topic :: System :: Networking", | ||
| "Intended Audience :: Developers", | ||
| "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", | ||
| "Operating System :: OS Independent", | ||
| "Framework :: Django", | ||
| "Programming Language :: Python :: 2.7", | ||
| "Programming Language :: Python :: 3.4", | ||
| "Programming Language :: Python :: 3.6", | ||
| "Programming Language :: Python :: 3.7", | ||
| "Programming Language :: Python :: 3.8", | ||
| "Programming Language :: Python :: 3.9", | ||
| "Programming Language :: Python :: 3.10", |
Classifier metadata does not match the project's actual support matrix.
- `Development Status :: 3 - Alpha` downgrades the entire package from `5 - Production/Stable`. InfluxDB 2.x support is new, but the package as a whole is not alpha — this misrepresents maturity to PyPI consumers.
- `Programming Language :: Python :: 3.6`–`3.10` contradicts the repository's actual CI matrix, which targets Python 3.10–3.13. Listing 3.6–3.9 claims support for versions not tested and omits 3.11–3.13.
Based on learnings that this project targets Python 3.10–3.13 as defined by CI.
🔧 Proposed fix
- "Development Status :: 3 - Alpha",
+ "Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
...
- "Programming Language :: Python :: 3.6",
- "Programming Language :: Python :: 3.7",
- "Programming Language :: Python :: 3.8",
- "Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Programming Language :: Python :: 3.12",
+ "Programming Language :: Python :: 3.13",🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@setup.py` around lines 45 - 57, Update the package classifiers in setup.py to
reflect the actual project maturity and supported Python versions: change
"Development Status :: 3 - Alpha" to "Development Status :: 5 -
Production/Stable" and replace the Python classifiers "Programming Language ::
Python :: 3.6" through "3.10" with the CI-supported range "Programming Language
:: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming
Language :: Python :: 3.12", and "Programming Language :: Python :: 3.13" in the
classifiers list used in the setup() metadata so the metadata accurately matches
the repository CI matrix.
INFLUXDB_2x_DATABASE = {
    "BACKEND": "openwisp_monitoring.db.backends.influxdb2",
    "TOKEN": "dltiEmsmMKU__9SoBE0ingFdMTS3UksrESwIQDNtW_3WOgn8bQGdyYzPcx_aDtvZkqvR8RbMkwVVlzUJxpm62w==",
    "ORG": "myorg",
    "BUCKET": "mybucket",
    "HOST": os.getenv("INFLUXDB_HOST", "localhost"),
    "PORT": "8086",
}
Read InfluxDB 2.x TOKEN (and ideally ORG/BUCKET) from environment.
The token is hardcoded in version control. Even if this is a local/dev-only token, committing credentials is a bad practice — static analysis flagged it as a generic API key, and it encourages reuse of the same value across environments. Make it configurable via an environment variable with a placeholder default, consistent with how INFLUXDB_HOST is handled.
🔧 Proposed fix
 INFLUXDB_2x_DATABASE = {
     "BACKEND": "openwisp_monitoring.db.backends.influxdb2",
-    "TOKEN": "dltiEmsmMKU__9SoBE0ingFdMTS3UksrESwIQDNtW_3WOgn8bQGdyYzPcx_aDtvZkqvR8RbMkwVVlzUJxpm62w==",
-    "ORG": "myorg",
-    "BUCKET": "mybucket",
+    "TOKEN": os.getenv("INFLUXDB2_TOKEN", "test-token-placeholder"),
+    "ORG": os.getenv("INFLUXDB2_ORG", "myorg"),
+    "BUCKET": os.getenv("INFLUXDB2_BUCKET", "mybucket"),
     "HOST": os.getenv("INFLUXDB_HOST", "localhost"),
     "PORT": "8086",
 }

🧰 Tools
🪛 Betterleaks (1.1.2)
[high] 44-44: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/openwisp2/settings.py` around lines 42 - 49, The INFLUXDB_2x_DATABASE
dict currently hardcodes sensitive values; update it so TOKEN (and preferably
ORG and BUCKET) are read from environment variables using os.getenv with
non-secret placeholder defaults (e.g., os.getenv("INFLUXDB_TOKEN",
"change-me-token"), os.getenv("INFLUXDB_ORG", "myorg"),
os.getenv("INFLUXDB_BUCKET", "mybucket")) instead of literal strings; modify the
TOKEN, ORG, and BUCKET entries in INFLUXDB_2x_DATABASE to use these os.getenv
calls so credentials are configurable per environment.
Checklist
Reference to Existing Issue
Not applicable
Description of Changes
This pull request revives and modernizes the InfluxDB 2.x support work originally proposed in #584, adapting it to the current master branch and current project conventions.
Screenshot
Please include any relevant screenshots.