test(inputs.docker): Refactor unit-tests #18814
srebhan wants to merge 3 commits into influxdata:master from
Conversation
```go
return func(string, *tls.Config) (dockerClient, error) {
	return newClientFromData(data, windows), err
}
```
If readContainerData fails, data is nil, but newClientFromData unconditionally reads data.stats / data.statsWindows first, so a nil-pointer dereference panics before the error is ever returned.
Suggested change:

```diff
 return func(string, *tls.Config) (dockerClient, error) {
-	return newClientFromData(data, windows), err
+	if err != nil {
+		return nil, err
+	}
+	return newClientFromData(data, windows), nil
 }
```
In practice the testdata files exist, so this won't fire today, but a missing/renamed fixture would produce a confusing panic instead of a clean error.
Well, I would like to keep the panic, as this is a coding error, not an error in the plugin code or anything. Reading the data should never fail, and I will fix the function in a follow-up PR that introduces "test case" based tests.
```diff
-require.Equal(t, expected, actual)
+require.Subset(t, tt.expected, actual)
```
The new code uses one-directional require.Subset with inconsistent direction:
- TestContainerNames: require.Subset(t, tt.expected, actual) -> only catches extra names; passes silently if expected names are missing (e.g. "Match all containers" expects 5 names, but actual = [etcd] would still pass).
- TestContainerStateFilter: require.Subset(t, actual, tt.expected) -> only catches missing names; passes silently if extra/wrong-state containers leak through.
- TestContainerLabels (map): fine - preserves the original per-key check.
For the two list cases, please use require.ElementsMatch(t, tt.expected, actual) to restore bidirectional, order-independent equality.
Which is exactly what we want: we want to test whether the expected metrics are among the actual metrics, which is what the previous tests did. This will all go away with test-case based tests, but I wanted to reproduce the previous behavior in those cases.
Co-authored-by: skartikey <s.kartikey@gmail.com>
Download PR build artifacts for linux_amd64.tar.gz, darwin_arm64.tar.gz, and windows_amd64.zip.
Summary
This PR refactors the unit tests to get rid of test-data definitions in code. Instead, the PR introduces JSON files containing the test data. Additionally, the PR moves testing to testutil functions wherever possible and unifies naming and test structure. Follow-up PR(s) will then move the tests into a test-case based form to ease the addition of new tests and to simplify the test cases.
This PR makes no functional changes!
Checklist
Related issues