English | [Japanese](https://github.com/r3-yamauchi/dify-my-aws-tools-plugin/blob/main/readme/README_ja_JP.md)
Included tools:

- S3 List Buckets
- S3 Create Bucket
- S3 List Objects
- CloudFront Create Invalidation
- DynamoDB Manager
- Agentcore Code Interpreter
- Agentcore Memory
This project is distributed under the Apache License 2.0. See `LICENSE` for the full text.

## Feature Highlights by Category

### Amazon Bedrock
- **Bedrock Retrieve** – Calls the `bedrock-agent-runtime` Retrieve API to run semantic or hybrid searches against a selected Knowledge Base. You can configure metadata filters, result counts, and Bedrock reranking models (cohere.rerank-v3-5 / amazon.rerank-v1), and receive outputs as JSON or ranked text.

```json
{
  "knowledge_base_id": "ABCDEFG8H9",
  "query": "latest product roadmap",
  "search_type": "HYBRID",
  "max_results": 5,
  "reranking_model": "amazon.rerank-v1"
}
```
- **Bedrock Retrieve and Generate** – Wraps `retrieve_and_generate` so KNOWLEDGE_BASE or EXTERNAL_SOURCES flows run in a single call. Supplying `session_configuration` and `session_id` lets Bedrock maintain session state, and the tool returns the generated text plus citation metadata.

```json
{
  "knowledge_base_id": "ABCDEFG8H9",
  "query": "Summarize the incident response runbook",
  "generation_configuration": {
    "promptTemplate": "Using the KB, summarize concisely: {{query}}"
  }
}
```

- **Apply Guardrail** – Uses Bedrock Runtime `apply_guardrail` with these features:
  - Inputs: a `content` array (multiple texts and/or images via bytes or S3 URI) or a single `text` that is auto-chunked into 1000-character pieces and wrapped as content.
  - `source`: PREPROCESS (default) or POSTPROCESS to target the pre- or post-LLM stage.
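An illustrative invocation might look like the following; the field names here are assumptions patterned after the other examples in this README, so check the tool's parameter list for the exact names:

```json
{
  "guardrail_id": "gr-example123",
  "guardrail_version": "1",
  "source": "PREPROCESS",
  "text": "My phone number is 555-0100."
}
```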
- **Nova Canvas** – Invokes Nova Canvas v1 for TEXT_IMAGE, COLOR_GUIDED, IMAGE_VARIATION, INPAINTING, OUTPAINTING, and BACKGROUND_REMOVAL tasks. Input images are fetched from S3, and outputs are uploaded back while also being streamed to Dify as PNG blobs.
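A minimal text-to-image call might look like this sketch; `task_type`, `prompt`, and the output-location field are assumptions modeled on the other examples, not confirmed parameter names:

```json
{
  "task_type": "TEXT_IMAGE",
  "prompt": "A watercolor lighthouse at dawn",
  "s3_uri": "s3://my-bucket/nova-outputs/"
}
```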
- **Nova Reel** – Uses Nova Reel v1 to create videos from text or from a seed image. Results are saved as MP4 files in the specified S3 path, and synchronous mode polls until completion to return the binary.
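A text-to-video call could be sketched as follows; the field names are illustrative assumptions following the pattern of the surrounding examples:

```json
{
  "prompt": "A slow drone shot along a rocky coastline",
  "s3_uri": "s3://my-bucket/videos/",
  "sync": true
}
```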
- **Extract Frame** – Downloads GIF animations and extracts evenly spaced PNG frames. Users choose the number of frames (from two for first/last to any higher count), and each frame is returned as binary output.

```json
{
  "gif_url": "https://example.com/anim.gif",
  "frame_count": 4
}
```
- **Lambda YAML to JSON** – Calls a Lambda function synchronously with YAML text in the request body and returns the JSON body only when the Lambda responds with `statusCode` 200.

```yaml
lambda_name: yaml-to-json
yaml_content: |
  key: value
  list:
    - a
    - b
```
- **Bedrock KB List** – Calls `list_knowledge_bases` to enumerate available knowledge bases, returning summaries (status, creation date, vector store) and pagination tokens for downstream filtering.

```json
{
  "max_results": 20
}
```
- **Bedrock KB Data Sources** – Invokes `list_data_sources` for a given knowledge base, returning connector information, synchronization state, and pagination tokens so you can select the correct source before running ingestion jobs.

```json
{
  "knowledge_base_id": "ABCDEFG8H9",
  "max_results": 10
}
```
- **Bedrock KB Sync** – Calls `StartIngestionJob` for a given knowledge base/data source pair so you can synchronize documents on demand, optionally setting a client token or deletion policy.

```json
{
  "knowledge_base_id": "ABCDEFG8H9",
  "data_source_id": "ds-001",
  "client_token": "sync-20250227"
}
```
### Storage & Database Operations
- **CloudFront Create Invalidation** – Submits `create_invalidation` for a distribution. Accepts either `paths` (e.g., `["/*"]` or `["/index.html", "/css/*"]`) or an `invalidation_batch` JSON, plus an optional `caller_reference`; by default all paths are invalidated.

```json
{
  "distribution_id": "D123456"
}
```

```json
{
  "distribution_id": "D123456",
  "paths": ["/index.html", "/css/*"]
}
```

```json
{
  "distribution_id": "D123456",
  "caller_reference": "my-ref-1",
  "invalidation_batch": {
    "Paths": {
      "Items": ["/*"]
    }
  }
}
```

- **S3 File Uploader** – Accepts a file emitted by an upstream workflow node, uploads it to the specified bucket/key prefix, and can return a presigned URL so later nodes can fetch the object without AWS credentials.

```json
{
  "bucket_name": "my-bucket",
  "object_key": "uploads/example.txt",
  "file": "{{file}}",
  "return_presigned_url": true
}
```

- **S3 Operator (write)** – Reads or writes text content to `s3://` URIs (`write` uploads UTF-8 text; `read` returns the text body or a presigned link); this example writes JSON text.

```json
{
  "operation": "write",
  "s3_uri": "s3://my-bucket/config.json",
  "text": "{\"env\":\"prod\"}"
}
```

- **S3 File Download** – Fetches objects from S3; returns a presigned URL or streams the binary into the workflow (use `presign_only` / `download_mode`) along with bucket/key metadata for downstream nodes.

```json
{
  "bucket_name": "my-bucket",
  "object_key": "reports/latest.pdf",
  "presign_only": true,
  "expires_in": 600
}
```

- **DynamoDB Manager** – Creates PAY_PER_REQUEST tables and supports `put_item` / `get_item` / `delete_item` with custom partition/sort keys and JSON `item_data` payloads.

```json
{
  "operation": "put_item",
  "table_name": "users",
  "partition_key_name": "user_id",
  "item_data": {
    "user_id": "u-1",
    "name": "Alice"
  }
}
```
### Messaging
- **SNS Publish** – Publishes messages to an SNS topic ARN with an optional subject (up to 100 characters) and MessageAttributes JSON. Supports per-tool AWS credentials/region overrides.
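A publish call might be sketched as below; the parameter names are assumptions following the pattern of the other examples, so verify them against the tool's schema:

```json
{
  "topic_arn": "arn:aws:sns:us-east-1:123456789012:alerts",
  "subject": "Deployment finished",
  "message": "Build 1234 deployed successfully."
}
```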
- **SQS Send Message** – Sends a message to an SQS queue URL with optional delay seconds and MessageAttributes JSON. Supports per-tool AWS credentials/region overrides.
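A send-message call could look like the following sketch; the field names are illustrative assumptions, not confirmed parameter names:

```json
{
  "queue_url": "https://sqs.us-east-1.amazonaws.com/123456789012/jobs",
  "message_body": "{\"job_id\":\"42\"}",
  "delay_seconds": 0
}
```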
- **Agentcore Code Interpreter** – Launches Bedrock AgentCore Code Interpreter sessions, optionally creates the interpreter, executes shell commands (`command`) and language-specific code (`language` + `code`), and returns the IDs with the results.

```json
{
  "operation": "execute",
  "code": "print(1+1)",
  "language": "python"
}
```

- **AgentCore Memory Search** – Executes `retrieve_memories` for a given memory ID/namespace, limits the results to the requested `top_k`, and serializes timestamps to ISO 8601.

```json
{
  "memory_id": "mem-abc",
  "namespace": "default",
  "query": "error logs",
  "top_k": 5
}
```

- **AgentCore Memory** – Creates memory resources via the AgentCore SDK, records conversations when `operation=record`, and fetches history with `get_last_k_turns` when `operation=retrieve`. Missing IDs are created automatically and returned as JSON.

```json
{
  "operation": "record",
  "memory_id": "mem-123",
  "actor_id": "user",
  "role": "user",
  "content": "Hello!"
}
```
### Other Notes
- **Lambda YAML to JSON** – Lightweight wrapper for reusing your Lambda workloads from workflows.

- **Lambda Invoker** – Calls any Lambda function name or ARN with a JSON payload, optional qualifier, per-call credentials, and tail logs for quick serverless utilities.

```json
{
  "lambda_name": "my-function",
  "payload_json": {"action": "ping"},
  "invocation_type": "RequestResponse",
  "include_logs": true
}
```
- **Step Functions Start Execution** – Starts a state machine by ARN, passing execution input, optional name, trace header, and tags so agents can fan out or orchestrate long-running jobs.
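A start-execution call might be sketched as follows; the field names are assumptions modeled on the other examples, so check the tool's parameter list for the exact names:

```json
{
  "state_machine_arn": "arn:aws:states:us-east-1:123456789012:stateMachine:my-job",
  "input": "{\"date\":\"2025-02-27\"}",
  "name": "run-20250227"
}
```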
- **Nova Canvas / Nova Reel** – Provide the image and video pipelines described above for immediate invocation as Dify tools.
The plugin is designed to interact with AWS services (such as Bedrock, Lambda, S3, and DynamoDB) on your behalf. It does not collect analytics or telemetry beyond what is required to fulfill the tool invocations you issue.
### Data Collection

- **User-supplied inputs.** Text prompts, speech/audio URLs, translation requests, Lambda payloads, and other parameters that you pass to the tools are sent to the corresponding AWS service only for the purpose of executing that tool invocation.
- **Configuration metadata.** Optional AWS credentials (access key, secret key, region) may be provided either at the provider level or per tool. These values stay within the plugin runtime and are forwarded solely to AWS SDK clients to authenticate requests.
- **Generated outputs.** Responses received from AWS (e.g., Bedrock retrieve results or other tool outputs) are returned directly to Dify and are not stored elsewhere by this plugin.
- The plugin does **not** collect personally identifiable information unless it is included in the data that you explicitly send to the tools.
### Data Usage

- Inputs are transmitted to AWS services strictly to execute the selected tool (e.g., running Transcribe, retrieving from a Bedrock KB, generating Nova images/videos, reranking documents).
- Outputs from AWS are returned to the Dify workflow or agent as-is. No secondary processing or analysis is performed beyond the light formatting necessary for the Dify UI.
- The plugin does not sell, share, or reuse your data for any other purpose. Data is not used for model training by this plugin.
### Data Storage

- By default, the plugin does **not** store any user inputs or outputs on its own disk.
- Temporary files (e.g., downloaded GIFs for frame extraction) are written to local storage only for the duration of the request and deleted immediately after completion.
- Any persistent storage happens only when you instruct a tool to do so (e.g., writing a file to S3 or DynamoDB via the respective tools). In such cases the data resides in your AWS account under resources you control.
### Third-party Services

- The plugin communicates exclusively with AWS services using the official AWS SDK (boto3) and, for browser automation, the Bedrock AgentCore Browser service plus Playwright. No other third-party APIs are contacted.
- When using OpenSearch, SageMaker, Bedrock, Lambda, Transcribe, Comprehend, S3, or DynamoDB tools, the data is transmitted directly to those AWS endpoints over HTTPS.
- Browser tooling stores connection metadata (WebSocket URLs, headers) in AWS Systems Manager Parameter Store in your account so that sessions can be reused. These parameters contain no additional user data beyond what is required to connect.
### Security

- All network calls to AWS services use HTTPS, and AWS credentials are loaded into boto3 clients only when needed. If you provide credentials via the provider settings, they remain in memory within the plugin runtime and are not persisted.
- Parameter Store entries created for AgentCore Browser sessions are stored in your AWS account and inherit the IAM policies you configure.
- The browser tool caches Playwright sessions in memory only for the life of the plugin process and cleans up resources when sessions are closed.