Commit ee062d5

v1.0.5 ADD cf_create_invalidation
1 parent 999f034 commit ee062d5

6 files changed

Lines changed: 732 additions & 22 deletions


README.md

Lines changed: 238 additions & 11 deletions
@@ -1,7 +1,7 @@
 # my_aws_tools
 
 **Author:** r3-yamauchi
-**Version:** 1.0.4
+**Version:** 1.0.5
 **Type:** tool
 
 English | [Japanese](https://github.com/r3-yamauchi/dify-my-aws-tools-plugin/blob/main/readme/README_ja_JP.md)
@@ -41,6 +41,7 @@ Included tools:
 - S3 List Buckets
 - S3 Create Bucket
 - S3 List Objects
+- CloudFront Create Invalidation
 - DynamoDB Manager
 - Agentcore Code Interpreter
 - Agentcore Memory
@@ -53,8 +54,34 @@ This project is distributed under the Apache License 2.0. See `LICENSE` for the
 ## Feature Highlights by Category
 
 ### Amazon Bedrock
+
 - **Bedrock Retrieve** – Calls the `bedrock-agent-runtime` Retrieve API to run semantic or hybrid searches against a selected Knowledge Base. You can switch metadata filters, result counts, and Bedrock Reranking models (cohere.rerank-v3-5 / amazon.rerank-v1), and receive outputs as JSON or ranked text.
+
+  ```json
+  {
+    "knowledge_base_id": "ABCDEFG8H9",
+    "query": "latest product roadmap",
+    "search_type": "HYBRID",
+    "max_results": 5,
+    "reranking_model": "amazon.rerank-v1"
+  }
+  ```
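
  A metadata filter can narrow results to matching documents. This is a sketch: the flat `metadata_filter` parameter name is illustrative (not confirmed by this README), assuming the tool forwards it to the Retrieve API's `retrievalConfiguration.vectorSearchConfiguration.filter`:

  ```json
  {
    "knowledge_base_id": "ABCDEFG8H9",
    "query": "latest product roadmap",
    "metadata_filter": {"equals": {"key": "category", "value": "roadmap"}}
  }
  ```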
+
 - **Bedrock Retrieve and Generate** – Wraps `retrieve_and_generate` so KNOWLEDGE_BASE or EXTERNAL_SOURCES flows run in a single call. Supplying `session_configuration` and `session_id` lets Bedrock maintain session state, and the tool returns the text plus citation metadata.
+
+  ```json
+  {
+    "knowledge_base_id": "ABCDEFG8H9",
+    "query": "Summarize the incident response runbook",
+    "generation_configuration": {
+      "promptTemplate": "Using the KB, summarize concisely: {{query}}"
+    },
+    "retrieval_configuration": {
+      "vectorSearchConfiguration": {"numberOfResults": 3}
+    }
+  }
+  ```
+
 - **Apply Guardrail** – Uses Bedrock Runtime `apply_guardrail` with these features:
   - Inputs: `content` array (multiple texts and/or images via bytes or S3 URI) or a single `text` that is auto-chunked into 1000-character pieces and wrapped as content.
   - `source`: PREPROCESS (default) or POSTPROCESS to target pre/post LLM stages.
@@ -93,65 +120,265 @@ This project is distributed under the Apache License 2.0. See `LICENSE` for the
     ]
   }
   ```
+
 - **Nova Canvas** – Invokes Nova Canvas v1 for TEXT_IMAGE, COLOR_GUIDED, IMAGE_VARIATION, INPAINTING, OUTPAINTING, and BACKGROUND_REMOVAL tasks. Input images are fetched from S3 and outputs are uploaded back while also streamed to Dify as PNG blobs.
+
+  ```json
+  {
+    "task": "TEXT_IMAGE",
+    "prompt": "A lighthouse during a storm",
+    "output_s3_uri": "s3://my-bucket/outputs/canvas.png"
+  }
+  ```
+
 - **Nova Reel** – Uses Nova Reel v1 to create videos from text or from a seed image. Results are saved as MP4 files in the specified S3 path, and synchronous mode polls until completion to return the binary.
 
+  ```json
+  {
+    "mode": "TEXT_TO_VIDEO",
+    "prompt": "A drone flyover of snowy mountains",
+    "output_s3_uri": "s3://my-bucket/outputs/reel.mp4",
+    "wait_for_completion": true
+  }
+  ```
+
 ### Audio & Media Processing
+
 - **Extract Frame** – Downloads GIF animations and extracts evenly spaced PNG frames. Users choose the number of frames (from two for first/last to any higher count), and each frame is returned as binary output.
 
+  ```json
+  {
+    "gif_url": "https://example.com/anim.gif",
+    "frame_count": 4
+  }
+  ```
+
 - **Lambda YAML to JSON** – Calls a Lambda function synchronously with YAML text in the request body and returns the JSON body only when the Lambda responds with `statusCode` 200.
 
+  ```yaml
+  lambda_name: yaml-to-json
+  yaml_content: |
+    key: value
+    list:
+      - a
+      - b
+  ```
+
 - **Bedrock KB List** – Calls `list_knowledge_bases` to enumerate available knowledge bases, returning summaries (status, creation date, vector store) and pagination tokens for downstream filtering.
+
+  ```json
+  {
+    "max_results": 20
+  }
+  ```
+
 - **Bedrock KB Data Sources** – Invokes `list_data_sources` for a given knowledge base, returning connector information, synchronization state, and pagination tokens so you can select the correct source before running ingestion jobs.
+
+  ```json
+  {
+    "knowledge_base_id": "ABCDEFG8H9",
+    "max_results": 10
+  }
+  ```
+
 - **Bedrock KB Sync** – Calls `StartIngestionJob` for a given knowledge base/data source pair so you can synchronize documents on demand, optionally setting a client token or deletion policy.
 
+  ```json
+  {
+    "knowledge_base_id": "ABCDEFG8H9",
+    "data_source_id": "ds-001",
+    "client_token": "sync-20250227"
+  }
+  ```
+
 ### Storage & Database Operations
-- **S3 Operator** – Reads or writes text content to `s3://` URIs and optionally produces presigned URLs. `write` uploads UTF-8 text; `read` returns either the text body or a presigned link.
-- **S3 File Uploader** – Accepts a file emitted by an upstream workflow node, uploads it to the specified bucket/key prefix, and can optionally return a presigned URL so later nodes can fetch the object without AWS credentials.
-- **S3 File Download** – Fetches objects from S3; either returns a presigned URL or streams the binary into the workflow along with a variable containing bucket/key metadata for downstream nodes.
-- **DynamoDB Manager** – Offers PAY_PER_REQUEST table creation plus `put_item`, `get_item`, and `delete_item`, supporting custom partition/sort keys and JSON `item_data` payloads.
+
+- **CloudFront Create Invalidation** – Submits `create_invalidation` for a distribution. Accepts either `paths` (e.g., `["/*"]` or `["/index.html", "/css/*"]`) or an `invalidation_batch` JSON object, plus an optional `caller_reference`; when no paths are given, the default invalidates all paths.
+
+  ```json
+  {
+    "distribution_id": "D123456"
+  }
+  ```
+
+  ```json
+  {
+    "distribution_id": "D123456",
+    "paths": ["/index.html", "/css/*"]
+  }
+  ```
+
+  ```json
+  {
+    "distribution_id": "D123456",
+    "caller_reference": "my-ref-1",
+    "invalidation_batch": {
+      "Paths": {
+        "Items": ["/*"]
+      }
+    }
+  }
+  ```
+
+- **S3 File Uploader** – Uploads a workflow file to the specified bucket/key and can return a presigned URL.
+
+  ```json
+  {
+    "bucket_name": "my-bucket",
+    "object_key": "uploads/example.txt",
+    "file": "{{file}}",
+    "return_presigned_url": true
+  }
+  ```
+
+- **S3 Operator (write)** – Reads or writes text content to `s3://` URIs; this example writes JSON text.
+
+  ```json
+  {
+    "operation": "write",
+    "s3_uri": "s3://my-bucket/config.json",
+    "text": "{\"env\":\"prod\"}"
+  }
+  ```
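
  A `read` call is the mirror image (a sketch using only the parameters shown above; selecting the presigned-link variant may require an additional flag not documented here):

  ```json
  {
    "operation": "read",
    "s3_uri": "s3://my-bucket/config.json"
  }
  ```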
+
+- **S3 File Download** – Fetches objects from S3; returns a presigned URL or streams the binary (use `presign_only` / `download_mode`).
+
+  ```json
+  {
+    "bucket_name": "my-bucket",
+    "object_key": "reports/latest.pdf",
+    "presign_only": true,
+    "expires_in": 600
+  }
+  ```
+
+- **DynamoDB Manager** – Creates PAY_PER_REQUEST tables and supports `put_item` / `get_item` / `delete_item` with JSON `item_data`.
+
+  ```json
+  {
+    "operation": "put_item",
+    "table_name": "users",
+    "partition_key_name": "user_id",
+    "item_data": {
+      "user_id": "u-1",
+      "name": "Alice"
+    }
+  }
+  ```
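
  A `get_item` sketch; the `partition_key_value` name is illustrative (the tool's actual key-value parameter is not shown in this README):

  ```json
  {
    "operation": "get_item",
    "table_name": "users",
    "partition_key_name": "user_id",
    "partition_key_value": "u-1"
  }
  ```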
 
 ### Messaging
-- **SNS Publish** – Publishes messages to an SNS topic ARN with optional subject (100 chars) and MessageAttributes JSON. Supports per-tool AWS credentials/region overrides.
-- **SQS Send Message** – Sends a message to an SQS queue URL with optional delay seconds and MessageAttributes JSON. Supports per-tool AWS credentials/region overrides.
+
+- **SNS Publish** – Publishes to an SNS topic ARN with an optional subject and MessageAttributes.
+
+  ```json
+  {
+    "topic_arn": "arn:aws:sns:us-east-1:111122223333:alerts",
+    "message": "Deployed v1.2.3",
+    "subject": "Deploy notice"
+  }
+  ```
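
  Message attributes use the SNS `DataType`/`StringValue` shape; the `message_attributes` parameter name below is an assumption, not confirmed by this README:

  ```json
  {
    "topic_arn": "arn:aws:sns:us-east-1:111122223333:alerts",
    "message": "Deployed v1.2.3",
    "message_attributes": {"env": {"DataType": "String", "StringValue": "prod"}}
  }
  ```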
+
+- **SQS Send Message** – Sends to an SQS queue URL with optional delay seconds and MessageAttributes.
+
+  ```json
+  {
+    "queue_url": "https://sqs.us-east-1.amazonaws.com/111122223333/tasks",
+    "message_body": "{\"job_id\":123}",
+    "delay_seconds": 5
+  }
+  ```
 
 ### AgentCore Integrations
-- **AgentCore Memory** – Creates memory resources via the AgentCore SDK, records conversations when `operation=record`, and fetches history with `get_last_k_turns` when `operation=retrieve`. Missing IDs are created automatically and returned as JSON.
-- **AgentCore Memory Search** – Executes `retrieve_memories` for a given memory ID/namespace, limits the results to the requested top_k, and serializes timestamps to ISO 8601.
-- **Agentcore Code Interpreter** – Launches Bedrock AgentCore Code Interpreter sessions, optionally creates the interpreter, executes shell commands (`command`) and language-specific code (`language` + `code`), and returns IDs with the results.
+
+- **Agentcore Code Interpreter** – Creates or reuses an interpreter session to run shell commands or code.
+
+  ```json
+  {
+    "operation": "execute",
+    "code": "print(1+1)",
+    "language": "python"
+  }
+  ```
+
+- **AgentCore Memory Search** – Calls `retrieve_memories` for a memory/namespace with a top_k limit.
+
+  ```json
+  {
+    "memory_id": "mem-abc",
+    "namespace": "default",
+    "query": "error logs",
+    "top_k": 5
+  }
+  ```
+
+- **AgentCore Memory** – Creates memories and records/retrieves turns; supply `operation=record` or `retrieve`.
+
+  ```json
+  {
+    "operation": "record",
+    "memory_id": "mem-123",
+    "actor_id": "user",
+    "role": "user",
+    "content": "Hello!"
+  }
+  ```
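
  A retrieve sketch (assuming `operation=retrieve` maps to `get_last_k_turns`; the `k` parameter name is illustrative, not confirmed by this README):

  ```json
  {
    "operation": "retrieve",
    "memory_id": "mem-123",
    "actor_id": "user",
    "k": 5
  }
  ```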
 
 ### Other Notes
+
 - **Lambda YAML to JSON** – Lightweight wrapper for reusing your Lambda workloads from workflows.
+
 - **Lambda Invoker** – Calls any Lambda function name or ARN with a JSON payload, optional qualifier, per-call credentials, and tail logs for quick serverless utilities.
+
+  ```json
+  {
+    "lambda_name": "my-function",
+    "payload_json": {"action": "ping"},
+    "invocation_type": "RequestResponse",
+    "include_logs": true
+  }
+  ```
+
 - **Step Functions Start Execution** – Starts a state machine by ARN, passing execution input, optional name, trace header, and tags so agents can fan out or orchestrate long-running jobs.
-- **Nova Canvas / Nova Reel** – Provide the image and video pipelines described above for immediate invocation as Dify tools.
+
+  ```json
+  {
+    "state_machine_arn": "arn:aws:states:us-east-1:111122223333:stateMachine:MyFlow",
+    "input_json": {"task": "sync"},
+    "name": "run-001"
+  }
+  ```
 
 ## Privacy Policy
 
 The plugin is designed to interact with AWS services (such as Bedrock, Lambda, S3, and DynamoDB) on your behalf. It does not collect analytics or telemetry beyond what is required to fulfill the tool invocations you issue.
 
 ### Data Collection
+
 - **User-supplied inputs.** Text prompts, speech/audio URLs, translation requests, Lambda payloads, and other parameters that you pass to the tools are sent to the corresponding AWS service only for the purpose of executing that tool invocation.
 - **Configuration metadata.** Optional AWS credentials (access key, secret key, region) may be provided either at the provider level or per tool. These values stay within the plugin runtime and are forwarded solely to AWS SDK clients to authenticate requests.
 - **Generated outputs.** Responses received from AWS (e.g., Bedrock retrieve results or other tool outputs) are returned directly to Dify and are not stored elsewhere by this plugin.
 - The plugin does **not** collect personally identifiable information unless included in the data that you explicitly send to the tools.
 
 ### Data Usage
+
 - Inputs are transmitted to AWS services strictly to execute the selected tool (e.g., running Transcribe, retrieving from Bedrock KB, generating Nova images/videos, reranking documents).
 - Outputs from AWS are returned to the Dify workflow or agent as-is. No secondary processing or analysis is performed beyond light formatting necessary for the Dify UI.
 - The plugin does not sell, share, or reuse your data for any other purpose. Data is not used for model training by this plugin.
 
 ### Data Storage
+
 - By default, the plugin does **not** store any user inputs or outputs on its own disk.
 - Temporary files (e.g., downloaded GIFs for frame extraction) are written to local storage only for the duration of the request and deleted immediately after completion.
 - Any persistent storage happens only when you instruct a tool to do so (e.g., writing a file to S3 or DynamoDB via the respective tools). In such cases the data resides in your AWS account under the resources you control.
 
 ### Third-party Services
+
 - The plugin communicates exclusively with AWS services using the official AWS SDK (boto3) and, for browser automation, the Bedrock AgentCore Browser service plus Playwright. No other third-party APIs are contacted.
 - When using OpenSearch, SageMaker, Bedrock, Lambda, Transcribe, Comprehend, S3, or DynamoDB tools, the data is transmitted directly to those AWS endpoints over HTTPS.
 - Browser tooling stores connection metadata (WebSocket URLs, headers) in AWS Systems Manager Parameter Store in your account so that sessions can be reused. These parameters contain no additional user data beyond what is required to connect.
 
 ### Security
+
 - All network calls to AWS services use HTTPS, and AWS credentials are loaded into boto3 clients only when needed. If you provide credentials via the provider settings, they remain in memory within the plugin runtime and are not persisted.
 - Parameter Store entries created for AgentCore Browser sessions are stored in your AWS account and inherit the IAM policies you configure.
 - The browser tool caches Playwright sessions in memory only for the life of the plugin process and cleans up resources when sessions are closed.
manifest.yaml

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-version: 1.0.4
+version: 1.0.5
 type: plugin
 author: r3-yamauchi
 name: my_aws_tools

provider/my_aws_tools.yaml

Lines changed: 1 addition & 0 deletions

@@ -67,6 +67,7 @@ tools:
   - tools/agentcore_memory.yaml
   - tools/agentcore_memory_search.yaml
   - tools/agentcore_code_interpreter.yaml
+  - tools/cf_create_invalidation.yaml
 extra:
   python:
     source: provider/my_aws_tools.py
