
Commit fbe1568: Change PyPI name back to flash-attn-4
1 parent: d91ea94

File tree: 4 files changed (+14, -9 lines)

.github/workflows/publish-fa4.yml

Lines changed: 11 additions & 6 deletions
@@ -1,4 +1,4 @@
-name: Publish fa4 to PyPI
+name: Publish flash-attn-4 to PyPI
 
 on:
   push:
@@ -19,10 +19,13 @@ jobs:
         with:
           python-version: '3.12'
       - name: Install build dependencies
-        run: pip install build
+        run: pip install build twine
       - name: Build package
         run: python -m build
         working-directory: flash_attn/cute
+      - name: Check package metadata
+        run: twine check dist/*
+        working-directory: flash_attn/cute
       - name: Store distribution packages
         uses: actions/upload-artifact@v4
         with:
@@ -47,14 +50,16 @@ jobs:
   publish-to-pypi:
     needs: build
     runs-on: ubuntu-latest
+    environment:
+      name: pypi
+      url: https://pypi.org/p/flash-attn-4
+    permissions:
+      id-token: write
     steps:
       - name: Download distribution packages
         uses: actions/download-artifact@v4
         with:
           name: python-package-distributions
           path: dist/
       - name: Publish to PyPI
-        env:
-          TWINE_USERNAME: "__token__"
-          TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
-        run: pip install "twine>=6.1" "packaging>=24.2" && python -m twine upload dist/*
+        uses: pypa/gh-action-pypi-publish@release/v1

README.md

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ FlashAttention-4 is written in CuTeDSL and optimized for Hopper and Blackwell GP
 
 To install:
 ```sh
-pip install fa4
+pip install flash-attn-4
 ```
 
 Once installed, you can use it as follows:

flash_attn/cute/README.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ FlashAttention-4 is a CuTeDSL-based implementation of FlashAttention for Hopper
 ## Installation
 
 ```sh
-pip install fa4
+pip install flash-attn-4
 ```
 
 ## Usage

flash_attn/cute/pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ requires = ["setuptools>=75", "setuptools-scm>=8"]
 build-backend = "setuptools.build_meta"
 
 [project]
-name = "fa4"
+name = "flash-attn-4"
 dynamic = ["version"]
 description = "Flash Attention CUTE (CUDA Template Engine) implementation"
 readme = "README.md"
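A side note on the hyphenated distribution name: it is legal on PyPI because installers normalize project names per PEP 503, collapsing runs of `-`, `_`, and `.` into a single `-` and lowercasing, so `flash-attn-4`, `flash_attn_4`, and `Flash.Attn.4` all resolve to the same project. The importable package name (here, `flash_attn`) is a separate identifier and is unaffected. A minimal sketch of the normalization rule (`normalize_name` is an illustrative helper, not part of the package):

```python
import re

def normalize_name(name: str) -> str:
    # PEP 503 project-name normalization: collapse runs of '-', '_', '.'
    # into a single '-' and lowercase the result.
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize_name("flash-attn-4"))   # flash-attn-4
print(normalize_name("Flash_Attn.4"))   # flash-attn-4
print(normalize_name("fa4"))            # fa4
```

This is why `pip install flash_attn_4` would also find the renamed package, while the old name `fa4` normalizes to a different project entirely.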

0 commit comments
