Commit 88a1ba7

Update new version 1.3.3

1 parent e34f5ff commit 88a1ba7

4 files changed: +93 additions, −7 deletions


ChangeLog.md

Lines changed: 17 additions & 0 deletions

@@ -1,3 +1,20 @@
+# Version 1.3.3
+
+### Update
+
+Update ClassificationMetric:
+    + Rename confusion_matrix() in util file
+    + Support binary and multi-class classification with one-hot-encoder format
+    + Add Cohen's Kappa score
+    + Add Jaccard Similarity Index (Jaccard similarity coefficient)
+    + Add G-mean score
+    + Add GINI index
+    + Add ROC-AUC metric
+
+
+---------------------------------------------------------------------
+
+
 # Version 1.3.2
 
 ### Update
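Among the metrics this changelog adds, Cohen's Kappa is the least self-explanatory: it measures agreement between predictions and labels after correcting for the agreement expected by chance. As a minimal sketch of the underlying formula (independent of permetrics' own implementation; the sample labels are made up for illustration):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    accuracy and p_e is the accuracy expected from the label marginals alone."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    p_o = np.mean(y_true == y_pred)  # observed agreement
    # chance agreement: product of true/pred marginal frequencies, summed over classes
    p_e = sum((np.sum(y_true == c) / n) * (np.sum(y_pred == c) / n)
              for c in np.union1d(y_true, y_pred))
    return (p_o - p_e) / (1 - p_e)

print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.5
```

Here p_o = 0.75 and p_e = 0.5, so kappa = 0.5: the classifier is halfway between chance-level and perfect agreement.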

README.md

Lines changed: 9 additions & 5 deletions

@@ -3,7 +3,7 @@
 
 
 
-[![GitHub release](https://img.shields.io/badge/release-1.3.2-yellow.svg)](https://github.com/thieu1995/permetrics/releases)
+[![GitHub release](https://img.shields.io/badge/release-1.3.3-yellow.svg)](https://github.com/thieu1995/permetrics/releases)
 [![Wheel](https://img.shields.io/pypi/wheel/gensim.svg)](https://pypi.python.org/pypi/permetrics)
 [![PyPI version](https://badge.fury.io/py/permetrics.svg)](https://badge.fury.io/py/permetrics)
 ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/permetrics.svg)

@@ -20,7 +20,7 @@
 PerMetrics is a python library for performance metrics of machine learning models. We aim to implement all performance metrics for problems such as regression, classification, clustering, ... problems. Helping users in all field access metrics as fast as possible
 
 * **Free software:** Apache License, Version 2.0
-* **Total metrics**: 68 (47 regression metrics, 11 classification metrics)
+* **Total metrics**: 63 (47 regression metrics, 16 classification metrics)
 * **Documentation:** https://permetrics.readthedocs.io/en/latest/
 * **Python versions:** 3.6.x, 3.7.x, 3.8.x, 3.9.x, 3.10.x
 * **Dependencies:** numpy

@@ -42,7 +42,7 @@ PerMetrics is a python library for performance metrics of machine learning model
 ### Install with pip
 Install the [current PyPI release](https://pypi.python.org/pypi/permetrics):
 ```sh
-$ pip install permetrics==1.3.2
+$ pip install permetrics==1.3.3
 ```
 
 Or install the development version from GitHub:

@@ -221,8 +221,12 @@ If you are using mealpy in your project, we would appreciate citations:
 | **** | 8 | SS | Specificity Score | Higher is better (Best = 1), Range = [0, 1] |
 | **** | 9 | MCC | Matthews Correlation Coefficient | Higher is better (Best = 1), Range = [-1, +1] |
 | **** | 10 | HL | Hamming Loss | Higher is better (Best = 1), Range = [0, 1] |
-| **** | 11 | LS | Lift Score | Higher is better (Best = +inf), Range = [0, +inf) |
-| **** | 12 |
+| **** | 11 | CKS | Cohen's kappa score | Higher is better (Best = +1), Range = [-1, +1] |
+| **** | 12 | JSI | Jaccard Similarity Coefficient | Higher is better (Best = +1), Range = [0, +1] |
+| **** | 13 | GMS | Geometric Mean Score | Higher is better (Best = +1), Range = [0, +1] |
+| **** | 14 | GINI | GINI Index | Higher is better (Best = +1), Range = [0, +1] |
+| **** | 15 | ROC-AUC | ROC-AUC | Higher is better (Best = +1), Range = [0, +1] |
+| **** | 16 | | | |
 
 
 # Future works
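The table above lists both ROC-AUC and the GINI index. For binary problems these are tightly related under one common definition: AUC is the probability that a randomly chosen positive outranks a randomly chosen negative (the Mann-Whitney formulation), and Gini = 2·AUC − 1. A small sketch of that relationship with numpy (permetrics' multi-class GINI may be computed differently; the scores below are made up):

```python
import numpy as np

def auc_binary(y_true, scores):
    """ROC-AUC via the rank (Mann-Whitney) formulation: the fraction of
    (positive, negative) pairs where the positive scores higher, ties half."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

auc = auc_binary([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(auc)          # 0.75
print(2 * auc - 1)  # Gini for the binary case: 0.5
```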

run.py

Lines changed: 65 additions & 0 deletions

@@ -0,0 +1,65 @@
+#!/usr/bin/env python
+# Created by "Thieu" at 14:44, 05/02/2023 ----------%
+# Email: nguyenthieu2102@gmail.com %
+# Github: https://github.com/thieu1995 %
+# --------------------------------------------------%
+
+import numpy as np
+from permetrics.classification import ClassificationMetric
+
+## For integer labels or categorical labels
+y_true = [0, 1, 0, 0, 1, 0, 0, 0, 1, 2, 2, 2, 0]
+y_pred = [0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1]
+
+# y_true = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 2, 2, 2, 0])
+# y_pred = np.array([0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1])
+
+# y_true = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
+# y_pred = np.array([[0.1, 0.8, 0.1], [0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])
+
+# y_true = ["cat", "ant", "cat", "cat", "ant", "bird", "bird", "bird"]
+# y_pred = ["ant", "ant", "cat", "cat", "ant", "cat", "bird", "ant"]
+
+# y_true = [["cat", "ant"], ["cat", "cat"], ["ant", "bird"], ["bird", "bird"]]
+# y_pred = [["ant", "ant"], ["cat", "cat"], ["ant", "cat"], ["bird", "ant"]]
+
+# cm = ClassificationMetric(y_true, y_pred, decimal=5)
+# print(cm.jaccard_similarity_index(average=None))
+# print(cm.jaccard_similarity_coefficient(average="micro"))
+# print(cm.jsi(average="macro"))
+# print(cm.jsc(average="weighted"))
+
+# cm = ClassificationMetric(y_true, y_pred, decimal=5)
+# print(cm.gini_index())
+# print(cm.cks(average="micro"))
+# print(cm.CKS(average="macro"))
+# print(cm.CKS(average="weighted"))
+
+# print(cm.mcc(average=None))
+# print(cm.mcc(average="micro"))
+# print(cm.mcc(average="macro"))
+# print(cm.mcc(average="weighted"))
+
+# Example true labels and predicted scores for a 3-class problem
+# y_true = np.array([0, 1, 2, 1, 2, 0, 0, 1])
+# y_score = np.array([[0.8, 0.1, 0.1],
+#                     [0.2, 0.5, 0.3],
+#                     [0.1, 0.3, 0.6],
+#                     [0.3, 0.7, 0.0],
+#                     [0.4, 0.3, 0.3],
+#                     [0.6, 0.2, 0.2],
+#                     [0.9, 0.1, 0.0],
+#                     [0.1, 0.8, 0.1]])
+#
+# # y_true = [0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0]
+# # y_score = [0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1]
+#
+# cm = ClassificationMetric(y_true, y_pred, decimal=5)
+# print(cm.roc_auc_score(y_true, y_score, average="weighted"))
+
+
+cm = ClassificationMetric(y_true, y_pred, decimal=5)
+print(cm.gini_index(average=None))
+print(cm.GINI(average="macro"))
+print(cm.gini(average="weighted"))
+

setup.py

Lines changed: 2 additions & 2 deletions

@@ -9,14 +9,14 @@ def readme():
 
 setup(
     name="permetrics",
-    version="1.3.2",
+    version="1.3.3",
     author="Nguyen Van Thieu",
     author_email="nguyenthieu2102@gmail.com",
     description="PerMetrics: A framework of PERformance METRICS for machine learning models",
     long_description=readme(),
     long_description_content_type="text/markdown",
     url="https://github.com/thieu1995/permetrics",
-    download_url="https://github.com/thieu1995/permetrics/archive/v1.3.2.zip",
+    download_url="https://github.com/thieu1995/permetrics/archive/v1.3.3.zip",
     packages=find_packages(exclude=['tests*', 'examples*']),
     include_package_data=True,
     license="MIT",
