* [gentoo-commits] repo/gentoo:master commit in: sci-libs/datasets/files/, sci-libs/datasets/
@ 2023-05-16 5:21 Alfredo Tupone
0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2023-05-16 5:21 UTC (permalink / raw)
To: gentoo-commits
commit: da6e3eae5a4bc7472c759444ecbd9ed79cbfa4a2
Author: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Tue May 16 05:20:21 2023 +0000
Commit: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Tue May 16 05:21:31 2023 +0000
URL: https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=da6e3eae
sci-libs/datasets: use seqeval for test
Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>
.../{datasets-2.11.0-r1.ebuild => datasets-2.11.0-r2.ebuild} | 1 +
sci-libs/datasets/files/datasets-2.11.0-tests.patch | 10 ----------
2 files changed, 1 insertion(+), 10 deletions(-)
diff --git a/sci-libs/datasets/datasets-2.11.0-r1.ebuild b/sci-libs/datasets/datasets-2.11.0-r2.ebuild
similarity index 97%
rename from sci-libs/datasets/datasets-2.11.0-r1.ebuild
rename to sci-libs/datasets/datasets-2.11.0-r2.ebuild
index 977bf0d698b9..a2f4ad26e65b 100644
--- a/sci-libs/datasets/datasets-2.11.0-r1.ebuild
+++ b/sci-libs/datasets/datasets-2.11.0-r2.ebuild
@@ -43,6 +43,7 @@ BDEPEND="test? (
dev-python/pytest-datadir[${PYTHON_USEDEP}]
dev-python/decorator[${PYTHON_USEDEP}]
sci-libs/jiwer[${PYTHON_USEDEP}]
+ sci-libs/seqeval[${PYTHON_USEDEP}]
')
)"
diff --git a/sci-libs/datasets/files/datasets-2.11.0-tests.patch b/sci-libs/datasets/files/datasets-2.11.0-tests.patch
index 0babe8b23d58..e105c01bc63b 100644
--- a/sci-libs/datasets/files/datasets-2.11.0-tests.patch
+++ b/sci-libs/datasets/files/datasets-2.11.0-tests.patch
@@ -123,16 +123,6 @@
def test_dataset_with_audio_feature_map_undecoded(shared_datadir):
audio_path = str(shared_datadir / "test_audio_44100.wav")
data = {"audio": [audio_path]}
---- a/tests/test_metric_common.py 2023-05-06 13:20:24.496197629 +0200
-+++ b/tests/test_metric_common.py 2023-05-06 13:21:09.916732417 +0200
-@@ -210,6 +210,7 @@
- yield
-
-
-+@pytest.mark.skip(reason="require seqeval")
- def test_seqeval_raises_when_incorrect_scheme():
- metric = load_metric(os.path.join("metrics", "seqeval"))
- wrong_scheme = "ERROR"
--- a/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:00:39.560876163 +0200
+++ b/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:01:26.005212423 +0200
@@ -1,10 +1,8 @@
* [gentoo-commits] repo/gentoo:master commit in: sci-libs/datasets/files/, sci-libs/datasets/
@ 2023-08-26 6:28 Alfredo Tupone
0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2023-08-26 6:28 UTC (permalink / raw)
To: gentoo-commits
commit: a3c14cfb38a777a44ecd9fbb195db5874c390809
Author: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 26 06:27:20 2023 +0000
Commit: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Sat Aug 26 06:27:45 2023 +0000
URL: https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=a3c14cfb
sci-libs/datasets: add 2.14.4
Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>
sci-libs/datasets/Manifest | 1 +
sci-libs/datasets/datasets-2.14.4.ebuild | 59 +++++
.../datasets/files/datasets-2.14.4-tests.patch | 242 +++++++++++++++++++++
3 files changed, 302 insertions(+)
diff --git a/sci-libs/datasets/Manifest b/sci-libs/datasets/Manifest
index 5895ad1a2004..ac1dc508ab82 100644
--- a/sci-libs/datasets/Manifest
+++ b/sci-libs/datasets/Manifest
@@ -1,3 +1,4 @@
DIST datasets-2.11.0.gh.tar.gz 2141289 BLAKE2B 0fb471dd6ee5de3831eb6586c4a15e67381262470b72d5ab02ee87dfc7977cb4d40e04da6507049d1e47cb8948cad11988bb7627293b48231e1cd413d2cfb885 SHA512 9ec2274d7978e3dde1b2f8ce78dd65bdf66742bbfee7b8672af46216aeaae3ef5c4604a8a5ea0bdee808f1c362cca9a122c16d2e9a161678148e581e4cd5c863
DIST datasets-2.12.0.gh.tar.gz 2149274 BLAKE2B 8f188901dfe293ac2b673f37e0d135e01a8f131adf9030ef1815ce2faa7ba0b36faf64a002cae1ced2d3ed5b7f50f43ba5cda90ab9254fd5f66bbfaed6085f3f SHA512 7389a1c6ee8ff4cda39a2c3f52218aa6f4b1cd6b45f48f83bfa2191359a8999d54153120d968b3cf7e5e932f88822783578e3d859dcb20f38fb0d915d88220c9
DIST datasets-2.13.1.gh.tar.gz 2166516 BLAKE2B 2269434b94145837e491ec6784218f6972df94a558b9067020076fb44dd937a103e3c57dd3761bb0a4cb3c3b6248299ec2a6c3f03c5bd016daaa8957591bf7b6 SHA512 3d2d1aad86b6a472cd6d0e6c661d4730cc0ed1a0fff55c739fc6a0ba68a8f53ae8789029553abd713d0b30648dd020f1880b2d8110c72b5c89a320c2b24f7752
+DIST datasets-2.14.4.gh.tar.gz 2142214 BLAKE2B d4c98a9f29ca748c3c20f32b9a89f053cf6327f56353341ba0073d3b5561ed9aea372d2fa74cadfa8b0f2ba0f6c2e9b3181cca9724719cfe3969f36bbb893f11 SHA512 c3a0701dd83474f4a0d839fe4ef56cfccc9f1d45b6506d44d0f9100bc9dbc90014d16c8e0090dc13f3b2d963bd96af45281bde6e3d7af230467ec7dd26204aa3
diff --git a/sci-libs/datasets/datasets-2.14.4.ebuild b/sci-libs/datasets/datasets-2.14.4.ebuild
new file mode 100644
index 000000000000..08ed796e9c2d
--- /dev/null
+++ b/sci-libs/datasets/datasets-2.14.4.ebuild
@@ -0,0 +1,59 @@
+# Copyright 2023 Gentoo Authors
+# Distributed under the terms of the GNU General Public License v2
+
+EAPI=8
+
+DISTUTILS_USE_PEP517=setuptools
+PYTHON_COMPAT=( python3_{9..11} )
+DISTUTILS_SINGLE_IMPL=1
+inherit distutils-r1
+
+DESCRIPTION="Access and share datasets for Audio, Computer Vision, and NLP tasks"
+HOMEPAGE="
+ https://pypi.org/project/datasets/
+"
+SRC_URI="https://github.com/huggingface/${PN}/archive/refs/tags/${PV}.tar.gz
+ -> ${P}.gh.tar.gz"
+IUSE="test"
+
+LICENSE="Apache-2.0"
+SLOT="0"
+KEYWORDS="~amd64"
+
+RDEPEND="
+ ${PYTHON_DEPS}
+ sci-libs/pytorch[${PYTHON_SINGLE_USEDEP}]
+ $(python_gen_cond_dep '
+ dev-python/absl-py[${PYTHON_USEDEP}]
+ dev-python/aiohttp[${PYTHON_USEDEP}]
+ dev-python/fsspec[${PYTHON_USEDEP}]
+ dev-python/multiprocess[${PYTHON_USEDEP}]
+ dev-python/pandas[${PYTHON_USEDEP}]
+ dev-python/pyarrow[${PYTHON_USEDEP},parquet,snappy]
+ dev-python/tqdm[${PYTHON_USEDEP}]
+ dev-python/xxhash[${PYTHON_USEDEP}]
+ dev-python/zstandard[${PYTHON_USEDEP}]
+ sci-libs/huggingface_hub[${PYTHON_USEDEP}]
+ sci-libs/scikit-learn[${PYTHON_USEDEP}]
+ ')
+"
+DEPEND="${RDEPEND}"
+BDEPEND="test? (
+ $(python_gen_cond_dep '
+ dev-python/pytest-datadir[${PYTHON_USEDEP}]
+ dev-python/decorator[${PYTHON_USEDEP}]
+ =dev-python/sqlalchemy-1*[${PYTHON_USEDEP}]
+ sci-libs/jiwer[${PYTHON_USEDEP}]
+ sci-libs/seqeval[${PYTHON_USEDEP}]
+ ')
+)"
+
+PATCHES=( "${FILESDIR}"/${P}-tests.patch )
+
+distutils_enable_tests pytest
+
+src_prepare() {
+ distutils-r1_src_prepare
+ rm tests/packaged_modules/test_spark.py || die
+ rm tests/test_upstream_hub.py || die
+}
diff --git a/sci-libs/datasets/files/datasets-2.14.4-tests.patch b/sci-libs/datasets/files/datasets-2.14.4-tests.patch
new file mode 100644
index 000000000000..5dd322309b20
--- /dev/null
+++ b/sci-libs/datasets/files/datasets-2.14.4-tests.patch
@@ -0,0 +1,242 @@
+--- a/tests/test_metric_common.py 2023-05-04 18:48:48.550861318 +0200
++++ b/tests/test_metric_common.py 2023-05-04 18:50:25.787364577 +0200
+@@ -93,6 +93,7 @@
+ INTENSIVE_CALLS_PATCHER = {}
+ metric_name = None
+
++ @pytest.mark.skip(reason="disabling, depends on bert_score, bleurt, math_equivalence, coval, nltk, faiss, mauve, rouge_score, sacrebleu, sacremoses ...")
+ @pytest.mark.filterwarnings("ignore:metric_module_factory is deprecated:FutureWarning")
+ @pytest.mark.filterwarnings("ignore:load_metric is deprecated:FutureWarning")
+ def test_load_metric(self, metric_name):
+--- a/tests/test_hf_gcp.py 2023-05-04 19:33:31.150825303 +0200
++++ b/tests/test_hf_gcp.py 2023-05-04 19:40:08.401759538 +0200
+@@ -75,6 +75,7 @@
+ self.assertTrue(os.path.exists(datset_info_path))
+
+
++@pytest.mark.skip(reason="require apache_beam")
+ @pytest.mark.integration
+ def test_as_dataset_from_hf_gcs(tmp_path_factory):
+ tmp_dir = tmp_path_factory.mktemp("test_hf_gcp") / "test_wikipedia_simple"
+--- a/tests/test_distributed.py 2023-05-04 19:43:09.861275030 +0200
++++ b/tests/test_distributed.py 2023-05-04 19:44:17.608326722 +0200
+@@ -74,6 +74,7 @@
+ split_dataset_by_node(full_ds.shuffle(), rank=0, world_size=world_size)
+
+
++@pytest.mark.skip(reason="require distributed torch")
+ @pytest.mark.parametrize("streaming", [False, True])
+ @require_torch
+ @pytest.mark.skipif(os.name == "nt", reason="execute_subprocess_async doesn't support windows")
+@@ -95,6 +96,7 @@
+ execute_subprocess_async(cmd, env=os.environ.copy())
+
+
++@pytest.mark.skip(reason="require distributed torch")
+ @pytest.mark.parametrize(
+ "nproc_per_node, num_workers",
+ [
+--- a/tests/utils.py 2023-05-06 08:43:16.251987543 +0200
++++ b/tests/utils.py 2023-05-06 08:44:24.467952870 +0200
+@@ -50,8 +50,8 @@
+ # Audio
+ require_sndfile = pytest.mark.skipif(
+ # On Windows and OS X, soundfile installs sndfile
+- find_spec("soundfile") is None or version.parse(importlib.metadata.version("soundfile")) < version.parse("0.12.0"),
+- reason="test requires sndfile>=0.12.1: 'pip install \"soundfile>=0.12.1\"'; ",
++ True,
++ reason="test requires librosa",
+ )
+
+ # Beam
+--- a/tests/features/test_audio.py 2023-05-06 09:03:58.680108142 +0200
++++ a/tests/features/test_audio.py 2023-05-06 09:05:50.463407967 +0200
+@@ -57,6 +57,7 @@
+ assert features.arrow_schema == pa.schema({"sequence_of_audios": pa.list_(Audio().pa_type)})
+
+
++@pytest.mark.skip(reason="require librosa")
+ @pytest.mark.parametrize(
+ "build_example",
+ [
+@@ -81,6 +82,7 @@
+ assert decoded_example.keys() == {"path", "array", "sampling_rate"}
+
+
++@pytest.mark.skip(reason="require librosa")
+ @pytest.mark.parametrize(
+ "build_example",
+ [
+@@ -148,6 +149,7 @@
+ assert decoded_example["sampling_rate"] == 48000
+
+
++@pytest.mark.skip(reason="require librosa")
+ @pytest.mark.parametrize("sampling_rate", [16_000, 48_000])
+ def test_audio_decode_example_pcm(shared_datadir, sampling_rate):
+ audio_path = str(shared_datadir / "test_audio_16000.pcm")
+@@ -414,6 +417,7 @@
+ assert column[0]["sampling_rate"] == 16000
+
+
++@pytest.mark.skip(reason="require librosa")
+ @pytest.mark.parametrize(
+ "build_data",
+ [
+@@ -438,6 +442,7 @@
+ assert item["audio"].keys() == {"path", "array", "sampling_rate"}
+
+
++@pytest.mark.skip(reason="require librosa")
+ def test_dataset_concatenate_audio_features(shared_datadir):
+ # we use a different data structure between 1 and 2 to make sure they are compatible with each other
+ audio_path = str(shared_datadir / "test_audio_44100.wav")
+@@ -451,6 +456,7 @@
+ assert concatenated_dataset[1]["audio"]["array"].shape == dset2[0]["audio"]["array"].shape
+
+
++@pytest.mark.skip(reason="require librosa")
+ def test_dataset_concatenate_nested_audio_features(shared_datadir):
+ # we use a different data structure between 1 and 2 to make sure they are compatible with each other
+ audio_path = str(shared_datadir / "test_audio_44100.wav")
+@@ -610,6 +616,7 @@
+ assert isinstance(ds, Dataset)
+
+
++@require_sndfile
+ def test_dataset_with_audio_feature_undecoded(shared_datadir):
+ audio_path = str(shared_datadir / "test_audio_44100.wav")
+ data = {"audio": [audio_path]}
+@@ -627,6 +634,7 @@
+ assert column[0] == {"path": audio_path, "bytes": None}
+
+
++@require_sndfile
+ def test_formatted_dataset_with_audio_feature_undecoded(shared_datadir):
+ audio_path = str(shared_datadir / "test_audio_44100.wav")
+ data = {"audio": [audio_path]}
+@@ -658,6 +666,7 @@
+ assert column[0] == {"path": audio_path, "bytes": None}
+
+
++@require_sndfile
+ def test_dataset_with_audio_feature_map_undecoded(shared_datadir):
+ audio_path = str(shared_datadir / "test_audio_44100.wav")
+ data = {"audio": [audio_path]}
+--- a/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:00:39.560876163 +0200
++++ b/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:01:26.005212423 +0200
+@@ -1,10 +1,8 @@
+ import shutil
+ import textwrap
+
+-import librosa
+ import numpy as np
+ import pytest
+-import soundfile as sf
+
+ from datasets import Audio, ClassLabel, Features, Value
+ from datasets.data_files import DataFilesDict, get_data_patterns
+@@ -192,8 +190,11 @@
+ return data_files_with_two_splits_and_metadata
+
+
++@pytest.mark.skip(reason="require soundfile")
+ @pytest.fixture
+ def data_files_with_zip_archives(tmp_path, audio_file):
++ import soundfile as sf
++ import librosa
+ data_dir = tmp_path / "audiofolder_data_dir_with_zip_archives"
+ data_dir.mkdir(parents=True, exist_ok=True)
+ archive_dir = data_dir / "archive"
+--- a/tests/test_arrow_dataset.py 2023-05-06 15:36:11.080459079 +0200
++++ b/tests/test_arrow_dataset.py 2023-05-06 15:38:07.452828528 +0200
+@@ -4136,6 +4136,7 @@
+ )
+ self.assertDictEqual(features_after_cast, dset.features)
+
++ @pytest.mark.skip(reason="require soundfile")
+ def test_task_automatic_speech_recognition(self):
+ # Include a dummy extra column `dummy` to test we drop it correctly
+ features_before_cast = Features(
+--- a/tests/test_streaming_download_manager.py 2023-08-26 07:33:41.937389401 +0200
++++ b/tests/test_streaming_download_manager.py 2023-08-26 07:37:22.521218698 +0200
+@@ -218,6 +218,7 @@
+ assert output_path == _readd_double_slash_removed_by_path(Path(expected_path).as_posix())
+
+
++@pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, exists",
+ [
+@@ -301,6 +302,7 @@
+ assert list(f) == TEST_URL_CONTENT.splitlines(keepends=True)
+
+
++@pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, expected_paths",
+ [
+@@ -331,6 +333,7 @@
+ xlistdir(root_url, download_config=download_config)
+
+
++@pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, isdir",
+ [
+@@ -358,6 +361,7 @@
+ assert xisdir(root_url, download_config=download_config) is False
+
+
++@pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, isfile",
+ [
+@@ -382,6 +386,7 @@
+ assert xisfile(root_url + "qwertyuiop", download_config=download_config) is False
+
+
++@pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, size",
+ [
+@@ -407,6 +412,7 @@
+ xgetsize(root_url + "qwertyuiop", download_config=download_config)
+
+
++@pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, expected_paths",
+ [
+@@ -450,6 +456,7 @@
+ assert len(xglob("zip://qwertyuiop/*::" + root_url, download_config=download_config)) == 0
+
+
++@pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, expected_outputs",
+ [
+@@ -540,6 +547,7 @@
+ def test_xpath_as_posix(self, input_path, expected_path):
+ assert xPath(input_path).as_posix() == expected_path
+
++ @pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, exists",
+ [
+@@ -555,6 +563,7 @@
+ (tmp_path / "file.txt").touch()
+ assert xexists(input_path) is exists
+
++ @pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, pattern, expected_paths",
+ [
+@@ -593,6 +602,7 @@
+ output_paths = sorted(xPath(input_path).glob(pattern))
+ assert output_paths == expected_paths
+
++ @pytest.mark.skip(reason="not working in sandbox")
+ @pytest.mark.parametrize(
+ "input_path, pattern, expected_paths",
+ [
* [gentoo-commits] repo/gentoo:master commit in: sci-libs/datasets/files/, sci-libs/datasets/
@ 2023-12-25 9:40 Alfredo Tupone
0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2023-12-25 9:40 UTC (permalink / raw)
To: gentoo-commits
commit: 12a265407c6996b16a2f54e6ee32448fcfe74ea6
Author: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 25 09:39:47 2023 +0000
Commit: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Mon Dec 25 09:40:29 2023 +0000
URL: https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=12a26540
sci-libs/datasets: 2.14.7 bump, remove old
Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>
sci-libs/datasets/Manifest | 5 +-
sci-libs/datasets/datasets-2.11.0-r2.ebuild | 52 -----
sci-libs/datasets/datasets-2.12.0.ebuild | 53 -----
sci-libs/datasets/datasets-2.13.1.ebuild | 59 -----
...tasets-2.14.4.ebuild => datasets-2.14.7.ebuild} | 13 +-
.../datasets/files/datasets-2.11.0-tests.patch | 242 ---------------------
.../datasets/files/datasets-2.12.0-tests.patch | 242 ---------------------
7 files changed, 11 insertions(+), 655 deletions(-)
diff --git a/sci-libs/datasets/Manifest b/sci-libs/datasets/Manifest
index ac1dc508ab82..aedfb1428ab5 100644
--- a/sci-libs/datasets/Manifest
+++ b/sci-libs/datasets/Manifest
@@ -1,4 +1 @@
-DIST datasets-2.11.0.gh.tar.gz 2141289 BLAKE2B 0fb471dd6ee5de3831eb6586c4a15e67381262470b72d5ab02ee87dfc7977cb4d40e04da6507049d1e47cb8948cad11988bb7627293b48231e1cd413d2cfb885 SHA512 9ec2274d7978e3dde1b2f8ce78dd65bdf66742bbfee7b8672af46216aeaae3ef5c4604a8a5ea0bdee808f1c362cca9a122c16d2e9a161678148e581e4cd5c863
-DIST datasets-2.12.0.gh.tar.gz 2149274 BLAKE2B 8f188901dfe293ac2b673f37e0d135e01a8f131adf9030ef1815ce2faa7ba0b36faf64a002cae1ced2d3ed5b7f50f43ba5cda90ab9254fd5f66bbfaed6085f3f SHA512 7389a1c6ee8ff4cda39a2c3f52218aa6f4b1cd6b45f48f83bfa2191359a8999d54153120d968b3cf7e5e932f88822783578e3d859dcb20f38fb0d915d88220c9
-DIST datasets-2.13.1.gh.tar.gz 2166516 BLAKE2B 2269434b94145837e491ec6784218f6972df94a558b9067020076fb44dd937a103e3c57dd3761bb0a4cb3c3b6248299ec2a6c3f03c5bd016daaa8957591bf7b6 SHA512 3d2d1aad86b6a472cd6d0e6c661d4730cc0ed1a0fff55c739fc6a0ba68a8f53ae8789029553abd713d0b30648dd020f1880b2d8110c72b5c89a320c2b24f7752
-DIST datasets-2.14.4.gh.tar.gz 2142214 BLAKE2B d4c98a9f29ca748c3c20f32b9a89f053cf6327f56353341ba0073d3b5561ed9aea372d2fa74cadfa8b0f2ba0f6c2e9b3181cca9724719cfe3969f36bbb893f11 SHA512 c3a0701dd83474f4a0d839fe4ef56cfccc9f1d45b6506d44d0f9100bc9dbc90014d16c8e0090dc13f3b2d963bd96af45281bde6e3d7af230467ec7dd26204aa3
+DIST datasets-2.14.7.gh.tar.gz 2145270 BLAKE2B b3196f75bd52432091052e63ccfc538072b30bead213c7ddc549724c8efedacdf6bb8934574220ee62e27a48240a769ad5e79c4e39cad92538dc6947f7f9bd2b SHA512 87ecaec34670af5b4879aaa85e730fc4ba376028e7ca033a556aec9ac55156f11252dd130c12dc160d5c3d5618fa8888072e46c7dcc01eed9c0e2e07657b0b74
diff --git a/sci-libs/datasets/datasets-2.11.0-r2.ebuild b/sci-libs/datasets/datasets-2.11.0-r2.ebuild
deleted file mode 100644
index a2f4ad26e65b..000000000000
--- a/sci-libs/datasets/datasets-2.11.0-r2.ebuild
+++ /dev/null
@@ -1,52 +0,0 @@
-# Copyright 2023 Gentoo Authors
-# Distributed under the terms of the GNU General Public License v2
-
-EAPI=8
-
-DISTUTILS_USE_PEP517=setuptools
-PYTHON_COMPAT=( python3_{9..11} )
-DISTUTILS_SINGLE_IMPL=1
-inherit distutils-r1
-
-DESCRIPTION="Access and share datasets for Audio, Computer Vision, and NLP tasks"
-HOMEPAGE="
- https://pypi.org/project/datasets/
-"
-SRC_URI="https://github.com/huggingface/${PN}/archive/refs/tags/${PV}.tar.gz
- -> ${P}.gh.tar.gz"
-IUSE="test"
-
-LICENSE="Apache-2.0"
-SLOT="0"
-KEYWORDS="~amd64"
-
-RDEPEND="
- ${PYTHON_DEPS}
- sci-libs/pytorch[${PYTHON_SINGLE_USEDEP}]
- $(python_gen_cond_dep '
- dev-python/absl-py[${PYTHON_USEDEP}]
- dev-python/aiohttp[${PYTHON_USEDEP}]
- dev-python/fsspec[${PYTHON_USEDEP}]
- dev-python/multiprocess[${PYTHON_USEDEP}]
- dev-python/pandas[${PYTHON_USEDEP}]
- dev-python/pyarrow[${PYTHON_USEDEP},parquet,snappy]
- dev-python/tqdm[${PYTHON_USEDEP}]
- dev-python/xxhash[${PYTHON_USEDEP}]
- dev-python/zstandard[${PYTHON_USEDEP}]
- sci-libs/huggingface_hub[${PYTHON_USEDEP}]
- sci-libs/scikit-learn[${PYTHON_USEDEP}]
- ')
-"
-DEPEND="${RDEPEND}"
-BDEPEND="test? (
- $(python_gen_cond_dep '
- dev-python/pytest-datadir[${PYTHON_USEDEP}]
- dev-python/decorator[${PYTHON_USEDEP}]
- sci-libs/jiwer[${PYTHON_USEDEP}]
- sci-libs/seqeval[${PYTHON_USEDEP}]
- ')
-)"
-
-PATCHES=( "${FILESDIR}"/${P}-tests.patch )
-
-distutils_enable_tests pytest
diff --git a/sci-libs/datasets/datasets-2.12.0.ebuild b/sci-libs/datasets/datasets-2.12.0.ebuild
deleted file mode 100644
index 66b609fd2b57..000000000000
--- a/sci-libs/datasets/datasets-2.12.0.ebuild
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright 2023 Gentoo Authors
-# Distributed under the terms of the GNU General Public License v2
-
-EAPI=8
-
-DISTUTILS_USE_PEP517=setuptools
-PYTHON_COMPAT=( python3_{9..11} )
-DISTUTILS_SINGLE_IMPL=1
-inherit distutils-r1
-
-DESCRIPTION="Access and share datasets for Audio, Computer Vision, and NLP tasks"
-HOMEPAGE="
- https://pypi.org/project/datasets/
-"
-SRC_URI="https://github.com/huggingface/${PN}/archive/refs/tags/${PV}.tar.gz
- -> ${P}.gh.tar.gz"
-IUSE="test"
-
-LICENSE="Apache-2.0"
-SLOT="0"
-KEYWORDS="~amd64"
-
-RDEPEND="
- ${PYTHON_DEPS}
- sci-libs/pytorch[${PYTHON_SINGLE_USEDEP}]
- $(python_gen_cond_dep '
- dev-python/absl-py[${PYTHON_USEDEP}]
- dev-python/aiohttp[${PYTHON_USEDEP}]
- dev-python/fsspec[${PYTHON_USEDEP}]
- dev-python/multiprocess[${PYTHON_USEDEP}]
- dev-python/pandas[${PYTHON_USEDEP}]
- dev-python/pyarrow[${PYTHON_USEDEP},parquet,snappy]
- dev-python/tqdm[${PYTHON_USEDEP}]
- dev-python/xxhash[${PYTHON_USEDEP}]
- dev-python/zstandard[${PYTHON_USEDEP}]
- sci-libs/huggingface_hub[${PYTHON_USEDEP}]
- sci-libs/scikit-learn[${PYTHON_USEDEP}]
- ')
-"
-DEPEND="${RDEPEND}"
-BDEPEND="test? (
- $(python_gen_cond_dep '
- dev-python/pytest-datadir[${PYTHON_USEDEP}]
- dev-python/decorator[${PYTHON_USEDEP}]
- =dev-python/sqlalchemy-1*[${PYTHON_USEDEP}]
- sci-libs/jiwer[${PYTHON_USEDEP}]
- sci-libs/seqeval[${PYTHON_USEDEP}]
- ')
-)"
-
-PATCHES=( "${FILESDIR}"/${P}-tests.patch )
-
-distutils_enable_tests pytest
diff --git a/sci-libs/datasets/datasets-2.13.1.ebuild b/sci-libs/datasets/datasets-2.13.1.ebuild
deleted file mode 100644
index 60a16a43e361..000000000000
--- a/sci-libs/datasets/datasets-2.13.1.ebuild
+++ /dev/null
@@ -1,59 +0,0 @@
-# Copyright 2023 Gentoo Authors
-# Distributed under the terms of the GNU General Public License v2
-
-EAPI=8
-
-DISTUTILS_USE_PEP517=setuptools
-PYTHON_COMPAT=( python3_{9..11} )
-DISTUTILS_SINGLE_IMPL=1
-inherit distutils-r1
-
-DESCRIPTION="Access and share datasets for Audio, Computer Vision, and NLP tasks"
-HOMEPAGE="
- https://pypi.org/project/datasets/
-"
-SRC_URI="https://github.com/huggingface/${PN}/archive/refs/tags/${PV}.tar.gz
- -> ${P}.gh.tar.gz"
-IUSE="test"
-
-LICENSE="Apache-2.0"
-SLOT="0"
-KEYWORDS="~amd64"
-
-RDEPEND="
- ${PYTHON_DEPS}
- sci-libs/pytorch[${PYTHON_SINGLE_USEDEP}]
- $(python_gen_cond_dep '
- dev-python/absl-py[${PYTHON_USEDEP}]
- dev-python/aiohttp[${PYTHON_USEDEP}]
- dev-python/fsspec[${PYTHON_USEDEP}]
- dev-python/multiprocess[${PYTHON_USEDEP}]
- dev-python/pandas[${PYTHON_USEDEP}]
- dev-python/pyarrow[${PYTHON_USEDEP},parquet,snappy]
- dev-python/tqdm[${PYTHON_USEDEP}]
- dev-python/xxhash[${PYTHON_USEDEP}]
- dev-python/zstandard[${PYTHON_USEDEP}]
- sci-libs/huggingface_hub[${PYTHON_USEDEP}]
- sci-libs/scikit-learn[${PYTHON_USEDEP}]
- ')
-"
-DEPEND="${RDEPEND}"
-BDEPEND="test? (
- $(python_gen_cond_dep '
- dev-python/pytest-datadir[${PYTHON_USEDEP}]
- dev-python/decorator[${PYTHON_USEDEP}]
- =dev-python/sqlalchemy-1*[${PYTHON_USEDEP}]
- sci-libs/jiwer[${PYTHON_USEDEP}]
- sci-libs/seqeval[${PYTHON_USEDEP}]
- ')
-)"
-
-PATCHES=( "${FILESDIR}"/${PN}-2.12.0-tests.patch )
-
-distutils_enable_tests pytest
-
-src_prepare() {
- distutils-r1_src_prepare
- rm tests/packaged_modules/test_spark.py || die
- rm tests/test_upstream_hub.py || die
-}
diff --git a/sci-libs/datasets/datasets-2.14.4.ebuild b/sci-libs/datasets/datasets-2.14.7.ebuild
similarity index 76%
rename from sci-libs/datasets/datasets-2.14.4.ebuild
rename to sci-libs/datasets/datasets-2.14.7.ebuild
index 08ed796e9c2d..0fab7cd550c4 100644
--- a/sci-libs/datasets/datasets-2.14.4.ebuild
+++ b/sci-libs/datasets/datasets-2.14.7.ebuild
@@ -20,26 +20,30 @@ LICENSE="Apache-2.0"
SLOT="0"
KEYWORDS="~amd64"
+# For pin on fsspec see https://github.com/huggingface/datasets/issues/6333
RDEPEND="
${PYTHON_DEPS}
sci-libs/pytorch[${PYTHON_SINGLE_USEDEP}]
$(python_gen_cond_dep '
dev-python/absl-py[${PYTHON_USEDEP}]
dev-python/aiohttp[${PYTHON_USEDEP}]
- dev-python/fsspec[${PYTHON_USEDEP}]
+ <=dev-python/fsspec-2023.10.0[${PYTHON_USEDEP}]
dev-python/multiprocess[${PYTHON_USEDEP}]
+ dev-python/packaging[${PYTHON_USEDEP}]
dev-python/pandas[${PYTHON_USEDEP}]
dev-python/pyarrow[${PYTHON_USEDEP},parquet,snappy]
+ dev-python/pyyaml[${PYTHON_USEDEP}]
dev-python/tqdm[${PYTHON_USEDEP}]
dev-python/xxhash[${PYTHON_USEDEP}]
dev-python/zstandard[${PYTHON_USEDEP}]
- sci-libs/huggingface_hub[${PYTHON_USEDEP}]
+ >=sci-libs/huggingface_hub-0.14.0[${PYTHON_USEDEP}]
sci-libs/scikit-learn[${PYTHON_USEDEP}]
')
"
DEPEND="${RDEPEND}"
BDEPEND="test? (
$(python_gen_cond_dep '
+ dev-python/absl-py[${PYTHON_USEDEP}]
dev-python/pytest-datadir[${PYTHON_USEDEP}]
dev-python/decorator[${PYTHON_USEDEP}]
=dev-python/sqlalchemy-1*[${PYTHON_USEDEP}]
@@ -48,7 +52,7 @@ BDEPEND="test? (
')
)"
-PATCHES=( "${FILESDIR}"/${P}-tests.patch )
+PATCHES=( "${FILESDIR}"/${PN}-2.14.4-tests.patch )
distutils_enable_tests pytest
@@ -56,4 +60,7 @@ src_prepare() {
distutils-r1_src_prepare
rm tests/packaged_modules/test_spark.py || die
rm tests/test_upstream_hub.py || die
+ sed -i -e \
+ "/pyarrow_hotfix/d" \
+ src/datasets/features/features.py || die
}
diff --git a/sci-libs/datasets/files/datasets-2.11.0-tests.patch b/sci-libs/datasets/files/datasets-2.11.0-tests.patch
deleted file mode 100644
index e105c01bc63b..000000000000
--- a/sci-libs/datasets/files/datasets-2.11.0-tests.patch
+++ /dev/null
@@ -1,242 +0,0 @@
---- a/tests/test_metric_common.py 2023-05-04 18:48:48.550861318 +0200
-+++ b/tests/test_metric_common.py 2023-05-04 18:50:25.787364577 +0200
-@@ -93,6 +93,7 @@
- INTENSIVE_CALLS_PATCHER = {}
- metric_name = None
-
-+ @pytest.mark.skip(reason="disabling, depends on bert_score, bleurt, math_equivalence, coval, nltk, faiss, mauve, rouge_score, sacrebleu, sacremoses ...")
- def test_load_metric(self, metric_name):
- doctest.ELLIPSIS_MARKER = "[...]"
- metric_module = importlib.import_module(
---- a/tests/test_hf_gcp.py 2023-05-04 19:33:31.150825303 +0200
-+++ b/tests/test_hf_gcp.py 2023-05-04 19:40:08.401759538 +0200
-@@ -69,6 +69,7 @@
- self.assertTrue(os.path.exists(datset_info_path))
-
-
-+@pytest.mark.skip(reason="require apache_beam")
- @pytest.mark.integration
- def test_wikipedia_frr(tmp_path_factory):
- tmp_dir = tmp_path_factory.mktemp("test_hf_gcp") / "test_wikipedia_simple"
---- a/tests/test_distributed.py 2023-05-04 19:43:09.861275030 +0200
-+++ b/tests/test_distributed.py 2023-05-04 19:44:17.608326722 +0200
-@@ -55,6 +55,7 @@
- assert len({tuple(x.values()) for ds in datasets_per_rank for x in ds}) == full_size
-
-
-+@pytest.mark.skip(reason="require distributed torch")
- @pytest.mark.parametrize("streaming", [False, True])
- @require_torch
- @pytest.mark.skipif(os.name == "nt", reason="execute_subprocess_async doesn't support windows")
-@@ -76,6 +77,7 @@
- execute_subprocess_async(cmd, env=os.environ.copy())
-
-
-+@pytest.mark.skip(reason="require distributed torch")
- @pytest.mark.parametrize(
- "nproc_per_node, num_workers",
- [
---- a/tests/utils.py 2023-05-06 08:43:16.251987543 +0200
-+++ b/tests/utils.py 2023-05-06 08:44:24.467952870 +0200
-@@ -54,8 +54,8 @@
- # Audio
- require_sndfile = pytest.mark.skipif(
- # On Windows and OS X, soundfile installs sndfile
-- find_spec("soundfile") is None or version.parse(importlib_metadata.version("soundfile")) < version.parse("0.12.0"),
-- reason="test requires sndfile>=0.12.1: 'pip install \"soundfile>=0.12.1\"'; ",
-+ True,
-+ reason="test requires librosa",
- )
-
- # Beam
---- a/tests/features/test_audio.py 2023-05-06 09:03:58.680108142 +0200
-+++ a/tests/features/test_audio.py 2023-05-06 09:05:50.463407967 +0200
-@@ -57,6 +57,7 @@
- assert features.arrow_schema == pa.schema({"sequence_of_audios": pa.list_(Audio().pa_type)})
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_example",
- [
-@@ -81,6 +82,7 @@
- assert decoded_example.keys() == {"path", "array", "sampling_rate"}
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_example",
- [
-@@ -148,6 +149,7 @@
- assert decoded_example["sampling_rate"] == 48000
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize("sampling_rate", [16_000, 48_000])
- def test_audio_decode_example_pcm(shared_datadir, sampling_rate):
- audio_path = str(shared_datadir / "test_audio_16000.pcm")
-@@ -414,6 +417,7 @@
- assert column[0]["sampling_rate"] == 16000
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_data",
- [
-@@ -438,6 +442,7 @@
- assert item["audio"].keys() == {"path", "array", "sampling_rate"}
-
-
-+@pytest.mark.skip(reason="require librosa")
- def test_dataset_concatenate_audio_features(shared_datadir):
- # we use a different data structure between 1 and 2 to make sure they are compatible with each other
- audio_path = str(shared_datadir / "test_audio_44100.wav")
-@@ -451,6 +456,7 @@
- assert concatenated_dataset[1]["audio"]["array"].shape == dset2[0]["audio"]["array"].shape
-
-
-+@pytest.mark.skip(reason="require librosa")
- def test_dataset_concatenate_nested_audio_features(shared_datadir):
- # we use a different data structure between 1 and 2 to make sure they are compatible with each other
- audio_path = str(shared_datadir / "test_audio_44100.wav")
-@@ -610,6 +616,7 @@
- assert isinstance(ds, Dataset)
-
-
-+@require_sndfile
- def test_dataset_with_audio_feature_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
-@@ -627,6 +634,7 @@
- assert column[0] == {"path": audio_path, "bytes": None}
-
-
-+@require_sndfile
- def test_formatted_dataset_with_audio_feature_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
-@@ -658,6 +666,7 @@
- assert column[0] == {"path": audio_path, "bytes": None}
-
-
-+@require_sndfile
- def test_dataset_with_audio_feature_map_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
---- a/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:00:39.560876163 +0200
-+++ b/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:01:26.005212423 +0200
-@@ -1,10 +1,8 @@
- import shutil
- import textwrap
-
--import librosa
- import numpy as np
- import pytest
--import soundfile as sf
-
- from datasets import Audio, ClassLabel, Features, Value
- from datasets.data_files import DataFilesDict, get_data_patterns_locally
-@@ -192,8 +190,11 @@
- return data_files_with_two_splits_and_metadata
-
-
-+@pytest.mark.skip(reason="require soundfile")
- @pytest.fixture
- def data_files_with_zip_archives(tmp_path, audio_file):
-+ import soundfile as sf
-+ import librosa
- data_dir = tmp_path / "audiofolder_data_dir_with_zip_archives"
- data_dir.mkdir(parents=True, exist_ok=True)
- archive_dir = data_dir / "archive"
---- a/tests/test_arrow_dataset.py 2023-05-06 15:36:11.080459079 +0200
-+++ b/tests/test_arrow_dataset.py 2023-05-06 15:38:07.452828528 +0200
-@@ -3928,6 +3928,7 @@
- )
- self.assertDictEqual(features_after_cast, dset.features)
-
-+ @pytest.mark.skip(reason="require soundfile")
- def test_task_automatic_speech_recognition(self):
- # Include a dummy extra column `dummy` to test we drop it correctly
- features_before_cast = Features(
---- a/tests/test_streaming_download_manager.py 2023-05-15 23:06:59.146379973 +0200
-+++ b/tests/test_streaming_download_manager.py 2023-05-15 23:11:32.441363757 +0200
-@@ -217,6 +217,7 @@
- assert output_path == _readd_double_slash_removed_by_path(Path(expected_path).as_posix())
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, exists",
- [
-@@ -299,6 +300,7 @@
- assert list(f) == TEST_URL_CONTENT.splitlines(keepends=True)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_paths",
- [
-@@ -328,6 +330,7 @@
- xlistdir(root_url, use_auth_token=hf_token)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, isdir",
- [
-@@ -355,6 +358,7 @@
- xisdir(root_url, use_auth_token=hf_token)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, isfile",
- [
-@@ -378,6 +382,7 @@
- assert xisfile(root_url + "qwertyuiop", use_auth_token=hf_token) is False
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, size",
- [
-@@ -402,6 +407,7 @@
- xgetsize(root_url + "qwertyuiop", use_auth_token=hf_token)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_paths",
- [
-@@ -444,6 +450,7 @@
- assert len(xglob("zip://qwertyuiop/*::" + root_url, use_auth_token=hf_token)) == 0
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_outputs",
- [
-@@ -533,6 +540,7 @@
- def test_xpath_as_posix(self, input_path, expected_path):
- assert xPath(input_path).as_posix() == expected_path
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, exists",
- [
-@@ -548,6 +556,7 @@
- (tmp_path / "file.txt").touch()
- assert xexists(input_path) is exists
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, pattern, expected_paths",
- [
-@@ -586,6 +595,7 @@
- output_paths = sorted(xPath(input_path).glob(pattern))
- assert output_paths == expected_paths
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, pattern, expected_paths",
- [
diff --git a/sci-libs/datasets/files/datasets-2.12.0-tests.patch b/sci-libs/datasets/files/datasets-2.12.0-tests.patch
deleted file mode 100644
index 6be3156bb70d..000000000000
--- a/sci-libs/datasets/files/datasets-2.12.0-tests.patch
+++ /dev/null
@@ -1,242 +0,0 @@
---- a/tests/test_metric_common.py 2023-05-04 18:48:48.550861318 +0200
-+++ b/tests/test_metric_common.py 2023-05-04 18:50:25.787364577 +0200
-@@ -93,6 +93,7 @@
- INTENSIVE_CALLS_PATCHER = {}
- metric_name = None
-
-+ @pytest.mark.skip(reason="disabling, depends on bert_score, bleurt, math_equivalence, coval, nltk, faiss, mauve, rouge_score, sacrebleu, sacremoses ...")
- @pytest.mark.filterwarnings("ignore:metric_module_factory is deprecated:FutureWarning")
- @pytest.mark.filterwarnings("ignore:load_metric is deprecated:FutureWarning")
- def test_load_metric(self, metric_name):
---- a/tests/test_hf_gcp.py 2023-05-04 19:33:31.150825303 +0200
-+++ b/tests/test_hf_gcp.py 2023-05-04 19:40:08.401759538 +0200
-@@ -75,6 +75,7 @@
- self.assertTrue(os.path.exists(datset_info_path))
-
-
-+@pytest.mark.skip(reason="require apache_beam")
- @pytest.mark.integration
- def test_as_dataset_from_hf_gcs(tmp_path_factory):
- tmp_dir = tmp_path_factory.mktemp("test_hf_gcp") / "test_wikipedia_simple"
---- a/tests/test_distributed.py 2023-05-04 19:43:09.861275030 +0200
-+++ b/tests/test_distributed.py 2023-05-04 19:44:17.608326722 +0200
-@@ -74,6 +74,7 @@
- split_dataset_by_node(full_ds.shuffle(), rank=0, world_size=world_size)
-
-
-+@pytest.mark.skip(reason="require distributed torch")
- @pytest.mark.parametrize("streaming", [False, True])
- @require_torch
- @pytest.mark.skipif(os.name == "nt", reason="execute_subprocess_async doesn't support windows")
-@@ -95,6 +96,7 @@
- execute_subprocess_async(cmd, env=os.environ.copy())
-
-
-+@pytest.mark.skip(reason="require distributed torch")
- @pytest.mark.parametrize(
- "nproc_per_node, num_workers",
- [
---- a/tests/utils.py 2023-05-06 08:43:16.251987543 +0200
-+++ b/tests/utils.py 2023-05-06 08:44:24.467952870 +0200
-@@ -55,8 +55,8 @@
- # Audio
- require_sndfile = pytest.mark.skipif(
- # On Windows and OS X, soundfile installs sndfile
-- find_spec("soundfile") is None or version.parse(importlib_metadata.version("soundfile")) < version.parse("0.12.0"),
-- reason="test requires sndfile>=0.12.1: 'pip install \"soundfile>=0.12.1\"'; ",
-+ True,
-+ reason="test requires librosa",
- )
-
- # Beam
---- a/tests/features/test_audio.py 2023-05-06 09:03:58.680108142 +0200
-+++ a/tests/features/test_audio.py 2023-05-06 09:05:50.463407967 +0200
-@@ -57,6 +57,7 @@
- assert features.arrow_schema == pa.schema({"sequence_of_audios": pa.list_(Audio().pa_type)})
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_example",
- [
-@@ -81,6 +82,7 @@
- assert decoded_example.keys() == {"path", "array", "sampling_rate"}
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_example",
- [
-@@ -148,6 +149,7 @@
- assert decoded_example["sampling_rate"] == 48000
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize("sampling_rate", [16_000, 48_000])
- def test_audio_decode_example_pcm(shared_datadir, sampling_rate):
- audio_path = str(shared_datadir / "test_audio_16000.pcm")
-@@ -414,6 +417,7 @@
- assert column[0]["sampling_rate"] == 16000
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_data",
- [
-@@ -438,6 +442,7 @@
- assert item["audio"].keys() == {"path", "array", "sampling_rate"}
-
-
-+@pytest.mark.skip(reason="require librosa")
- def test_dataset_concatenate_audio_features(shared_datadir):
- # we use a different data structure between 1 and 2 to make sure they are compatible with each other
- audio_path = str(shared_datadir / "test_audio_44100.wav")
-@@ -451,6 +456,7 @@
- assert concatenated_dataset[1]["audio"]["array"].shape == dset2[0]["audio"]["array"].shape
-
-
-+@pytest.mark.skip(reason="require librosa")
- def test_dataset_concatenate_nested_audio_features(shared_datadir):
- # we use a different data structure between 1 and 2 to make sure they are compatible with each other
- audio_path = str(shared_datadir / "test_audio_44100.wav")
-@@ -610,6 +616,7 @@
- assert isinstance(ds, Dataset)
-
-
-+@require_sndfile
- def test_dataset_with_audio_feature_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
-@@ -627,6 +634,7 @@
- assert column[0] == {"path": audio_path, "bytes": None}
-
-
-+@require_sndfile
- def test_formatted_dataset_with_audio_feature_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
-@@ -658,6 +666,7 @@
- assert column[0] == {"path": audio_path, "bytes": None}
-
-
-+@require_sndfile
- def test_dataset_with_audio_feature_map_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
---- a/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:00:39.560876163 +0200
-+++ b/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:01:26.005212423 +0200
-@@ -1,10 +1,8 @@
- import shutil
- import textwrap
-
--import librosa
- import numpy as np
- import pytest
--import soundfile as sf
-
- from datasets import Audio, ClassLabel, Features, Value
- from datasets.data_files import DataFilesDict, get_data_patterns_locally
-@@ -192,8 +190,11 @@
- return data_files_with_two_splits_and_metadata
-
-
-+@pytest.mark.skip(reason="require soundfile")
- @pytest.fixture
- def data_files_with_zip_archives(tmp_path, audio_file):
-+ import soundfile as sf
-+ import librosa
- data_dir = tmp_path / "audiofolder_data_dir_with_zip_archives"
- data_dir.mkdir(parents=True, exist_ok=True)
- archive_dir = data_dir / "archive"
---- a/tests/test_arrow_dataset.py 2023-05-06 15:36:11.080459079 +0200
-+++ b/tests/test_arrow_dataset.py 2023-05-06 15:38:07.452828528 +0200
-@@ -3983,6 +3983,7 @@
- )
- self.assertDictEqual(features_after_cast, dset.features)
-
-+ @pytest.mark.skip(reason="require soundfile")
- def test_task_automatic_speech_recognition(self):
- # Include a dummy extra column `dummy` to test we drop it correctly
- features_before_cast = Features(
---- a/tests/test_streaming_download_manager.py 2023-05-15 23:06:59.146379973 +0200
-+++ b/tests/test_streaming_download_manager.py 2023-05-15 23:11:32.441363757 +0200
-@@ -217,6 +217,7 @@
- assert output_path == _readd_double_slash_removed_by_path(Path(expected_path).as_posix())
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, exists",
- [
-@@ -299,6 +300,7 @@
- assert list(f) == TEST_URL_CONTENT.splitlines(keepends=True)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_paths",
- [
-@@ -328,6 +330,7 @@
- xlistdir(root_url, use_auth_token=hf_token)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, isdir",
- [
-@@ -355,6 +358,7 @@
- xisdir(root_url, use_auth_token=hf_token)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, isfile",
- [
-@@ -378,6 +382,7 @@
- assert xisfile(root_url + "qwertyuiop", use_auth_token=hf_token) is False
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, size",
- [
-@@ -402,6 +407,7 @@
- xgetsize(root_url + "qwertyuiop", use_auth_token=hf_token)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_paths",
- [
-@@ -444,6 +450,7 @@
- assert len(xglob("zip://qwertyuiop/*::" + root_url, use_auth_token=hf_token)) == 0
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_outputs",
- [
-@@ -533,6 +540,7 @@
- def test_xpath_as_posix(self, input_path, expected_path):
- assert xPath(input_path).as_posix() == expected_path
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, exists",
- [
-@@ -548,6 +556,7 @@
- (tmp_path / "file.txt").touch()
- assert xexists(input_path) is exists
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, pattern, expected_paths",
- [
-@@ -586,6 +595,7 @@
- output_paths = sorted(xPath(input_path).glob(pattern))
- assert output_paths == expected_paths
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, pattern, expected_paths",
- [
^ permalink raw reply related [flat|nested] 5+ messages in thread
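The patch above disables tests by adding `@pytest.mark.skip(reason=...)` decorators. For readers unfamiliar with the mechanism, the stdlib `unittest` analogue below sketches what a skip does: the test is still collected and reported, but its body never runs (class and test names here are hypothetical, not from the datasets test suite):

```python
import io
import unittest

class AudioFeatureTests(unittest.TestCase):
    # Analogous to @pytest.mark.skip(reason="require librosa") in the patch.
    @unittest.skip("requires librosa, not available in the sandbox")
    def test_decode_audio(self):
        raise AssertionError("this body never executes")

    def test_metadata_only(self):
        self.assertEqual({"path", "bytes"}, {"path", "bytes"})

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AudioFeatureTests)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
# Both tests are collected; one is recorded as skipped, none fail.
print(result.testsRun, len(result.skipped), result.wasSuccessful())  # → 2 1 True
```

This is why skipping is preferred over deleting tests in the patch: the suite's structure stays intact and the skip reason is visible in the test report.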
* [gentoo-commits] repo/gentoo:master commit in: sci-libs/datasets/files/, sci-libs/datasets/
@ 2024-02-21 21:07 Alfredo Tupone
0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2024-02-21 21:07 UTC (permalink / raw
To: gentoo-commits
commit: 7938c21e268aff71e1d091dfbce1bfba8bde8308
Author: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 21 21:07:04 2024 +0000
Commit: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Wed Feb 21 21:07:36 2024 +0000
URL: https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=7938c21e
sci-libs/datasets: drop tests that require network
Closes: https://bugs.gentoo.org/925171
Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>
sci-libs/datasets/datasets-2.16.0.ebuild | 14 ++
.../datasets/files/datasets-2.16.0-tests.patch | 160 +++++++++++++--------
2 files changed, 116 insertions(+), 58 deletions(-)
diff --git a/sci-libs/datasets/datasets-2.16.0.ebuild b/sci-libs/datasets/datasets-2.16.0.ebuild
index 0325b5ae63d6..a34fcaa2f89c 100644
--- a/sci-libs/datasets/datasets-2.16.0.ebuild
+++ b/sci-libs/datasets/datasets-2.16.0.ebuild
@@ -66,4 +66,18 @@ src_prepare() {
sed -i -e \
"/pyarrow_hotfix/d" \
src/datasets/features/features.py || die
+ sed -i \
+ -e "s:pytest.mark.integration:pytest.mark.skip():g" \
+ tests/test_arrow_dataset.py \
+ tests/test_fingerprint.py \
+ tests/test_hf_gcp.py \
+ tests/test_inspect.py \
+ tests/test_iterable_dataset.py \
+ tests/test_iterable_dataset.py \
+ tests/test_load.py \
+ tests/test_offline_util.py \
+ tests/test_streaming_download_manager.py \
+ tests/commands/test_test.py \
+ tests/packaged_modules/test_cache.py \
+ die
}
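The `sed` call added above rewrites every `pytest.mark.integration` marker in the listed test files into `pytest.mark.skip()`, turning network-dependent integration tests into unconditional skips. A minimal reproduction of that substitution on a hypothetical sample file:

```shell
# Hypothetical test file carrying an integration mark.
cat > sample_test.py <<'EOF'
import pytest

@pytest.mark.integration
def test_needs_network():
    pass
EOF

# The same substitution the ebuild applies in src_prepare.
sed -i -e "s:pytest.mark.integration:pytest.mark.skip():g" sample_test.py

# The mark is now an unconditional skip.
grep "pytest.mark" sample_test.py   # → @pytest.mark.skip()
```

Because the decorator syntax is identical apart from the call parentheses, the textual substitution yields valid Python without any per-file patching.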
diff --git a/sci-libs/datasets/files/datasets-2.16.0-tests.patch b/sci-libs/datasets/files/datasets-2.16.0-tests.patch
index 6b2845bce168..8cb89e824b3b 100644
--- a/sci-libs/datasets/files/datasets-2.16.0-tests.patch
+++ b/sci-libs/datasets/files/datasets-2.16.0-tests.patch
@@ -10,51 +10,72 @@
],
--- a/tests/test_load.py 2024-02-20 22:12:13.699209107 +0100
+++ b/tests/test_load.py 2024-02-20 22:13:10.862626708 +0100
-@@ -386,21 +386,6 @@
+@@ -386,6 +386,7 @@
hf_modules_cache=self.hf_modules_cache,
)
-- def test_HubDatasetModuleFactoryWithScript_dont_trust_remote_code(self):
-- # "squad" has a dataset script
-- factory = HubDatasetModuleFactoryWithScript(
-- "squad", download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path
-- )
-- with patch.object(config, "HF_DATASETS_TRUST_REMOTE_CODE", None): # this will be the default soon
-- self.assertRaises(ValueError, factory.get_module)
-- factory = HubDatasetModuleFactoryWithScript(
-- "squad",
-- download_config=self.download_config,
-- dynamic_modules_path=self.dynamic_modules_path,
-- trust_remote_code=False,
-- )
-- self.assertRaises(ValueError, factory.get_module)
--
++ @pytest.mark.skip(reason="")
+ def test_HubDatasetModuleFactoryWithScript_dont_trust_remote_code(self):
+ # "squad" has a dataset script
+ factory = HubDatasetModuleFactoryWithScript(
+@@ -402,6 +402,7 @@
+ )
+ self.assertRaises(ValueError, factory.get_module)
+
++ @pytest.mark.skip()
def test_HubDatasetModuleFactoryWithScript_with_github_dataset(self):
# "wmt_t2t" has additional imports (internal)
factory = HubDatasetModuleFactoryWithScript(
-@@ -1235,12 +1235,6 @@
-
-
- @pytest.mark.integration
--def test_load_streaming_private_dataset_with_zipped_data(hf_token, hf_private_dataset_repo_zipped_txt_data):
-- ds = load_dataset(hf_private_dataset_repo_zipped_txt_data, streaming=True, token=hf_token)
-- assert next(iter(ds)) is not None
--
--
--@pytest.mark.integration
- def test_load_dataset_config_kwargs_passed_as_arguments():
- ds_default = load_dataset(SAMPLE_DATASET_IDENTIFIER4)
- ds_custom = load_dataset(SAMPLE_DATASET_IDENTIFIER4, drop_metadata=True)
+@@ -411,6 +412,7 @@
+ assert importlib.import_module(module_factory_result.module_path) is not None
+ assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT)
+
++ @pytest.mark.skip()
+ def test_GithubMetricModuleFactory_with_internal_import(self):
+ # "squad_v2" requires additional imports (internal)
+ factory = GithubMetricModuleFactory(
+@@ -419,6 +421,7 @@
+ module_factory_result = factory.get_module()
+ assert importlib.import_module(module_factory_result.module_path) is not None
+
++ @pytest.mark.skip()
+ @pytest.mark.filterwarnings("ignore:GithubMetricModuleFactory is deprecated:FutureWarning")
+ def test_GithubMetricModuleFactory_with_external_import(self):
+ # "bleu" requires additional imports (external from github)
+@@ -1032,6 +1035,7 @@
+ datasets.load_dataset_builder(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "non-existing-config")
+
+
++@pytest.mark.skip()
+ @pytest.mark.parametrize("serializer", [pickle, dill])
+ def test_load_dataset_builder_with_metadata_configs_pickable(serializer):
+ builder = datasets.load_dataset_builder(SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA)
+@@ -1153,6 +1157,7 @@
+ assert len(builder.config.data_files["test"]) > 0
+
+
++@pytest.mark.skip()
+ def test_load_dataset_builder_fail():
+ with pytest.raises(DatasetNotFoundError):
+ datasets.load_dataset_builder("blabla")
+@@ -1168,6 +1173,7 @@
+ assert isinstance(next(iter(dataset["train"])), dict)
+
+
++@pytest.mark.skip()
+ def test_load_dataset_cached_local_script(dataset_loading_script_dir, data_dir, caplog):
+ dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir)
+ assert isinstance(dataset, DatasetDict)
--- a/tests/test_hf_gcp.py 2024-02-21 09:59:26.918397895 +0100
+++ b/tests/test_hf_gcp.py 2024-02-21 09:59:46.335100597 +0100
-@@ -21,7 +21,6 @@
- {"dataset": "wikipedia", "config_name": "20220301.frr"},
- {"dataset": "wikipedia", "config_name": "20220301.it"},
- {"dataset": "wikipedia", "config_name": "20220301.simple"},
-- {"dataset": "eli5", "config_name": "LFQA_reddit"},
- {"dataset": "wiki40b", "config_name": "en"},
- {"dataset": "wiki_dpr", "config_name": "psgs_w100.nq.compressed"},
- {"dataset": "wiki_dpr", "config_name": "psgs_w100.nq.no_index"},
+@@ -47,6 +47,7 @@
+ ]
+
+
++@pytest.mark.skip("network")
+ @parameterized.named_parameters(list_datasets_on_hf_gcp_parameters(with_config=True))
+ class TestDatasetOnHfGcp(TestCase):
+ dataset = None
--- a/tests/test_inspect.py 2024-02-21 10:03:32.315520016 +0100
+++ b/tests/test_inspect.py 2024-02-21 10:03:50.345553490 +0100
@@ -18,7 +18,7 @@
@@ -66,24 +87,47 @@
def test_inspect_dataset(path, tmp_path):
inspect_dataset(path, tmp_path)
script_name = Path(path).stem + ".py"
---- a/tests/packaged_modules/test_cache.py 2024-02-21 12:04:18.036866572 +0100
-+++ b/tests/packaged_modules/test_cache.py 2024-02-21 12:04:54.333558520 +0100
-@@ -44,18 +44,3 @@
- Cache(dataset_name=text_dir.name, hash="missing").download_and_prepare()
- with pytest.raises(ValueError):
- Cache(dataset_name=text_dir.name, config_name="missing", version="auto", hash="auto").download_and_prepare()
--
--
--@pytest.mark.integration
--def test_cache_multi_configs():
-- repo_id = SAMPLE_DATASET_TWO_CONFIG_IN_METADATA
-- dataset_name = repo_id.split("/")[-1]
-- config_name = "v1"
-- ds = load_dataset(repo_id, config_name)
-- cache = Cache(dataset_name=dataset_name, repo_id=repo_id, config_name=config_name, version="auto", hash="auto")
-- reloaded = cache.as_dataset()
-- assert list(ds) == list(reloaded)
-- assert len(ds["train"]) == len(reloaded["train"])
-- with pytest.raises(ValueError) as excinfo:
-- Cache(dataset_name=dataset_name, repo_id=repo_id, config_name="missing", version="auto", hash="auto")
-- assert config_name in str(excinfo.value)
+@@ -49,6 +49,7 @@
+ assert list(info.splits.keys()) == expected_splits
+
+
++@pytest.mark.skip(reason="require network")
+ def test_get_dataset_config_info_private(hf_token, hf_private_dataset_repo_txt_data):
+ info = get_dataset_config_info(hf_private_dataset_repo_txt_data, config_name="default", token=hf_token)
+ assert list(info.splits.keys()) == ["train"]
+--- a/tests/test_data_files.py 2024-02-21 20:22:57.536160356 +0100
++++ b/tests/test_data_files.py 2024-02-21 20:25:00.153052174 +0100
+@@ -378,6 +378,7 @@
+ assert len(hub_dataset_repo_patterns_results[pattern]) == 0
+
+
++@pytest.mark.skip(reason="network")
+ def test_DataFilesList_from_patterns_locally_with_extra_files(complex_data_dir, text_file):
+ data_files_list = DataFilesList.from_patterns([_TEST_URL, text_file.as_posix()], complex_data_dir)
+ assert list(data_files_list) == [_TEST_URL, text_file.as_posix()]
+@@ -467,6 +468,7 @@
+ assert Hasher.hash(data_files1) != Hasher.hash(data_files2)
+
+
++@pytest.mark.skip(reason="network")
+ def test_DataFilesDict_from_patterns_locally_or_remote_hashing(text_file):
+ patterns = {"train": [_TEST_URL], "test": [str(text_file)]}
+ data_files1 = DataFilesDict.from_patterns(patterns)
+--- a/tests/packaged_modules/test_folder_based_builder.py 2024-02-21 21:30:20.718922523 +0100
++++ b/tests/packaged_modules/test_folder_based_builder.py 2024-02-21 21:31:46.309061287 +0100
+@@ -382,6 +382,7 @@
+ assert example[column] is not None
+
+
++@pytest.mark.skip(reason="network")
+ @pytest.mark.parametrize("remote", [True, False])
+ @pytest.mark.parametrize("drop_labels", [None, True, False])
+ def test_data_files_with_different_levels_no_metadata(
+@@ -405,6 +406,7 @@
+ assert all(example.keys() == {"base", "label"} for _, example in generator)
+
+
++@pytest.mark.skip(reason="network")
+ @pytest.mark.parametrize("remote", [False, True])
+ @pytest.mark.parametrize("drop_labels", [None, True, False])
+ def test_data_files_with_one_label_no_metadata(data_files_with_one_label_no_metadata, drop_labels, remote, cache_dir):
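Patching `@pytest.mark.skip` markers into the sources is one approach; the newer ebuilds in this thread instead collect failing test node IDs in an `EPYTEST_DESELECT` array, which the eclass's `epytest` wrapper passes to pytest as repeated `--deselect` options. A rough sketch of that expansion, using two node IDs from the ebuild's own deselect list (the real eclass handles more, e.g. `EPYTEST_IGNORE` and plugin flags):

```shell
# Hedged sketch: expand an EPYTEST_DESELECT-style bash array into
# the --deselect arguments that pytest understands.
EPYTEST_DESELECT=(
    "tests/test_distributed.py::test_torch_distributed_run"
    "tests/test_fingerprint.py::HashingTest::test_hash_torch_tensor"
)

args=()
for node_id in "${EPYTEST_DESELECT[@]}"; do
    args+=( --deselect "${node_id}" )
done

# epytest would then effectively invoke: pytest "${args[@]}" ...
echo "${args[@]}"
```

Deselection happens at collection time, so unlike the sed approach it needs no source modification and survives version bumps of the test files.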
* [gentoo-commits] repo/gentoo:master commit in: sci-libs/datasets/files/, sci-libs/datasets/
@ 2024-10-28 20:29 Alfredo Tupone
0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2024-10-28 20:29 UTC (permalink / raw
To: gentoo-commits
commit: e5bad0118b14c47e007bad172ecaf96a72fd9131
Author: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 28 20:27:02 2024 +0000
Commit: Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Mon Oct 28 20:28:58 2024 +0000
URL: https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=e5bad011
sci-libs/datasets: bump to 2.19.2, drop old
Closes: https://bugs.gentoo.org/933415
Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>
sci-libs/datasets/Manifest | 2 +-
sci-libs/datasets/datasets-2.18.0-r1.ebuild | 86 -----
sci-libs/datasets/datasets-2.19.2.ebuild | 194 +++++++++++
.../datasets/files/datasets-2.17.1-tests.patch | 364 ---------------------
.../datasets/files/datasets-2.19.2-tests.patch | 23 ++
5 files changed, 218 insertions(+), 451 deletions(-)
diff --git a/sci-libs/datasets/Manifest b/sci-libs/datasets/Manifest
index 8499b8b1ec10..91af45b2a788 100644
--- a/sci-libs/datasets/Manifest
+++ b/sci-libs/datasets/Manifest
@@ -1 +1 @@
-DIST datasets-2.18.0.gh.tar.gz 2169179 BLAKE2B 8a0daa0e8995b3fa5480d8aa892a26d1b6ba90f252ef7c7ca62f4afc2efa61a8ed2efbf48a40381f07178b826bde62af0f0cb8cbf80d470d5d4dfb1ba25f6cb8 SHA512 b1fb0e6636417683fa79679286505b921a7ba00b1cabd6a23e60d5804eb098527c5283058799a0776a7f1e93972fdbb948882f153a10557bcc6b6b22ab861292
+DIST datasets-2.19.2.gh.tar.gz 2176600 BLAKE2B d02d43f7db0ce9a2220b332e5e2ab4de2648fa2b693dec703ae900b42d0089be1cf79270d4b8daeda841cccde6f60c93d6b2eee15bc652e4f60e08a6f3fade82 SHA512 7593463174b7308c45e1fd50190942e94ac63ff7bd0ff54a8a09496d041f69fa43eaa7e4e7372965deaafdb7843d8a86aeb0db1a75efe0c3da37fcf064521c16
diff --git a/sci-libs/datasets/datasets-2.18.0-r1.ebuild b/sci-libs/datasets/datasets-2.18.0-r1.ebuild
deleted file mode 100644
index d16c3e2459de..000000000000
--- a/sci-libs/datasets/datasets-2.18.0-r1.ebuild
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright 2023-2024 Gentoo Authors
-# Distributed under the terms of the GNU General Public License v2
-
-EAPI=8
-
-DISTUTILS_USE_PEP517=setuptools
-PYTHON_COMPAT=( python3_{10..12} )
-DISTUTILS_SINGLE_IMPL=1
-inherit distutils-r1
-
-DESCRIPTION="Access and share datasets for Audio, Computer Vision, and NLP tasks"
-HOMEPAGE="
- https://pypi.org/project/datasets/
-"
-SRC_URI="https://github.com/huggingface/${PN}/archive/refs/tags/${PV}.tar.gz
- -> ${P}.gh.tar.gz"
-IUSE="test"
-
-LICENSE="Apache-2.0"
-SLOT="0"
-KEYWORDS="~amd64"
-
-RDEPEND="
- ${PYTHON_DEPS}
- sci-libs/pytorch[${PYTHON_SINGLE_USEDEP}]
- sci-libs/caffe2[${PYTHON_SINGLE_USEDEP},numpy]
- $(python_gen_cond_dep '
- dev-python/absl-py[${PYTHON_USEDEP}]
- dev-python/aiohttp[${PYTHON_USEDEP}]
- dev-python/dill[${PYTHON_USEDEP}]
- dev-python/filelock[${PYTHON_USEDEP}]
- dev-python/fsspec[${PYTHON_USEDEP}]
- dev-python/multiprocess[${PYTHON_USEDEP}]
- dev-python/numpy[${PYTHON_USEDEP}]
- dev-python/packaging[${PYTHON_USEDEP}]
- dev-python/pandas[${PYTHON_USEDEP}]
- dev-python/pyarrow[${PYTHON_USEDEP},parquet,snappy]
- dev-python/pyyaml[${PYTHON_USEDEP}]
- dev-python/requests[${PYTHON_USEDEP}]
- dev-python/scikit-learn[${PYTHON_USEDEP}]
- dev-python/tqdm[${PYTHON_USEDEP}]
- dev-python/xxhash[${PYTHON_USEDEP}]
- dev-python/zstandard[${PYTHON_USEDEP}]
- sci-libs/huggingface_hub[${PYTHON_USEDEP}]
- ')
-"
-DEPEND="${RDEPEND}"
-BDEPEND="test? (
- $(python_gen_cond_dep '
- dev-python/absl-py[${PYTHON_USEDEP}]
- dev-python/pytest-datadir[${PYTHON_USEDEP}]
- dev-python/decorator[${PYTHON_USEDEP}]
- dev-python/sqlalchemy[${PYTHON_USEDEP}]
- sci-libs/jiwer[${PYTHON_USEDEP}]
- sci-libs/seqeval[${PYTHON_USEDEP}]
- ')
-)"
-
-PATCHES=(
- "${FILESDIR}"/${PN}-2.17.1-tests.patch
-)
-
-distutils_enable_tests pytest
-
-src_prepare() {
- distutils-r1_src_prepare
- rm tests/packaged_modules/test_spark.py || die
- rm tests/test_upstream_hub.py || die
- sed -i -e \
- "/pyarrow_hotfix/d" \
- src/datasets/features/features.py || die
- sed -i \
- -e "s:pytest.mark.integration:pytest.mark.skip():g" \
- tests/test_arrow_dataset.py \
- tests/test_fingerprint.py \
- tests/test_hf_gcp.py \
- tests/test_inspect.py \
- tests/test_iterable_dataset.py \
- tests/test_iterable_dataset.py \
- tests/test_load.py \
- tests/test_offline_util.py \
- tests/test_streaming_download_manager.py \
- tests/commands/test_test.py \
- tests/packaged_modules/test_cache.py \
- || die
-}
diff --git a/sci-libs/datasets/datasets-2.19.2.ebuild b/sci-libs/datasets/datasets-2.19.2.ebuild
new file mode 100644
index 000000000000..73eed0dccf5f
--- /dev/null
+++ b/sci-libs/datasets/datasets-2.19.2.ebuild
@@ -0,0 +1,194 @@
+# Copyright 2023-2024 Gentoo Authors
+# Distributed under the terms of the GNU General Public License v2
+
+EAPI=8
+
+DISTUTILS_USE_PEP517=setuptools
+PYTHON_COMPAT=( python3_{10..12} )
+DISTUTILS_SINGLE_IMPL=1
+inherit distutils-r1
+
+DESCRIPTION="Access and share datasets for Audio, Computer Vision, and NLP tasks"
+HOMEPAGE="https://pypi.org/project/datasets/"
+SRC_URI="https://github.com/huggingface/${PN}/archive/refs/tags/${PV}.tar.gz
+ -> ${P}.gh.tar.gz"
+
+LICENSE="Apache-2.0"
+SLOT="0"
+KEYWORDS="~amd64"
+
+IUSE="test"
+
+RDEPEND="
+ ${PYTHON_DEPS}
+ sci-libs/pytorch[${PYTHON_SINGLE_USEDEP}]
+ sci-libs/caffe2[${PYTHON_SINGLE_USEDEP},numpy]
+ $(python_gen_cond_dep '
+ dev-python/absl-py[${PYTHON_USEDEP}]
+ dev-python/aiohttp[${PYTHON_USEDEP}]
+ dev-python/dill[${PYTHON_USEDEP}]
+ dev-python/filelock[${PYTHON_USEDEP}]
+ dev-python/fsspec[${PYTHON_USEDEP}]
+ dev-python/multiprocess[${PYTHON_USEDEP}]
+ dev-python/numpy[${PYTHON_USEDEP}]
+ dev-python/packaging[${PYTHON_USEDEP}]
+ dev-python/pandas[${PYTHON_USEDEP}]
+ dev-python/pyarrow[${PYTHON_USEDEP},parquet,snappy]
+ dev-python/pyyaml[${PYTHON_USEDEP}]
+ dev-python/requests[${PYTHON_USEDEP}]
+ dev-python/scikit-learn[${PYTHON_USEDEP}]
+ dev-python/tqdm[${PYTHON_USEDEP}]
+ dev-python/xxhash[${PYTHON_USEDEP}]
+ dev-python/zstandard[${PYTHON_USEDEP}]
+ sci-libs/huggingface_hub[${PYTHON_USEDEP}]
+ ')
+"
+DEPEND="${RDEPEND}"
+BDEPEND="test? (
+ $(python_gen_cond_dep '
+ dev-python/absl-py[${PYTHON_USEDEP}]
+ dev-python/pytest-datadir[${PYTHON_USEDEP}]
+ dev-python/decorator[${PYTHON_USEDEP}]
+ dev-python/sqlalchemy[${PYTHON_USEDEP}]
+ sci-libs/jiwer[${PYTHON_USEDEP}]
+ sci-libs/seqeval[${PYTHON_USEDEP}]
+ ')
+)"
+
+PATCHES=(
+ "${FILESDIR}"/${P}-tests.patch
+)
+
+distutils_enable_tests pytest
+
+src_prepare() {
+ distutils-r1_src_prepare
+ sed -i -e \
+ "/pyarrow_hotfix/d" \
+ src/datasets/features/features.py || die
+}
+
+src_test() {
+ local EPYTEST_IGNORE=(
+ tests/test_upstream_hub.py
+ tests/packaged_modules/test_spark.py
+ )
+
+ local EPYTEST_DESELECT=(
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_caching_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_flatten_indices_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_flatten_indices_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_map_batched_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_map_batched_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_map_caching_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_map_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_map_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_map_remove_columns_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_map_remove_columns_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_select_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_select_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_set_format_numpy_multiple_columns_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_set_format_numpy_multiple_columns_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_set_format_torch_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_set_format_torch_on_disk"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_train_test_split_in_memory"
+ "tests/test_arrow_dataset.py::BaseDatasetTest::test_train_test_split_on_disk"
+ "tests/test_arrow_dataset.py::TaskTemplatesTest::test_task_automatic_speech_recognition"
+ "tests/test_arrow_dataset.py::StratifiedTest::test_train_test_split_startify"
+ "tests/test_arrow_dataset.py::test_dataset_format_with_unformatted_image"
+ "tests/test_arrow_dataset.py::test_map_cases"
+ "tests/test_dataset_dict.py::DatasetDictTest::test_set_format_numpy"
+ "tests/test_dataset_dict.py::DatasetDictTest::test_set_format_torch"
+ "tests/test_distributed.py::test_torch_distributed_run"
+ "tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers"
+ "tests/test_file_utils.py::TestxPath::test_xpath_glob"
+ "tests/test_file_utils.py::TestxPath::test_xpath_rglob"
+ "tests/test_fingerprint.py::TokenizersHashTest::test_hash_regex"
+ "tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer"
+ "tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache"
+ "tests/test_fingerprint.py::RecurseHashTest::test_hash_ignores_line_definition_of_function"
+ "tests/test_fingerprint.py::RecurseHashTest::test_hash_ipython_function"
+ "tests/test_fingerprint.py::HashingTest::test_hash_torch_compiled_module"
+ "tests/test_fingerprint.py::HashingTest::test_hash_torch_generator"
+ "tests/test_fingerprint.py::HashingTest::test_hash_torch_tensor"
+ "tests/test_fingerprint.py::HashingTest::test_set_doesnt_depend_on_order"
+ "tests/test_fingerprint.py::HashingTest::test_set_stable"
+ "tests/test_fingerprint.py::test_move_script_doesnt_change_hash"
+ "tests/test_formatting.py::ArrowExtractorTest::test_numpy_extractor"
+ "tests/test_formatting.py::ArrowExtractorTest::test_numpy_extractor_nested"
+ "tests/test_formatting.py::ArrowExtractorTest::test_numpy_extractor_temporal"
+ "tests/test_formatting.py::FormatterTest::test_numpy_formatter"
+ "tests/test_formatting.py::FormatterTest::test_numpy_formatter_image"
+ "tests/test_formatting.py::FormatterTest::test_numpy_formatter_np_array_kwargs"
+ "tests/test_formatting.py::FormatterTest::test_torch_formatter"
+ "tests/test_formatting.py::FormatterTest::test_torch_formatter_image"
+ "tests/test_formatting.py::FormatterTest::test_torch_formatter_torch_tensor_kwargs"
+ "tests/test_formatting.py::test_torch_formatter_sets_default_dtypes"
+ "tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4]"
+ "tests/test_inspect.py::test_get_dataset_default_config_name[hf-internal-testing/librispeech_asr_dummy-None]"
+ "tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport"
+ "tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport_errors_on_wrong_sha"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_bleurt"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_chrf"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_code_eval"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_coval"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_cuad"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_glue"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_google_bleu"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_indic_glue"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_mae"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_mauve"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_mean_iou"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_meteor"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_mse"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_precision"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_roc_auc"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_rouge"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_sacrebleu"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_spearmanr"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_super_glue"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_ter"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_xnli"
+ "tests/test_metric_common.py::LocalMetricTest::test_load_metric_xtreme_s"
+ "tests/features/test_array_xd.py::ExtensionTypeCompatibilityTest::test_array2d_nonspecific_shape"
+ "tests/features/test_array_xd.py::ExtensionTypeCompatibilityTest::test_extension_indexing"
+ "tests/features/test_array_xd.py::ExtensionTypeCompatibilityTest::test_multiple_extensions_same_row"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_from_dict_2d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_from_dict_3d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_from_dict_4d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_from_dict_5d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_write_2d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_write_3d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_write_4d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_write_5d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_write_batch_2d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_write_batch_3d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_write_batch_4d"
+ "tests/features/test_array_xd.py::ArrayXDTest::test_write_batch_5d"
+ "tests/features/test_array_xd.py::test_array_xd_numpy_arrow_extractor"
+ "tests/features/test_array_xd.py::test_array_xd_with_none"
+ "tests/features/test_array_xd.py::test_dataset_map"
+ "tests/features/test_audio.py::test_audio_feature_encode_example"
+ "tests/features/test_audio.py::test_audio_feature_encode_example_pcm"
+ "tests/features/test_audio.py::test_audio_decode_example_pcm"
+ "tests/features/test_audio.py::test_dataset_cast_to_audio_features"
+ "tests/features/test_audio.py::test_dataset_concatenate_audio_features"
+ "tests/features/test_audio.py::test_dataset_concatenate_nested_audio_features"
+ "tests/features/test_audio.py::test_dataset_with_audio_feature_undecoded"
+ "tests/features/test_audio.py::test_formatted_dataset_with_audio_feature_undecoded"
+ "tests/features/test_audio.py::test_dataset_with_audio_feature_map_undecoded"
+ "tests/features/test_image.py::test_formatted_dataset_with_image_feature_map"
+ "tests/features/test_image.py::test_formatted_dataset_with_image_feature"
+ "tests/features/test_image.py::test_formatted_dataset_with_image_feature_undecoded"
+ "tests/packaged_modules/test_cache.py::test_cache_multi_configs"
+ "tests/packaged_modules/test_cache.py::test_cache_single_config"
+ )
+ distutils-r1_src_test
+}
diff --git a/sci-libs/datasets/files/datasets-2.17.1-tests.patch b/sci-libs/datasets/files/datasets-2.17.1-tests.patch
deleted file mode 100644
index 2281598dfb38..000000000000
--- a/sci-libs/datasets/files/datasets-2.17.1-tests.patch
+++ /dev/null
@@ -1,364 +0,0 @@
---- a/tests/test_arrow_dataset.py 2024-02-20 21:53:24.248470991 +0100
-+++ b/tests/test_arrow_dataset.py 2024-02-20 21:53:29.441804737 +0100
-@@ -4016,7 +4016,6 @@
- [
- "relative/path",
- "/absolute/path",
-- "s3://bucket/relative/path",
- "hdfs://relative/path",
- "hdfs:///absolute/path",
- ],
-@@ -4136,6 +4136,7 @@
- )
- self.assertDictEqual(features_after_cast, dset.features)
-
-+ @pytest.mark.skip(reason="require soundfile")
- def test_task_automatic_speech_recognition(self):
- # Include a dummy extra column `dummy` to test we drop it correctly
- features_before_cast = Features(
---- a/tests/test_load.py 2024-02-20 22:12:13.699209107 +0100
-+++ b/tests/test_load.py 2024-02-20 22:13:10.862626708 +0100
-@@ -388,6 +388,7 @@
- hf_modules_cache=self.hf_modules_cache,
- )
-
-+ @pytest.mark.skip(reason="")
- def test_HubDatasetModuleFactoryWithScript_dont_trust_remote_code(self):
- # "lhoestq/test" has a dataset script
- factory = HubDatasetModuleFactoryWithScript(
-@@ -403,6 +404,7 @@
- )
- self.assertRaises(ValueError, factory.get_module)
-
-+ @pytest.mark.skip()
- def test_HubDatasetModuleFactoryWithScript_with_github_dataset(self):
- # "wmt_t2t" has additional imports (internal)
- factory = HubDatasetModuleFactoryWithScript(
-@@ -412,6 +414,7 @@
- assert importlib.import_module(module_factory_result.module_path) is not None
- assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT)
-
-+ @pytest.mark.skip()
- def test_GithubMetricModuleFactory_with_internal_import(self):
- # "squad_v2" requires additional imports (internal)
- factory = GithubMetricModuleFactory(
-@@ -420,6 +423,7 @@
- module_factory_result = factory.get_module()
- assert importlib.import_module(module_factory_result.module_path) is not None
-
-+ @pytest.mark.skip()
- @pytest.mark.filterwarnings("ignore:GithubMetricModuleFactory is deprecated:FutureWarning")
- def test_GithubMetricModuleFactory_with_external_import(self):
- # "bleu" requires additional imports (external from github)
-@@ -1033,6 +1037,7 @@
- datasets.load_dataset_builder(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "non-existing-config")
-
-
-+@pytest.mark.skip()
- @pytest.mark.parametrize("serializer", [pickle, dill])
- def test_load_dataset_builder_with_metadata_configs_pickable(serializer):
- builder = datasets.load_dataset_builder(SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA)
-@@ -1154,6 +1159,7 @@
- assert len(builder.config.data_files["test"]) > 0
-
-
-+@pytest.mark.skip()
- def test_load_dataset_builder_fail():
- with pytest.raises(DatasetNotFoundError):
- datasets.load_dataset_builder("blabla")
-@@ -1169,6 +1175,7 @@
- assert isinstance(next(iter(dataset["train"])), dict)
-
-
-+@pytest.mark.skip()
- def test_load_dataset_cached_local_script(dataset_loading_script_dir, data_dir, caplog):
- dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir)
- assert isinstance(dataset, DatasetDict)
---- a/tests/test_hf_gcp.py 2024-02-21 09:59:26.918397895 +0100
-+++ b/tests/test_hf_gcp.py 2024-02-21 09:59:46.335100597 +0100
-@@ -45,6 +45,7 @@
- ]
-
-
-+@pytest.mark.skip("network")
- @parameterized.named_parameters(list_datasets_on_hf_gcp_parameters(with_config=True))
- class TestDatasetOnHfGcp(TestCase):
- dataset = None
---- a/tests/test_inspect.py 2024-02-21 10:03:32.315520016 +0100
-+++ b/tests/test_inspect.py 2024-02-21 10:03:50.345553490 +0100
-@@ -49,6 +49,7 @@
- assert list(info.splits.keys()) == expected_splits
-
-
-+@pytest.mark.skip(reason="require network")
- def test_get_dataset_config_info_private(hf_token, hf_private_dataset_repo_txt_data):
- info = get_dataset_config_info(hf_private_dataset_repo_txt_data, config_name="default", token=hf_token)
- assert list(info.splits.keys()) == ["train"]
---- a/tests/test_data_files.py 2024-02-21 20:22:57.536160356 +0100
-+++ b/tests/test_data_files.py 2024-02-21 20:25:00.153052174 +0100
-@@ -378,6 +378,7 @@
- assert len(hub_dataset_repo_patterns_results[pattern]) == 0
-
-
-+@pytest.mark.skip(reason="network")
- def test_DataFilesList_from_patterns_locally_with_extra_files(complex_data_dir, text_file):
- data_files_list = DataFilesList.from_patterns([_TEST_URL, text_file.as_posix()], complex_data_dir)
- assert list(data_files_list) == [_TEST_URL, text_file.as_posix()]
-@@ -467,6 +468,7 @@
- assert Hasher.hash(data_files1) != Hasher.hash(data_files2)
-
-
-+@pytest.mark.skip(reason="network")
- def test_DataFilesDict_from_patterns_locally_or_remote_hashing(text_file):
- patterns = {"train": [_TEST_URL], "test": [str(text_file)]}
- data_files1 = DataFilesDict.from_patterns(patterns)
---- a/tests/packaged_modules/test_folder_based_builder.py 2024-02-21 21:30:20.718922523 +0100
-+++ b/tests/packaged_modules/test_folder_based_builder.py 2024-02-21 21:31:46.309061287 +0100
-@@ -382,6 +382,7 @@
- assert example[column] is not None
-
-
-+@pytest.mark.skip(reason="network")
- @pytest.mark.parametrize("remote", [True, False])
- @pytest.mark.parametrize("drop_labels", [None, True, False])
- def test_data_files_with_different_levels_no_metadata(
-@@ -405,6 +406,7 @@
- assert all(example.keys() == {"base", "label"} for _, example in generator)
-
-
-+@pytest.mark.skip(reason="network")
- @pytest.mark.parametrize("remote", [False, True])
- @pytest.mark.parametrize("drop_labels", [None, True, False])
- def test_data_files_with_one_label_no_metadata(data_files_with_one_label_no_metadata, drop_labels, remote, cache_dir):
---- a/tests/test_metric_common.py 2023-05-04 18:48:48.550861318 +0200
-+++ b/tests/test_metric_common.py 2023-05-04 18:50:25.787364577 +0200
-@@ -93,6 +93,7 @@
- INTENSIVE_CALLS_PATCHER = {}
- metric_name = None
-
-+ @pytest.mark.skip(reason="disabling, depends on bert_score, bleurt, math_equivalence, coval, nltk, faiss, mauve, rouge_score, sacrebleu, sacremoses ...")
- @pytest.mark.filterwarnings("ignore:metric_module_factory is deprecated:FutureWarning")
- @pytest.mark.filterwarnings("ignore:load_metric is deprecated:FutureWarning")
- def test_load_metric(self, metric_name):
---- a/tests/test_distributed.py 2023-05-04 19:43:09.861275030 +0200
-+++ b/tests/test_distributed.py 2023-05-04 19:44:17.608326722 +0200
-@@ -74,6 +74,7 @@
- split_dataset_by_node(full_ds.shuffle(), rank=0, world_size=world_size)
-
-
-+@pytest.mark.skip(reason="require distributed torch")
- @pytest.mark.parametrize("streaming", [False, True])
- @require_torch
- @pytest.mark.skipif(os.name == "nt", reason="execute_subprocess_async doesn't support windows")
-@@ -95,6 +96,7 @@
- execute_subprocess_async(cmd, env=os.environ.copy())
-
-
-+@pytest.mark.skip(reason="require distributed torch")
- @pytest.mark.parametrize(
- "nproc_per_node, num_workers",
- [
---- a/tests/utils.py 2023-05-06 08:43:16.251987543 +0200
-+++ b/tests/utils.py 2023-05-06 08:44:24.467952870 +0200
-@@ -50,8 +50,8 @@
- # Audio
- require_sndfile = pytest.mark.skipif(
- # On Windows and OS X, soundfile installs sndfile
-- find_spec("soundfile") is None or version.parse(importlib.metadata.version("soundfile")) < version.parse("0.12.0"),
-- reason="test requires sndfile>=0.12.1: 'pip install \"soundfile>=0.12.1\"'; ",
-+ True,
-+ reason="test requires librosa",
- )
-
- # Beam
---- a/tests/features/test_audio.py 2023-05-06 09:03:58.680108142 +0200
-+++ a/tests/features/test_audio.py 2023-05-06 09:05:50.463407967 +0200
-@@ -57,6 +57,7 @@
- assert features.arrow_schema == pa.schema({"sequence_of_audios": pa.list_(Audio().pa_type)})
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_example",
- [
-@@ -81,6 +82,7 @@
- assert decoded_example.keys() == {"path", "array", "sampling_rate"}
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_example",
- [
-@@ -148,6 +149,7 @@
- assert decoded_example["sampling_rate"] == 48000
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize("sampling_rate", [16_000, 48_000])
- def test_audio_decode_example_pcm(shared_datadir, sampling_rate):
- audio_path = str(shared_datadir / "test_audio_16000.pcm")
-@@ -414,6 +417,7 @@
- assert column[0]["sampling_rate"] == 16000
-
-
-+@pytest.mark.skip(reason="require librosa")
- @pytest.mark.parametrize(
- "build_data",
- [
-@@ -438,6 +442,7 @@
- assert item["audio"].keys() == {"path", "array", "sampling_rate"}
-
-
-+@pytest.mark.skip(reason="require librosa")
- def test_dataset_concatenate_audio_features(shared_datadir):
- # we use a different data structure between 1 and 2 to make sure they are compatible with each other
- audio_path = str(shared_datadir / "test_audio_44100.wav")
-@@ -451,6 +456,7 @@
- assert concatenated_dataset[1]["audio"]["array"].shape == dset2[0]["audio"]["array"].shape
-
-
-+@pytest.mark.skip(reason="require librosa")
- def test_dataset_concatenate_nested_audio_features(shared_datadir):
- # we use a different data structure between 1 and 2 to make sure they are compatible with each other
- audio_path = str(shared_datadir / "test_audio_44100.wav")
-@@ -610,6 +616,7 @@
- assert isinstance(ds, Dataset)
-
-
-+@require_sndfile
- def test_dataset_with_audio_feature_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
-@@ -627,6 +634,7 @@
- assert column[0] == {"path": audio_path, "bytes": None}
-
-
-+@require_sndfile
- def test_formatted_dataset_with_audio_feature_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
-@@ -658,6 +666,7 @@
- assert column[0] == {"path": audio_path, "bytes": None}
-
-
-+@require_sndfile
- def test_dataset_with_audio_feature_map_undecoded(shared_datadir):
- audio_path = str(shared_datadir / "test_audio_44100.wav")
- data = {"audio": [audio_path]}
---- a/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:00:39.560876163 +0200
-+++ b/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:01:26.005212423 +0200
-@@ -1,10 +1,8 @@
- import shutil
- import textwrap
-
--import librosa
- import numpy as np
- import pytest
--import soundfile as sf
-
- from datasets import Audio, ClassLabel, Features, Value
- from datasets.data_files import DataFilesDict, get_data_patterns
-@@ -192,8 +190,11 @@
- return data_files_with_two_splits_and_metadata
-
-
-+@pytest.mark.skip(reason="require soundfile")
- @pytest.fixture
- def data_files_with_zip_archives(tmp_path, audio_file):
-+ import soundfile as sf
-+ import librosa
- data_dir = tmp_path / "audiofolder_data_dir_with_zip_archives"
- data_dir.mkdir(parents=True, exist_ok=True)
- archive_dir = data_dir / "archive"
---- a/tests/test_streaming_download_manager.py 2023-08-26 07:33:41.937389401 +0200
-+++ b/tests/test_streaming_download_manager.py 2023-08-26 07:37:22.521218698 +0200
-@@ -218,6 +218,7 @@
- assert output_path == _readd_double_slash_removed_by_path(Path(expected_path).as_posix())
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, exists",
- [
-@@ -301,6 +302,7 @@
- assert list(f) == TEST_URL_CONTENT.splitlines(keepends=True)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_paths",
- [
-@@ -331,6 +333,7 @@
- xlistdir(root_url, download_config=download_config)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, isdir",
- [
-@@ -358,6 +361,7 @@
- assert xisdir(root_url, download_config=download_config) is False
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, isfile",
- [
-@@ -382,6 +386,7 @@
- assert xisfile(root_url + "qwertyuiop", download_config=download_config) is False
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, size",
- [
-@@ -407,6 +412,7 @@
- xgetsize(root_url + "qwertyuiop", download_config=download_config)
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_paths",
- [
-@@ -450,6 +456,7 @@
- assert len(xglob("zip://qwertyuiop/*::" + root_url, download_config=download_config)) == 0
-
-
-+@pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, expected_outputs",
- [
-@@ -540,6 +547,7 @@
- def test_xpath_as_posix(self, input_path, expected_path):
- assert xPath(input_path).as_posix() == expected_path
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, exists",
- [
-@@ -555,6 +563,7 @@
- (tmp_path / "file.txt").touch()
- assert xexists(input_path) is exists
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, pattern, expected_paths",
- [
-@@ -593,6 +602,7 @@
- output_paths = sorted(xPath(input_path).glob(pattern))
- assert output_paths == expected_paths
-
-+ @pytest.mark.skip(reason="not working in sandbox")
- @pytest.mark.parametrize(
- "input_path, pattern, expected_paths",
- [
---- a/tests/io/test_parquet.py 2024-02-22 19:19:53.890749240 +0100
-+++ b/tests/io/test_parquet.py 2024-02-22 19:20:30.954099914 +0100
-@@ -69,6 +69,7 @@
- _check_parquet_dataset(dataset, expected_features)
-
-
-+@pytest.mark.skip()
- def test_parquet_read_geoparquet(geoparquet_path, tmp_path):
- cache_dir = tmp_path / "cache"
- dataset = ParquetDatasetReader(path_or_paths=geoparquet_path, cache_dir=cache_dir).read()
diff --git a/sci-libs/datasets/files/datasets-2.19.2-tests.patch b/sci-libs/datasets/files/datasets-2.19.2-tests.patch
new file mode 100644
index 000000000000..64df833032c5
--- /dev/null
+++ b/sci-libs/datasets/files/datasets-2.19.2-tests.patch
@@ -0,0 +1,23 @@
+--- a/tests/test_arrow_dataset.py 2024-02-20 21:53:24.248470991 +0100
++++ b/tests/test_arrow_dataset.py 2024-02-20 21:53:29.441804737 +0100
+@@ -4109,7 +4109,6 @@
+ [
+ "relative/path",
+ "/absolute/path",
+- "s3://bucket/relative/path",
+ "hdfs://relative/path",
+ "hdfs:///absolute/path",
+ ],
+--- a/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:00:39.560876163 +0200
++++ b/tests/packaged_modules/test_audiofolder.py 2023-05-06 14:01:26.005212423 +0200
+@@ -1,10 +1,8 @@
+ import shutil
+ import textwrap
+
+-import librosa
+ import numpy as np
+ import pytest
+-import soundfile as sf
+
+ from datasets import Audio, ClassLabel, Features, Value
+ from datasets.data_files import DataFilesDict, get_data_patterns
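The recurring pattern in these test patches is pytest's skip marker: tests whose optional dependencies (librosa, soundfile, distributed torch) or network access are unavailable in the sandboxed ebuild environment are disabled at collection time. A minimal sketch of that pattern, with hypothetical test names and assuming pytest is installed:

```python
# Sketch of the skip markers the patches above insert: pytest collects the
# decorated tests but reports them as skipped instead of running them.
# Test names here are illustrative, not from the datasets test suite.
import pytest


@pytest.mark.skip(reason="require librosa")
def test_audio_decode_example():
    # Body never executes; the optional import below would fail in the sandbox.
    import librosa
    assert librosa is not None


@pytest.mark.skipif(True, reason="network access blocked in sandbox")
def test_remote_dataset_fetch():
    # skipif takes a condition, so the same marker can be made conditional
    # (e.g. on an environment variable) instead of unconditional.
    raise AssertionError("unreachable: test is skipped")
```

Marking tests this way keeps the upstream suite intact while letting the remaining deselected tests run under `distutils-r1_src_test`, as the ebuild hunk above shows.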
2024-02-21 21:07 [gentoo-commits] repo/gentoo:master commit in: sci-libs/datasets/files/, sci-libs/datasets/ Alfredo Tupone
-- strict thread matches above, loose matches on Subject: below --
2024-10-28 20:29 Alfredo Tupone
2023-12-25 9:40 Alfredo Tupone
2023-08-26 6:28 Alfredo Tupone
2023-05-16 5:21 Alfredo Tupone