pynonpar

Package providing functions to calculate pseudo-ranks and (pseudo)-rank based nonparametric test statistics.

https://github.com/happma/pynonpar

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file (found)
  • .zenodo.json file (found)
  • DOI references
  • Academic publication links (links to: zenodo.org)
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity (low similarity to scientific vocabulary: 12.8%)

Keywords

nonparametric-statistic nonparametric-tests optimal-design pseudo-ranks python python3 ranks sample-size-calculation statistics wilcoxon-mann-whitney-test
Last synced: 6 months ago

Repository

Package providing functions to calculate pseudo-ranks and (pseudo)-rank based nonparametric test statistics.

Basic Info
  • Host: GitHub
  • Owner: happma
  • License: gpl-3.0
  • Language: HTML
  • Default Branch: master
  • Homepage:
  • Size: 1.63 MB
Statistics
  • Stars: 5
  • Watchers: 3
  • Forks: 0
  • Open Issues: 0
  • Releases: 2
Topics
nonparametric-statistic nonparametric-tests optimal-design pseudo-ranks python python3 ranks sample-size-calculation statistics wilcoxon-mann-whitney-test
Created over 7 years ago · Last pushed 12 months ago
Metadata Files
Readme · License

README.md

PyNonpar

(Badges: PyPI version · codecov · Downloads · DOI)

Test statistics based on ranks may lead to paradoxical results. A solution is offered by so-called pseudo-ranks. This package provides a function to calculate pseudo-ranks as well as nonparametric (pseudo)-rank-based test statistics. For a definition and discussion of pseudo-ranks, see for example [1].

To install the package from PyPI, simply run `pip install PyNonpar`.

Table of Contents

Two-Sample Tests
Paired Two-Sample Tests
Multi-Sample Tests
Repeated-Measures Tests

Pseudo-Ranks

If there are ties (i.e., observations with the same value) in the data, then the pseudo-ranks have to be adjusted. The available options are 'minimum', 'maximum' and 'average'. Using 'average' is recommended, as this adjustment is based on the normalized empirical distribution functions. See the example below for details on the usage of the function 'psrank'.

```Python
import PyNonpar
from PyNonpar import *

# some artificial data
x = [1, 1, 1, 1, 2, 3, 4, 5, 6]
group = ['C', 'C', 'B', 'B', 'B', 'A', 'C', 'A', 'C']

PyNonpar.pseudorank.psrank(x, group, ties_method = "average")
```
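
To make the 'average' adjustment concrete: following the definition in [1], the pseudo-rank of an observation equals N times the unweighted mean of the groups' normalized empirical distribution functions, plus 1/2 (N being the total sample size). The sketch below illustrates that textbook definition only; it is not PyNonpar's implementation, and the helper names count_fun and pseudo_ranks are made up for this example.

```Python
# Illustrative sketch of the pseudo-rank definition ('average' ties adjustment).
# Not part of PyNonpar; the helpers below are hypothetical.

def count_fun(u):
    # normalized count function: 0 below a value, 1/2 at a tie, 1 above it
    return 0.5 * (u > 0) + 0.5 * (u >= 0)

def pseudo_ranks(x, group):
    groups = sorted(set(group))
    g, N = len(groups), len(x)
    n = {a: group.count(a) for a in groups}
    psi = []
    for xi in x:
        # unweighted mean of the group-wise normalized ECDFs, evaluated at xi
        G = sum(
            sum(count_fun(xi - xj) for xj, gj in zip(x, group) if gj == a) / n[a]
            for a in groups
        ) / g
        psi.append(N * G + 0.5)
    return psi

x = [1, 1, 1, 1, 2, 3, 4, 5, 6]
group = ['C', 'C', 'B', 'B', 'B', 'A', 'C', 'A', 'C']
print(pseudo_ranks(x, group))
```

With equal group sizes these pseudo-ranks coincide with ordinary mid-ranks; with unequal group sizes (as above) they generally differ, which is what avoids the paradoxical results mentioned at the top of this README.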

Nonparametric Test Statistics

Two-Sample Tests

  1. Wilcoxon-Mann-Whitney test: wilcoxon_mann_whitney_test()
  2. Brunner-Munzel test (Generalized Wilcoxon test): brunner_munzel_test()

The Hodges-Lehmann estimator can be calculated in a location shift model: hodges_lehmann(). The confidence interval for this estimator is only asymptotic and assumes continuous distributions.
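
The README does not show a full example for this estimator, so here is a hedged usage sketch: the module path (PyNonpar.twosample, like the other two-sample functions) and the argument list (the two samples plus a confidence level alpha) are assumptions, not the documented signature.

```Python
import PyNonpar
from PyNonpar import *

x = [8, 4, 10, 4, 9, 1, 3, 3, 4, 8]
y = [10, 5, 11, 6, 11, 2, 4, 5, 5, 10]

# assumed call pattern; check help(PyNonpar.twosample.hodges_lehmann) for the exact signature
PyNonpar.twosample.hodges_lehmann(x, y, alpha = 0.05)
```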

1. Wilcoxon-Mann-Whitney test

For large sample sizes, the asymptotic Wilcoxon test is recommended (method = "asymptotic"). For small sample sizes, we recommend the exact Wilcoxon test. Note that the Wilcoxon test assumes the null hypothesis of equal distributions H0: F1 = F2.

```Python
import PyNonpar
from PyNonpar import *

x = [8, 4, 10, 4, 9, 1, 3, 3, 4, 8]
y = [10, 5, 11, 6, 11, 2, 4, 5, 5, 10]

PyNonpar.twosample.wilcoxon_mann_whitney_test(x, y, alternative="less", method = "asymptotic", alpha = 0.05)
PyNonpar.twosample.wilcoxon_mann_whitney_test(x, y, alternative="less", method = "exact", alpha = 0.05)
```

Wilcoxon-Mann-Whitney Sample Size Planning

To calculate the sample size that is needed to detect a specific relative effect p with probability beta and type-I error alpha, the function 'wilcoxon_mann_whitney_ssp' can be used. Here, prior information for one group is needed. The artificial data for the second group can be created from some interpretable effect, e.g., a location shift effect. For more information, see [1] or [3].

```Python
import PyNonpar
from PyNonpar import *

# prior information
x_ssp = [315, 375, 356, 374, 412, 418, 445, 403, 431, 410, 391, 475, 379]

# y_ssp = x_ssp - 20
y_ssp = [295, 355, 336, 354, 392, 398, 425, 383, 411, 390, 371, 455, 359]

PyNonpar.twosample_paired.paired_ranks_ssp(x_ssp, y_ssp, 0.8, 0.05, 1/2)
```

2. Brunner-Munzel test

The Brunner-Munzel test extends the Wilcoxon test to the null hypothesis H0: p = 1/2, where p = P(X < Y) + 1/2 * P(X = Y) is the relative effect.

```Python
import PyNonpar
from PyNonpar import *

x = [8, 4, 10, 4, 9, 1, 3, 3, 4, 8]
y = [10, 5, 11, 6, 11, 2, 4, 5, 5, 10]

PyNonpar.twosample.brunner_munzel_test(x, y, alternative="less", quantile = "t")
PyNonpar.twosample.brunner_munzel_test(x, y, alternative="less", quantile = "normal")
```
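
As a quick sanity check on what p means here, the relative effect p = P(X < Y) + 1/2 * P(X = Y) can be estimated directly from the two samples. This is a plain-Python illustration of the definition, not a PyNonpar function:

```Python
# Illustrative estimate of the relative effect p = P(X < Y) + 1/2 * P(X = Y); not part of PyNonpar.
x = [8, 4, 10, 4, 9, 1, 3, 3, 4, 8]
y = [10, 5, 11, 6, 11, 2, 4, 5, 5, 10]

p_hat = sum((xi < yj) + 0.5 * (xi == yj) for xi in x for yj in y) / (len(x) * len(y))
print(p_hat)  # values above 1/2 indicate that y tends to take larger values than x
```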

Paired Two-Sample Tests

1. Paired ranks test

The paired ranks test compares the marginal distributions F1 and F2. The null hypothesis is H0: F1 = F2 (var_equal = True) or H0: p = 1/2 (var_equal = False). In both cases, the two-sided alternative is p != 1/2.

Here, p = P(Xi < Yj) + 1/2 * P(Xi = Yj) for i != j, where (Xi, Yi) and (Xj, Yj) are paired observations.

```Python
import PyNonpar
from PyNonpar import *

x = [1, 2, 3, 4, 5, 7, 1, 1, 1]
y = [4, 6, 8, 7, 6, 5, 9, 1, 1]

PyNonpar.twosample_paired.paired_ranks_test(x, y, alternative="two.sided", var_equal=False, quantile="normal")
```

Multi-Sample Tests

  1. The Hettmansperger-Norton Test for Patterned Alternatives: hettmansperger_norton_test()
  2. Kruskal-Wallis test: kruskal_wallis_test()

1. The Hettmansperger-Norton Test for Patterned Alternatives

This package provides a function to calculate the Hettmansperger-Norton test for patterned alternatives using pseudo-ranks. Originally, this test was developed for ranks but this version was adapted to pseudo-ranks.

For the alternative, it is possible to use 'increasing' (i.e., trend = [1, 2, 3, ..., g]), 'decreasing' (i.e., trend = [g, g-1, g-2, ..., 1]) or 'custom', where the trend has to be specified manually. Note that the trend is a list of length g, where g is the number of groups.

```Python
import PyNonpar
from PyNonpar import *

# some artificial data
x = [1, 1, 1, 1, 2, 3, 4, 5, 6]
group = ['C', 'C', 'B', 'B', 'B', 'A', 'C', 'A', 'C']

PyNonpar.hettmansperger.hettmansperger_norton_test(x, group, alternative = "custom", trend = [1, 3, 2])
```
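
For the built-in trends described above, the trend argument should not be needed; whether it can simply be omitted is an assumption about the API, so treat the following only as a sketch:

```Python
import PyNonpar
from PyNonpar import *

x = [1, 1, 1, 1, 2, 3, 4, 5, 6]
group = ['C', 'C', 'B', 'B', 'B', 'A', 'C', 'A', 'C']

# assumed usage of the built-in 'increasing' alternative (trend 1, 2, ..., g); sketch only
PyNonpar.hettmansperger.hettmansperger_norton_test(x, group, alternative = "increasing")
```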

2. Kruskal-Wallis Test

```Python
import PyNonpar
from PyNonpar import *

# some artificial data
x = [1, 1, 1, 1, 2, 3, 4, 5, 6]
group = ['C', 'C', 'B', 'B', 'B', 'A', 'C', 'A', 'C']

# using pseudo-ranks
PyNonpar.multisample.kruskal_wallis_test(x, group, pseudoranks = True)

# using ranks
PyNonpar.multisample.kruskal_wallis_test(x, group, pseudoranks = False)
```

Repeated-Measures Tests

  1. The Paired-Ranks Test: paired_ranks_test()
  2. The Kepner-Robinson Test: kepner_robinson_test()

1. Paired ranks test

See the section 'Paired Two-Sample Tests'.

2. Kepner-Robinson Test

For the Kepner-Robinson test we have several dependent observations per subject (sub-plot factor). Let us denote by Fk the cdf of the k-th observation. The null hypothesis for this test is H0: F1 = ... = Fd, where d is the number of observations per subject. This test assumes a compound symmetry dependence structure, that is, all variances are equal and all covariances are equal. In other words, the observations within one subject can basically be interchanged. For more information, we refer to [2].

```Python
import PyNonpar
from PyNonpar import *

# some artificial data
data = [1, 0, -2, -1, -2, 1, 0, 0, 0, -2]
time = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]
subject = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]

PyNonpar.repeated_measures.kepner_robinson_test(data, time, subject, distribution="F")
```

References

[1] Brunner, E., Bathke, A. C., & Konietschke, F. Rank- and Pseudo-Rank Procedures in Factorial Designs: Using R and SAS. Springer, to appear.

[2] Kepner, J. L., & Robinson, D. H. (1988). Nonparametric methods for detecting treatment effects in repeated-measures designs. Journal of the American Statistical Association, 83(402), 456-461.

[3] Happ, M., Bathke, A. C., & Brunner, E. (2019). Optimal sample size planning for the Wilcoxon-Mann-Whitney test. Statistics in Medicine, 38(3), 363-375.

Owner

  • Name: happma
  • Login: happma
  • Kind: user

PhD in statistics | data scientist | actuary

GitHub Events

Total
  • Issues event: 2
  • Watch event: 1
  • Issue comment event: 1
  • Push event: 1
Last Year
  • Issues event: 2
  • Watch event: 1
  • Issue comment event: 1
  • Push event: 1

Committers

Last synced: over 2 years ago

All Time
  • Total Commits: 78
  • Total Committers: 2
  • Avg Commits per committer: 39.0
  • Development Distribution Score (DDS): 0.205
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • Martin Happ (m****p@a****t): 62 commits
  • Martin Happ (m****p@a****t): 16 commits
Committer Domains (Top 20 + Academic)
aon.at: 2

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • stikpet (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi: 32 last month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 8
  • Total maintainers: 1
pypi.org: pynonpar

Nonparametric Test Statistics

  • Versions: 8
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 32 Last month
Rankings
Dependent packages count: 10.0%
Dependent repos count: 21.7%
Average: 23.4%
Stargazers count: 25.0%
Forks count: 29.8%
Downloads: 30.3%
Maintainers (1)
Last synced: 6 months ago

Dependencies

requirements.txt pypi
  • codecov *
  • numba *
  • numpy *
  • pandas *
  • pytest *
  • pytest-cov <2.6.0
  • scipy *
setup.py pypi
  • codecov *
  • numba *
  • numpy *
  • pandas *
  • pytest *
  • pytest-cov *
  • scipy *