pytorch-logit-logic

Logit-space logical activation functions for pytorch

https://github.com/dalhousieai/pytorch-logit-logic

Science Score: 31.0%

This score indicates how likely this project is to be science-related, based on the following indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.5%) to scientific vocabulary

Keywords

activation activation-function activation-functions logit python pytorch xnor xor
Last synced: 4 months ago

Repository

Logit-space logical activation functions for pytorch

Basic Info
  • Host: GitHub
  • Owner: DalhousieAI
  • License: MIT
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 1.63 MB
Statistics
  • Stars: 1
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
activation activation-function activation-functions logit python pytorch xnor xor
Created about 3 years ago · Last pushed almost 3 years ago
Metadata Files
Readme Changelog License Citation

README.rst

Pytorch Logit Logic
===================

A PyTorch extension providing functions and classes for logit-space operators
equivalent to the probabilistic Boolean logic gates AND, OR, and XNOR for independent probabilities.
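
As background, these gates have exact probabilistic forms: treat each pre-activation value as a logit, apply the sigmoid to get a probability, evaluate the gate under independence, and map the result back to logit space. The sketch below spells out those reference formulas in plain Python (illustrative only — the function names are hypothetical, and the package's actual activations are not claimed to use these naive, numerically unstable expressions):

```python
import math


def sigmoid(x):
    """Map a logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))


def logit(p):
    """Inverse of sigmoid: map a probability back to logit space."""
    return math.log(p / (1.0 - p))


def and_logit(x, y):
    """Exact logit-space AND: P(A and B) = P(A) * P(B) for independent A, B."""
    return logit(sigmoid(x) * sigmoid(y))


def or_logit(x, y):
    """Exact logit-space OR: P(A or B) = P(A) + P(B) - P(A) * P(B)."""
    p, q = sigmoid(x), sigmoid(y)
    return logit(p + q - p * q)


def xnor_logit(x, y):
    """Exact logit-space XNOR: P(A == B) = P(A)P(B) + (1 - P(A))(1 - P(B))."""
    p, q = sigmoid(x), sigmoid(y)
    return logit(p * q + (1.0 - p) * (1.0 - q))
```

Because the logit function is monotonic, the AND output never exceeds min(x, y) and the OR output never falls below max(x, y), matching the intuition from the probability-space gates; De Morgan's law also holds, since OR(x, y) = -AND(-x, -y).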

This provides the activation functions used in our paper:

    SC Lowe, R Earle, J d'Eon, T Trappenberg, S Oore (2022). Logical Activation Functions: Logit-space equivalents of Probabilistic Boolean Operators. In *Advances in Neural Information Processing Systems*, volume 36.
    doi: |nbsp| `10.48550/arxiv.2110.11940 <https://www.doi.org/10.48550/arxiv.2110.11940>`_.


For your convenience, we provide a copy of this citation in `bibtex`_ format.

.. _bibtex: https://raw.githubusercontent.com/DalhousieAI/pytorch-logit-logic/master/CITATION.bib


Example usage:

.. code:: python

    from pytorch_logit_logic import actfun_name2factory
    from torch import nn


    class MLP(nn.Module):
        """
        A multi-layer perceptron which supports higher-dimensional activations.

        Parameters
        ----------
        in_channels : int
            Number of input channels.
        out_channels : int
            Number of output channels.
        n_layer : int, default=1
            Number of hidden layers.
        hidden_width : int, optional
            Pre-activation width. Default: same as ``in_channels``.
            Note that the actual pre-act width used may differ by rounding to
            the nearest integer that is divisible by the activation function's
            divisor.
        actfun : str, default="ReLU"
            Name of activation function to use.
        actfun_k : int, optional
            Dimensionality of the activation function. Default is the lowest
            ``k`` that the activation function supports, i.e. ``1`` for regular
            1D activation functions like ReLU, and ``2`` for GLU, MaxOut, and
            NAIL_OR.
        """

        def __init__(
            self,
            in_channels,
            out_channels,
            n_layer=1,
            hidden_width=None,
            actfun="ReLU",
            actfun_k=None,
        ):
            super().__init__()

            # Create a factory that generates objects that perform this activation
            actfun_factory = actfun_name2factory(actfun, k=actfun_k)
            # Get the divisor and space reduction factors for this activation
            # function. The pre-act needs to be divisible by the divisor, and
            # the activation will change the channel dimension by feature_factor.
            _actfun = actfun_factory()
            divisor = getattr(_actfun, "k", 1)
            feature_factor = getattr(_actfun, "feature_factor", 1)

            if hidden_width is None:
                hidden_width = in_channels

            # Ensure the hidden width is divisible by the divisor
            hidden_width = int(round(hidden_width / divisor)) * divisor

            layers = []
            n_current = in_channels
            for _ in range(n_layer):
                layer = []
                layer.append(nn.Linear(n_current, hidden_width))
                n_current = hidden_width
                layer.append(actfun_factory())
                n_current = int(round(n_current * feature_factor))
                layers.append(nn.Sequential(*layer))
            self.layers = nn.Sequential(*layers)
            self.classifier = nn.Linear(n_current, out_channels)

        def forward(self, x):
            x = self.layers(x)
            x = self.classifier(x)
            return x


    model = MLP(
        in_channels=512,
        out_channels=10,
        n_layer=2,
        actfun="nail_or",
    )
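
The width bookkeeping in ``__init__`` above can be traced by hand. The standalone sketch below uses hypothetical numbers — a k=2 activation with a ``feature_factor`` of 0.5, i.e. one that consumes channels in pairs as GLU-style gates do — to show how the hidden width is rounded to a multiple of the divisor and how each activation shrinks the channel count before the next linear layer restores it:

```python
def plan_widths(in_channels, hidden_width, n_layer, divisor=2, feature_factor=0.5):
    """Trace channel counts through the MLP: Linear -> activation, repeated.

    The divisor (the activation's k) and feature_factor arguments mirror the
    attributes read off the activation object in the MLP above; the default
    values here are hypothetical.
    """
    # Round the requested hidden width to the nearest multiple of the divisor.
    hidden_width = int(round(hidden_width / divisor)) * divisor
    widths = [in_channels]
    for _ in range(n_layer):
        widths.append(hidden_width)  # after nn.Linear
        widths.append(int(round(hidden_width * feature_factor)))  # after activation
    return widths
```

If the model instantiated above used such an activation, the channel counts would run 512 → 512 → 256 → 512 → 256, so the final classifier would see 256 features rather than the requested hidden width.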



.. |nbsp| unicode:: 0xA0
   :trim:

Owner

  • Name: DalhousieAI
  • Login: DalhousieAI
  • Kind: organization
  • Location: Halifax, Nova Scotia

Hierarchical Anticipatory Learning Laboratory and Associates

Citation (CITATION.bib)

@inproceedings{Lowe2022,
  author    = {Scott C. Lowe and
               Robert Earle and
               Jason d'Eon and
               Thomas Trappenberg and
               Sageev Oore},
  title     = {Logical Activation Functions: Logit-space equivalents of Probabilistic Boolean Operators},
  booktitle = {Advances in Neural Information Processing Systems},
  volume    = {36},
  year      = {2022},
  pages     = {},
  publisher = {Curran Associates, Inc.},
  address   = {Red Hook, NY, USA},
  url       = {https://arxiv.org/abs/2110.11940},
  eprinttype = {arXiv},
  eprint    = {2110.11940},
  doi       = {10.48550/arxiv.2110.11940}
}

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 10
  • Total Committers: 1
  • Avg Commits per committer: 10.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Scott Lowe s****e@g****m 10

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 0
  • Total pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: 4 minutes
  • Total issue authors: 0
  • Total pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Pull Request Authors
  • scottclowe (1)

Packages

  • Total packages: 1
  • Total downloads: 11 last month (pypi)
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 1
  • Total maintainers: 1
pypi.org: pytorch-logit-logic

Logit-space logical activation functions for pytorch

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 11 Last month
Rankings
Dependent packages count: 6.6%
Forks count: 30.5%
Dependent repos count: 30.6%
Average: 33.6%
Stargazers count: 39.1%
Downloads: 61.0%
Maintainers (1)
Last synced: 4 months ago

Dependencies

.github/workflows/docs.yaml actions
  • actions/checkout v3 composite
  • ammaraskar/sphinx-action master composite
  • docker://pandoc/core 2.9 composite
  • peaceiris/actions-gh-pages v3 composite
.github/workflows/pre-commit.yaml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • pre-commit/action v3.0.0 composite
requirements-dev.txt pypi
  • black ==22.10.0 development
  • identify >=1.4.20 development
  • pre-commit * development
requirements-docs.txt pypi
  • myst-parser *
  • pypandoc >=1.6.3
  • readthedocs-sphinx-search *
  • sphinx >=3.5.4
  • sphinx-autobuild *
  • sphinx_book_theme *
  • watchdog <1.0.0
requirements.txt pypi
  • torch *