https://github.com/dissco/dissco-core-backend

Generic backend for the DiSSCo service. Provides APIs for both external users and the frontend


Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.2%) to scientific vocabulary
Last synced: 4 months ago

Repository

Generic backend for the DiSSCo service. Provides APIs for both external users and the frontend

Basic Info
  • Host: GitHub
  • Owner: DiSSCo
  • License: apache-2.0
  • Language: Java
  • Default Branch: main
  • Size: 1.62 MB
Statistics
  • Stars: 1
  • Watchers: 2
  • Forks: 0
  • Open Issues: 1
  • Releases: 6
Created almost 4 years ago · Last pushed 5 months ago
Metadata Files
Readme License

README.md

DOI

Backend DiSSCo

The DiSSCo backend provides APIs that can be used by other applications, such as DiSSCover. It handles the logic and preprocesses the items for serialization. In general, the find/search APIs are open, while the create, update and delete APIs are protected by OAuth (Keycloak).

All endpoints are based on the JSON:API specification. They follow the guidelines and best practices described in BiCIKL Deliverable 1.3.

The backend provides APIs for the following objects:

  • Digital Specimens
  • Digital Media Objects
  • Annotations
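As an illustration of the JSON:API shape these endpoints return, a single-resource response carries a top-level `data` object with `id`, `type` and `attributes`. The identifier, type name and attribute values below are hypothetical placeholders, not real DiSSCo records:

```json
{
  "data": {
    "id": "20.5000.1025/ABC-123-XYZ",
    "type": "digitalSpecimen",
    "attributes": {
      "specimenName": "Example specimen"
    }
  }
}
```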

Storage solutions

In general, there are three places where data is stored:

  • Postgres database: The Postgres database stores the latest active version of the object. It is used to retrieve a specific object based on its id. It can also be used to combine objects based on their relationships.

  • Elasticsearch: This data storage is used for searching and aggregating. It is used for data discovery and provides endpoints for the filters and search fields. In general, it does not return a single object but a paginated list of objects. Additionally, it can provide aggregations showing how many items match the search criteria.

  • MongoDB: MongoDB is used for provenance storage and stores the historical data. This data storage is used for displaying previous versions of the data.

API documentation

The APIs are documented through code-generated OpenAPI v3 and Swagger.

  • Swagger UI: https://sandbox.dissco.tech/api/swagger-ui/index.html#/
  • OpenAPI document: https://sandbox.dissco.tech/api/v3/api-docs

Digital Specimens

For Digital Specimens, we only provide search and read functionality through the APIs. The endpoints follow a generic structure: `{endpoint}/{prefix}/{suffix}`. Optionally, this is followed by a version (`/{version}`) for a specific version, or by a particular view on the data such as `/full`. The full endpoint provides all specimen information including all connected information: all annotations, all connected digital media objects, and all annotations on those digital media objects. We also provide a JSON-LD view on the data at the `/jsonld` endpoint. This endpoint does not comply with the JSON:API standard but follows the JSON-LD implementation.
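The path structure above can be sketched as a small helper. This is an illustration only: the endpoint name, prefix and suffix values are hypothetical examples, not actual DiSSCo identifiers or route names.

```java
public class SpecimenPathSketch {

  // Builds {endpoint}/{prefix}/{suffix}, optionally followed by /{version}
  // and/or a view segment such as "full" or "jsonld". Null means "omit".
  static String specimenPath(String endpoint, String prefix, String suffix,
                             Integer version, String view) {
    StringBuilder sb = new StringBuilder(endpoint)
        .append('/').append(prefix)
        .append('/').append(suffix);
    if (version != null) {
      sb.append('/').append(version);
    }
    if (view != null) {
      sb.append('/').append(view);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Full view of the latest version of a (hypothetical) specimen
    System.out.println(specimenPath("digital-specimen", "20.5000.1025", "ABC-123", null, "full"));
    // → digital-specimen/20.5000.1025/ABC-123/full

    // A specific version, default view
    System.out.println(specimenPath("digital-specimen", "20.5000.1025", "ABC-123", 2, null));
    // → digital-specimen/20.5000.1025/ABC-123/2
  }
}
```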

Additionally, there are several aggregation and search endpoints. In general, they can be filtered using the following structure: `/search?country=Netherlands&midsLevel=1&country=Finland`. Multiple key-value pairs can be used; when one key should have multiple values, the same key can be repeated (see `country` in the example). This provides a generic way to filter searches and aggregations. For all terms that can be filtered on, see the `MappingTerms` class, which lists each term with its simplified name (used in the endpoint) and its full name (used in Elasticsearch).
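A client could assemble such a query string from a map of filter terms to value lists, repeating the key for each value. This is a hedged sketch of the convention described above, not code from the backend; it groups repeated keys together, which the backend accepts in any order:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SearchQuerySketch {

  // Builds a /search query string where one key may carry several values
  // by repeating the key, e.g. country=Netherlands&country=Finland.
  static String searchQuery(Map<String, List<String>> filters) {
    return filters.entrySet().stream()
        .flatMap(e -> e.getValue().stream()
            .map(v -> URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8)
                + "=" + URLEncoder.encode(v, StandardCharsets.UTF_8)))
        .collect(Collectors.joining("&", "/search?", ""));
  }

  public static void main(String[] args) {
    // LinkedHashMap keeps the filter order deterministic
    Map<String, List<String>> filters = new LinkedHashMap<>();
    filters.put("country", List.of("Netherlands", "Finland"));
    filters.put("midsLevel", List.of("1"));
    System.out.println(searchQuery(filters));
    // → /search?country=Netherlands&country=Finland&midsLevel=1
  }
}
```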

Digital Media Objects

For Digital Media Objects, we provide read functionality through the APIs. Like the Digital Specimen endpoints, they follow the structure `{endpoint}/{prefix}/{suffix}`, optionally followed by `/{version}` for a specific version. There is a limited set of endpoints, as most searches will happen through the digital specimen.

Annotations

For annotations, we provide read, create, update and tombstone functionality through the APIs. Like the Digital Specimen endpoints, they follow the structure `{endpoint}/{prefix}/{suffix}`, optionally followed by `/{version}` for a specific version. There is a limited set of endpoints, as most searches will happen through the digital specimen.

The create, update and tombstone endpoints are protected through authentication. The actions posted to these endpoints are forwarded to the annotation processor. This processor manages all the annotations and is the only application authorized to create or change annotations.

Run locally

To run the system locally, it can be run from an IDE. Clone the code and fill in the application properties (see below). The application needs a connection to a Postgres database, MongoDB and Elasticsearch. For creation and modification of annotations it needs a reachable annotation processor service.

Run as Container

The application can also be run as a container. It requires the environment variables described below. The container can be built with the Dockerfile, which can be found in the root of the project.

Environment variables

The following backend specific properties can be configured:

```
# Database properties
spring.datasource.url=# The JDBC url to the PostgreSQL database to connect with
spring.datasource.username=# The login username to use for connecting with the database
spring.datasource.password=# The login password to use for connecting with the database

# Elasticsearch properties
elasticsearch.hostname=# The hostname of the Elasticsearch cluster
elasticsearch.port=# The port of the Elasticsearch cluster

# OAuth properties
spring.security.oauth2.resourceserver.jwt.issuer-uri=# The URI to the JWT issuer
spring.security.oauth2.authorizationserver.endpoint.jwk-set-uri=# The URI to the JWT OpenId certifications
token.secret=# Keycloak secret
token.id=# Keycloak client id
token.grant-type=# Keycloak grant type

# RabbitMQ properties
rabbitmq.mas-exchange-name=# Default value is mas-exchange, can be overwritten
spring.rabbitmq.username=# Username to connect to RabbitMQ
spring.rabbitmq.password=# Password to connect to RabbitMQ
spring.rabbitmq.host=# Hostname of RabbitMQ

# MongoDB properties
mongo.connection-string=# Connection string to MongoDB
mongo.database=# Database name of MongoDB

# Feign clients
feign.annotations=# Path to annotation processor endpoint
feign.mas=# Path to MAS endpoint

# Endpoints
endpoint.handle-endpoint=# Endpoint to handle API
endpoint.token-endpoint=# Endpoint to keycloak authenticator

# Application properties
application.base-url=# The url of the application (used to build JsonApiLinks objects)
```
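The OAuth properties feed a Keycloak client-credentials exchange against the configured token endpoint. The sketch below only builds such a request with the JDK's `java.net.http` API; the Keycloak URL, client id and secret are placeholder assumptions, and the backend's actual token handling may differ:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class TokenRequestSketch {

  // URL-encodes a parameter map into an application/x-www-form-urlencoded body.
  static String formBody(Map<String, String> params) {
    return params.entrySet().stream()
        .map(e -> URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8) + "="
            + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
        .collect(Collectors.joining("&"));
  }

  public static void main(String[] args) {
    // Hypothetical values standing in for token.id, token.secret and
    // token.grant-type; none of these are real credentials or URLs.
    Map<String, String> params = new LinkedHashMap<>();
    params.put("client_id", "dissco-backend");
    params.put("client_secret", "change-me");
    params.put("grant_type", "client_credentials");

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://keycloak.example.org/realms/dissco/protocol/openid-connect/token"))
        .header("Content-Type", "application/x-www-form-urlencoded")
        .POST(HttpRequest.BodyPublishers.ofString(formBody(params)))
        .build();

    // The request is only built here, not sent; sending it would need a live
    // Keycloak instance, e.g. HttpClient.newHttpClient().send(request, ...).
    System.out.println(request.method() + " " + request.uri());
  }
}
```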

Owner

  • Name: DiSSCo
  • Login: DiSSCo
  • Kind: organization
  • Email: info@dissco.eu
  • Location: Europe

Distributed System of Scientific Collections - pan-European Research Infrastructure. Updates on DiSSCo and natural science collections

GitHub Events

Total
  • Release event: 4
  • Watch event: 1
  • Delete event: 16
  • Issue comment event: 83
  • Push event: 91
  • Pull request event: 69
  • Pull request review event: 78
  • Pull request review comment event: 44
  • Create event: 66
Last Year
  • Release event: 4
  • Watch event: 1
  • Delete event: 16
  • Issue comment event: 83
  • Push event: 91
  • Pull request event: 69
  • Pull request review event: 78
  • Pull request review comment event: 44
  • Create event: 66

Issues and Pull Requests

Last synced: 5 months ago

All Time
  • Total issues: 0
  • Total pull requests: 78
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 days
  • Total issue authors: 0
  • Total pull request authors: 3
  • Average comments per issue: 0
  • Average comments per pull request: 0.83
  • Merged pull requests: 59
  • Bot issues: 0
  • Bot pull requests: 1
Past Year
  • Issues: 0
  • Pull requests: 38
  • Average time to close issues: N/A
  • Average time to close pull requests: 2 days
  • Issue authors: 0
  • Pull request authors: 2
  • Average comments per issue: 0
  • Average comments per pull request: 0.89
  • Merged pull requests: 30
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
  • southeo (54)
  • samleeflang (23)
  • dependabot[bot] (1)
Top Labels
Issue Labels
Pull Request Labels
dependencies (1)

Dependencies

pom.xml maven
  • org.testcontainers:testcontainers-bom 1.16.2 import
  • co.elastic.clients:elasticsearch-java 8.3.1
  • com.fasterxml.jackson.core:jackson-core
  • com.fasterxml.jackson.core:jackson-databind
  • jakarta.json:jakarta.json-api 2.1.0
  • org.postgresql:postgresql
  • org.projectlombok:lombok
  • org.springdoc:springdoc-openapi-ui 1.6.8
  • org.springframework.boot:spring-boot-configuration-processor
  • org.springframework.boot:spring-boot-starter-actuator
  • org.springframework.boot:spring-boot-starter-jooq
  • org.springframework.boot:spring-boot-starter-validation
  • org.springframework.boot:spring-boot-starter-web
  • org.flywaydb:flyway-core test
  • org.springframework.boot:spring-boot-starter-test test
  • org.testcontainers:junit-jupiter test
  • org.testcontainers:postgresql test
  • org.testcontainers:testcontainers test
.github/workflows/build.yaml actions
  • actions/cache v1 composite
  • actions/checkout v2 composite
  • actions/setup-java v1 composite
  • anothrNick/github-tag-action 1.36.0 composite
  • docker/build-push-action v3 composite
  • docker/login-action v1 composite
  • docker/metadata-action v4 composite
Dockerfile docker
  • openjdk 17-slim build