one-conf
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (10.2%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: fladens
- Language: Vue
- Default Branch: main
- Size: 382 KB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Conference app to test a hybrid cloud for the benefit of end users
Architecture

The hybrid cloud consists of a public and a private cloud. Both run the same microservices:
Backend
API
For the API we currently have three services: User, Conference, and Keynote.
They all work with the same database (though it would be trivial to give each its own).
Currently we have basic CRUD plus some search and find routes.
These services are the same for the public and private cloud. They should not care where they are called from and should behave the same way; they may merely accept different fields for documents.
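As a rough sketch of what one of these CRUD-plus-search services does (the real services use Express and MongoDB; the class and field names below are hypothetical stand-ins), the core operations look like this:

```javascript
// Hypothetical in-memory sketch of the CRUD + search operations one of the
// API services (User, Conference, Keynote) exposes. The real services are
// Express routes backed by MongoDB; this stand-in only mirrors the shape.
class ConferenceStore {
  constructor() {
    this.docs = new Map();
    this.nextId = 1;
  }

  // Create a document and assign it an id.
  create(fields) {
    const doc = { id: String(this.nextId++), ...fields };
    this.docs.set(doc.id, doc);
    return doc;
  }

  // Find a single document by id ("find" route).
  find(id) {
    return this.docs.get(id) || null;
  }

  // Update fields of an existing document.
  update(id, fields) {
    const doc = this.docs.get(id);
    if (!doc) return null;
    Object.assign(doc, fields);
    return doc;
  }

  // Delete a document.
  remove(id) {
    return this.docs.delete(id);
  }

  // "search" route: match documents whose name contains the query string.
  search(query) {
    const q = query.toLowerCase();
    return [...this.docs.values()].filter(d =>
      (d.name || '').toLowerCase().includes(q)
    );
  }
}
```

In the real services, each of these methods would be bound to an Express route and backed by a Mongoose model instead of a Map.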
Syncing
Because I need to sync the private cloud and the public cloud, I have the sync-service. This microservice is responsible for staying connected to the RabbitMQ instance in the public cloud and to the database running in the same Kubernetes environment.
It then listens for any changes coming in from either RabbitMQ or MongoDB. All MongoDB changes are filtered for fields that should not leave the private cloud, and the remaining fields are forwarded to RabbitMQ. All other sync-services listening on RabbitMQ take these changes and push them to their local databases.
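The filtering step described above can be sketched as a small pure function (the field names and the PRIVATE_FIELDS list are hypothetical; the real service wires this between a MongoDB change stream and an amqplib channel):

```javascript
// Hypothetical list of fields that must never leave the private cloud.
const PRIVATE_FIELDS = ['internalNotes', 'privateEmail'];

// Strip private fields from a changed document before the change event
// is forwarded to RabbitMQ.
function stripPrivateFields(doc, privateFields = PRIVATE_FIELDS) {
  const out = {};
  for (const [key, value] of Object.entries(doc)) {
    if (!privateFields.includes(key)) out[key] = value;
  }
  return out;
}

// In the real sync-service, the wiring is roughly (sketch, not verbatim):
//   const stream = db.collection('conferences').watch();
//   stream.on('change', event => {
//     const safe = stripPrivateFields(event.fullDocument || {});
//     channel.publish('sync', 'conference.change',
//                     Buffer.from(JSON.stringify(safe)));
//   });
```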
Frontend
App
For the frontend I run an nginx server that serves the bundled app.
Tools
Docker
All the microservices in this project are built with Docker, which makes them easier to deploy and move between environments. Docker builds images that can then be pulled and run anywhere as containers. To orchestrate these containers in the public and private cloud I used Kubernetes:
Kubernetes
Kubernetes (also called k8s) is an orchestrator for containers. As it is open source and widely used in industry, it was easy to find documentation and support. For the same reasons it was easy to find a k8s engine to run both on a private server and on a public cloud provider.
k3s
k3s was built specifically for edge computing, which made it a perfect fit for the private cloud of this project. The private Kubernetes cluster runs on a mini PC with Debian. This k8s distribution was easy to set up and has run without any problems for the last months.
Google Cloud with GKE
As Google is also the company behind Kubernetes, it offers one of the best Kubernetes engines for a public cloud. The setup was quite straightforward, though I had a few problems configuring the template files and had to differentiate between public and private, mainly because the ingress on GKE works differently than on k3s.
Otherwise, Google Cloud was a good choice, as I could use a student account and simply pause the pods when I was not using them. This is done by running the helper script I wrote, /scripts/cloud/pause.sh, which sets the replicas to 0 so that no node is running.
Starting again was just as easy: running /scripts/cloud/start.sh brought everything back up and I could directly continue working.
Helm
Helm is a package manager for Kubernetes. It simplifies deployments by allowing different values and settings per environment. This made it a good choice for this project, as it simplifies deploying to both the public and private cloud with as much reusable configuration as possible.
The packages deployed by Helm are called Charts. Charts can have dependencies and need to be pushed to a repository; for this I used HelmBay, a free Helm registry, described next.
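As an illustration, declaring a Chart with a dependency (for example the RabbitMQ Chart mentioned later) looks roughly like this in Chart.yaml (the chart name, versions, and repository URL here are hypothetical):

```yaml
# Hypothetical Chart.yaml sketch: a chart for the public cloud that pulls
# in RabbitMQ as a dependency from an external chart repository.
apiVersion: v2
name: one-conf-public
version: 0.1.0
dependencies:
  - name: rabbitmq
    version: "11.x.x"
    repository: https://charts.bitnami.com/bitnami
```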
HelmBay
A free repository for Helm Charts: simply a place to push Charts and pull them from anywhere. This makes it easy to deploy specific versions of a Chart from any environment.
Github packages
A registry for NPM packages and Docker images.
I needed a free way to push and pull my NPM packages and Docker images. As GitHub offers a student account with many free features, I decided to use GitHub Packages.
This also made it very easy to write GitHub Actions that build and push Docker images to GitHub Packages.
Github actions
Continuous Integration (CI) that is configured directly by pushing YAML files to /.github/workflows, with quite a bit of free computing power for students.
As my packages and source code are also on GitHub, this was straightforward to set up and use.
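A workflow that builds a service image and pushes it to GitHub Packages might look roughly like this (the action versions match the ones listed under Dependencies below; the workflow, path, and image names are made up for illustration):

```yaml
# Hypothetical sketch of a workflow file in /.github/workflows.
name: build-backend
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          context: ./services/conference
          push: true
          tags: ghcr.io/fladens/conference:latest
```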
bind9
A Linux package that runs a local DNS server.
I ran it on a Raspberry Pi with Debian to forward all requests for www.schnider.io to the private cloud whenever the user was connected to the private Wi-Fi.
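The zone configuration for this could be sketched as follows (a hypothetical bind9 zone file; the TTLs, serial, and nameserver address are made up, while 192.168.178.30 is the private cluster address seen in the ping measurements):

```
; Hypothetical bind9 zone file sketch for schnider.io on the private Wi-Fi:
; www resolves to the private cluster instead of the public cloud.
$TTL 300
@       IN  SOA  ns.schnider.io. admin.schnider.io. (
                 2023020101 ; serial
                 3600       ; refresh
                 600        ; retry
                 86400      ; expire
                 300 )      ; minimum
@       IN  NS   ns.schnider.io.
ns      IN  A    192.168.178.1
www     IN  A    192.168.178.30
```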
k9s
A tool for managing Kubernetes directly from the terminal. With k9s I could access Kubernetes resources and interact with them through a terminal GUI. It supercharged debugging and working with Kubernetes across multiple contexts.
NodeJS
For the microservices I used NodeJS. It is the most widely used JavaScript runtime for servers, and I was already familiar with it, so it was a good choice for making quick progress.
ExpressJS
Express is a minimalist web framework that can be used to build a simple API, as I did in this project. It also supports many plugins, such as Passport, which I used for authentication.
MongoDB
For the databases I used MongoDB. As it essentially stores big JSON documents, it is perfect for microservices and rapid development. It also provides Change Streams, which were perfect for the sync-service I was writing. Additionally, it uses minimal resources.
MongoDB Compass
A GUI to work with MongoDB.
RabbitMQ
As we already used RabbitMQ in last year's project, and it is the only message broker I have worked with, I chose RabbitMQ again. It also has a great NodeJS package that simplified working with it immensely.
I also found a Helm Chart for it, so I could add it as a dependency to my public cloud.
Vue
Vue is a widely used frontend library that I am personally very familiar with. It comes with a CLI that makes it easy to set up and run a simple app.
PrimeVue + PrimeFlex
UI libraries for Vue components. To keep frontend development time as low as possible, I decided to use two UI libraries that would provide most of the components I needed. They were suggested by multiple blog articles.
These libraries were exhausting to work with at first and felt like a blocker, but over time I got used to them.
asdf
A version manager for different tools.
asdf ensures the correct version of a tool is always used, in my case NodeJS. The version to use is set in the .tool-versions file.
As I work on multiple projects on my machine, it helped me always use the same version in this project.
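The .tool-versions file itself is just a list of tool/version pairs, for example (the exact version here is hypothetical):

```
nodejs 19.4.0
```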
Performance
A comparison of the performance of requests to the public cloud and the private cloud.
Ping
The most basic way to measure the round-trip time of a packet.

Private
```bash
ping private.schnider.io -c 20
```
```log
PING private.schnider.io (192.168.178.30): 56 data bytes
64 bytes from 192.168.178.30: icmp_seq=0 ttl=64 time=2.279 ms
64 bytes from 192.168.178.30: icmp_seq=1 ttl=64 time=7.994 ms
64 bytes from 192.168.178.30: icmp_seq=2 ttl=64 time=6.101 ms
64 bytes from 192.168.178.30: icmp_seq=3 ttl=64 time=4.645 ms
64 bytes from 192.168.178.30: icmp_seq=4 ttl=64 time=7.125 ms
64 bytes from 192.168.178.30: icmp_seq=5 ttl=64 time=3.080 ms
64 bytes from 192.168.178.30: icmp_seq=6 ttl=64 time=7.440 ms
64 bytes from 192.168.178.30: icmp_seq=7 ttl=64 time=3.150 ms
64 bytes from 192.168.178.30: icmp_seq=8 ttl=64 time=5.511 ms
64 bytes from 192.168.178.30: icmp_seq=9 ttl=64 time=2.573 ms
64 bytes from 192.168.178.30: icmp_seq=10 ttl=64 time=2.888 ms
64 bytes from 192.168.178.30: icmp_seq=11 ttl=64 time=9.819 ms
64 bytes from 192.168.178.30: icmp_seq=12 ttl=64 time=9.396 ms
64 bytes from 192.168.178.30: icmp_seq=13 ttl=64 time=4.841 ms
64 bytes from 192.168.178.30: icmp_seq=14 ttl=64 time=2.962 ms
64 bytes from 192.168.178.30: icmp_seq=15 ttl=64 time=2.557 ms
64 bytes from 192.168.178.30: icmp_seq=16 ttl=64 time=11.096 ms
64 bytes from 192.168.178.30: icmp_seq=17 ttl=64 time=2.902 ms
64 bytes from 192.168.178.30: icmp_seq=18 ttl=64 time=5.543 ms
64 bytes from 192.168.178.30: icmp_seq=19 ttl=64 time=13.861 ms

--- private.schnider.io ping statistics ---
20 packets transmitted, 20 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.279/5.788/13.861/3.214 ms
```
Public
```bash
ping public.schnider.io -c 20
```
```log
PING public.schnider.io (34.117.87.109): 56 data bytes
64 bytes from 34.117.87.109: icmp_seq=0 ttl=115 time=40.382 ms
64 bytes from 34.117.87.109: icmp_seq=1 ttl=115 time=29.549 ms
64 bytes from 34.117.87.109: icmp_seq=2 ttl=115 time=27.857 ms
64 bytes from 34.117.87.109: icmp_seq=3 ttl=115 time=47.751 ms
64 bytes from 34.117.87.109: icmp_seq=4 ttl=115 time=28.147 ms
64 bytes from 34.117.87.109: icmp_seq=5 ttl=115 time=32.871 ms
64 bytes from 34.117.87.109: icmp_seq=6 ttl=115 time=28.862 ms
64 bytes from 34.117.87.109: icmp_seq=7 ttl=115 time=43.283 ms
64 bytes from 34.117.87.109: icmp_seq=8 ttl=115 time=45.658 ms
64 bytes from 34.117.87.109: icmp_seq=9 ttl=115 time=32.713 ms
64 bytes from 34.117.87.109: icmp_seq=10 ttl=115 time=30.303 ms
64 bytes from 34.117.87.109: icmp_seq=11 ttl=115 time=25.943 ms
64 bytes from 34.117.87.109: icmp_seq=12 ttl=115 time=52.985 ms
64 bytes from 34.117.87.109: icmp_seq=13 ttl=115 time=32.745 ms
64 bytes from 34.117.87.109: icmp_seq=14 ttl=115 time=67.588 ms
64 bytes from 34.117.87.109: icmp_seq=15 ttl=115 time=131.758 ms
64 bytes from 34.117.87.109: icmp_seq=16 ttl=115 time=135.997 ms
64 bytes from 34.117.87.109: icmp_seq=17 ttl=115 time=29.734 ms
64 bytes from 34.117.87.109: icmp_seq=18 ttl=115 time=46.638 ms
64 bytes from 34.117.87.109: icmp_seq=19 ttl=115 time=29.327 ms

--- public.schnider.io ping statistics ---
20 packets transmitted, 20 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 25.943/47.005/135.997/30.756 ms
```
iperf3
iperf3 measures maximum bandwidth, jitter, packet loss, and more. I also found a Docker image for it, so it was straightforward to deploy it in the public and private cloud via Helm and run the measurements.
Bandwidth

Private
```bash
iperf3 -c private.schnider.io
```
```log
Connecting to host private.schnider.io, port 5201
[ 7] local 192.168.178.36 port 61525 connected to 192.168.178.30 port 5201
[ ID] Interval Transfer Bitrate
[ 7] 0.00-1.00 sec 8.34 MBytes 69.9 Mbits/sec
[ 7] 1.00-2.00 sec 5.96 MBytes 50.0 Mbits/sec
[ 7] 2.00-3.00 sec 5.66 MBytes 47.5 Mbits/sec
[ 7] 3.00-4.00 sec 4.22 MBytes 35.4 Mbits/sec
[ 7] 4.00-5.00 sec 5.25 MBytes 44.1 Mbits/sec
[ 7] 5.00-6.00 sec 5.80 MBytes 48.6 Mbits/sec
[ 7] 6.00-7.00 sec 5.23 MBytes 43.9 Mbits/sec
[ 7] 7.00-8.00 sec 4.58 MBytes 38.5 Mbits/sec
[ 7] 8.00-9.00 sec 3.46 MBytes 29.0 Mbits/sec
[ 7] 9.00-10.00 sec 6.06 MBytes 50.8 Mbits/sec
[ ID] Interval           Transfer     Bitrate
[  7]   0.00-10.00  sec  54.6 MBytes  45.8 Mbits/sec                  sender
[  7]   0.00-10.16  sec  53.3 MBytes  44.0 Mbits/sec                  receiver

iperf Done.
```
Public
```bash
iperf3 -c 34.118.41.108
```
```log
Connecting to host 34.118.41.108, port 5201
[ 5] local 192.168.178.36 port 63319 connected to 34.118.41.108 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.58 MBytes 13.3 Mbits/sec
[ 5] 1.00-2.00 sec 1.62 MBytes 13.6 Mbits/sec
[ 5] 2.00-3.00 sec 1.83 MBytes 15.4 Mbits/sec
[ 5] 3.00-4.00 sec 1.83 MBytes 15.3 Mbits/sec
[ 5] 4.00-5.00 sec 1.71 MBytes 14.4 Mbits/sec
[ 5] 5.00-6.00 sec 2.01 MBytes 16.8 Mbits/sec
[ 5] 6.00-7.00 sec 1.95 MBytes 16.3 Mbits/sec
[ 5] 7.00-8.00 sec 1.84 MBytes 15.4 Mbits/sec
[ 5] 8.00-9.00 sec 1.87 MBytes 15.7 Mbits/sec
[ 5] 9.00-10.00 sec 2.08 MBytes 17.5 Mbits/sec
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  18.3 MBytes  15.4 Mbits/sec                  sender
[  5]   0.00-10.06  sec  18.1 MBytes  15.1 Mbits/sec                  receiver

iperf Done.
```
Jitter

Private
```bash
iperf3 -c private.schnider.io -u -b 1000M
```
```log
Connecting to host private.schnider.io, port 5201
[ 7] local 192.168.178.36 port 64022 connected to 192.168.178.30 port 5201
[ ID] Interval Transfer Bitrate Total Datagrams
[ 7] 0.00-1.00 sec 11.9 MBytes 99.8 Mbits/sec 59506
[ 7] 1.00-2.00 sec 10.4 MBytes 87.3 Mbits/sec 73036
[ 7] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec 0
[ 7] 3.00-4.00 sec 10.8 MBytes 90.4 Mbits/sec 65017
[ 7] 4.00-5.00 sec 11.1 MBytes 92.7 Mbits/sec 68317
[ 7] 5.00-6.00 sec 9.94 MBytes 83.4 Mbits/sec 30606
[ 7] 6.00-7.00 sec 554 KBytes 4.54 Mbits/sec 46185
[ 7] 7.00-8.00 sec 0.00 Bytes 0.00 bits/sec 0
[ 7] 8.00-9.00 sec 10.7 MBytes 89.4 Mbits/sec 73017
[ 7] 9.00-10.00 sec 9.15 MBytes 76.8 Mbits/sec 96490
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  7]   0.00-10.00  sec  74.4 MBytes  62.4 Mbits/sec  0.000 ms  0/512174 (0%)  sender
[  7]   0.00-10.35  sec  65.6 MBytes  53.1 Mbits/sec  0.626 ms  368866/416349 (89%)  receiver

iperf Done.
```
Public
bash
```log
```
OWAMP
This doesn't work currently, but most of what it would measure was already covered with iperf.
Chrome DevTools
Private

Public

Security
Experiments
Lessons Learned
- A lot of tools are needed even for a simple hybrid cloud setup
- Performance testing the network itself is either expensive (commercial providers) or very hard to do (open-source tools)
- A simpler application would have made a better playground
Owner
- Name: Fabian Ladenstein
- Login: fladens
- Kind: user
- Repositories: 1
- Profile: https://github.com/fladens
Citation (CITATION.cff)
cff-version: 1.0.0
message: "If you use this software, please cite it as below."
authors:
- family-names: Ladenstein
given-names: Fabian
title: "One conference"
version: 1.0.0
date-released: 2023-02-01
GitHub Events
Total
Last Year
Dependencies
- actions/checkout v3 composite
- docker/build-push-action v3 composite
- docker/login-action v2 composite
- docker/setup-buildx-action v2 composite
- docker/setup-qemu-action v2 composite
- nginx 1.23.3-alpine build
- node 19-alpine build
- node 19-alpine build
- node 19-alpine build
- node 19-alpine build
- node 19-alpine build
- node 19-alpine build
- ubuntu latest build
- 182 dependencies
- @vitejs/plugin-vue ^4.0.0 development
- eslint ^8.22.0 development
- eslint-plugin-vue ^9.3.0 development
- vite ^4.0.0 development
- @vuelidate/core ^2.0.0
- @vuelidate/validators ^2.0.0
- axios ^1.2.2
- lodash.debounce ^4.0.8
- pinia ^2.0.28
- primeflex ^3.3.0
- primeicons ^6.0.1
- primevue ^3.22.3
- vue ^3.2.45
- vue-router ^4.1.6
- @schnider94/app ^1.0.0
- @schnider94/database ^1.0.5
- @schnider94/jwt-middleware ^1.0.2
- @schnider94/models ^1.0.1
- @schnider94/server ^1.0.2
- express ^4.18.2
- passport ^0.4.1
- 225 dependencies
- @schnider94/app ^1.0.0
- @schnider94/database ^1.0.9
- @schnider94/jwt-middleware ^1.0.2
- @schnider94/models ^1.0.7
- @schnider94/server ^1.0.4
- express ^4.18.2
- passport ^0.4.1
- 225 dependencies
- @schnider94/app ^1.0.0
- @schnider94/database ^1.0.9
- @schnider94/jwt-middleware ^1.0.2
- @schnider94/models ^1.0.7
- @schnider94/server ^1.0.4
- express ^4.18.2
- passport ^0.4.1
- 169 dependencies
- @schnider94/app ^1.0.0
- @schnider94/database ^1.0.9
- @schnider94/models ^1.0.2
- amqplib ^0.10.3
- 253 dependencies
- @schnider94/app ^1.0.0
- @schnider94/database ^1.0.9
- @schnider94/jwt-middleware ^1.0.2
- @schnider94/models ^1.0.8
- @schnider94/server ^1.0.4
- express ^4.18.2
- jsonwebtoken ^8.5.1
- mongoose ^6.8.2
- passport ^0.4.1
- passport-local ^1.0.0
- @aws-crypto/ie11-detection 2.0.2
- @aws-crypto/sha256-browser 2.0.0
- @aws-crypto/sha256-js 2.0.0
- @aws-crypto/supports-web-crypto 2.0.2
- @aws-crypto/util 2.0.2
- @aws-sdk/abort-controller 3.226.0
- @aws-sdk/client-cognito-identity 3.245.0
- @aws-sdk/client-sso 3.245.0
- @aws-sdk/client-sso-oidc 3.245.0
- @aws-sdk/client-sts 3.245.0
- @aws-sdk/config-resolver 3.234.0
- @aws-sdk/credential-provider-cognito-identity 3.245.0
- @aws-sdk/credential-provider-env 3.226.0
- @aws-sdk/credential-provider-imds 3.226.0
- @aws-sdk/credential-provider-ini 3.245.0
- @aws-sdk/credential-provider-node 3.245.0
- @aws-sdk/credential-provider-process 3.226.0
- @aws-sdk/credential-provider-sso 3.245.0
- @aws-sdk/credential-provider-web-identity 3.226.0
- @aws-sdk/credential-providers 3.245.0
- @aws-sdk/fetch-http-handler 3.226.0
- @aws-sdk/hash-node 3.226.0
- @aws-sdk/invalid-dependency 3.226.0
- @aws-sdk/is-array-buffer 3.201.0
- @aws-sdk/middleware-content-length 3.226.0
- @aws-sdk/middleware-endpoint 3.226.0
- @aws-sdk/middleware-host-header 3.226.0
- @aws-sdk/middleware-logger 3.226.0
- @aws-sdk/middleware-recursion-detection 3.226.0
- @aws-sdk/middleware-retry 3.235.0
- @aws-sdk/middleware-sdk-sts 3.226.0
- @aws-sdk/middleware-serde 3.226.0
- @aws-sdk/middleware-signing 3.226.0
- @aws-sdk/middleware-stack 3.226.0
- @aws-sdk/middleware-user-agent 3.226.0
- @aws-sdk/node-config-provider 3.226.0
- @aws-sdk/node-http-handler 3.226.0
- @aws-sdk/property-provider 3.226.0
- @aws-sdk/protocol-http 3.226.0
- @aws-sdk/querystring-builder 3.226.0
- @aws-sdk/querystring-parser 3.226.0
- @aws-sdk/service-error-classification 3.229.0
- @aws-sdk/shared-ini-file-loader 3.226.0
- @aws-sdk/signature-v4 3.226.0
- @aws-sdk/smithy-client 3.234.0
- @aws-sdk/token-providers 3.245.0
- @aws-sdk/types 3.226.0
- @aws-sdk/url-parser 3.226.0
- @aws-sdk/util-base64 3.208.0
- @aws-sdk/util-body-length-browser 3.188.0
- @aws-sdk/util-body-length-node 3.208.0
- @aws-sdk/util-buffer-from 3.208.0
- @aws-sdk/util-config-provider 3.208.0
- @aws-sdk/util-defaults-mode-browser 3.234.0
- @aws-sdk/util-defaults-mode-node 3.234.0
- @aws-sdk/util-endpoints 3.245.0
- @aws-sdk/util-hex-encoding 3.201.0
- @aws-sdk/util-locate-window 3.208.0
- @aws-sdk/util-middleware 3.226.0
- @aws-sdk/util-retry 3.229.0
- @aws-sdk/util-uri-escape 3.201.0
- @aws-sdk/util-user-agent-browser 3.226.0
- @aws-sdk/util-user-agent-node 3.226.0
- @aws-sdk/util-utf8-browser 3.188.0
- @aws-sdk/util-utf8-node 3.208.0
- @types/node 18.11.18
- @types/webidl-conversions 7.0.0
- @types/whatwg-url 8.2.2
- base64-js 1.5.1
- bowser 2.11.0
- bson 4.7.1
- buffer 5.7.1
- debug 4.3.4
- fast-xml-parser 4.0.11
- ieee754 1.2.1
- ip 2.0.0
- kareem 2.5.1
- memory-pager 1.5.0
- mongodb 4.12.1
- mongodb-connection-string-url 2.6.0
- mongoose 6.8.3
- mpath 0.9.0
- mquery 4.0.3
- ms 2.1.2
- ms 2.1.3
- punycode 2.1.1
- saslprep 1.0.3
- sift 16.0.1
- smart-buffer 4.2.0
- socks 2.7.1
- sparse-bitfield 3.0.3
- strnum 1.0.5
- tr46 3.0.0
- tslib 1.14.1
- tslib 2.4.1
- uuid 8.3.2
- webidl-conversions 7.0.0
- whatwg-url 11.0.0
- mongoose ^6.8.2
- buffer-equal-constant-time 1.0.1
- ecdsa-sig-formatter 1.0.11
- jsonwebtoken 9.0.0
- jwa 1.4.1
- jws 3.2.2
- lodash 4.17.21
- lru-cache 6.0.0
- ms 2.1.3
- passport 0.4.1
- passport-jwt 4.0.1
- passport-strategy 1.0.0
- pause 0.0.1
- safe-buffer 5.2.1
- semver 7.3.8
- yallist 4.0.0
- passport ^0.4.1
- passport-jwt ^4.0.1
- 155 dependencies
- bcrypt ^5.1.0
- mongoose ^6.8.2
- accepts 1.3.8
- array-flatten 1.1.1
- body-parser 1.20.1
- bytes 3.1.2
- call-bind 1.0.2
- content-disposition 0.5.4
- content-type 1.0.4
- cookie 0.5.0
- cookie-signature 1.0.6
- cors 2.8.5
- debug 2.6.9
- depd 2.0.0
- destroy 1.2.0
- ee-first 1.1.1
- encodeurl 1.0.2
- escape-html 1.0.3
- etag 1.8.1
- express 4.18.2
- finalhandler 1.2.0
- forwarded 0.2.0
- fresh 0.5.2
- function-bind 1.1.1
- get-intrinsic 1.1.3
- has 1.0.3
- has-symbols 1.0.3
- http-errors 2.0.0
- iconv-lite 0.4.24
- inherits 2.0.4
- ipaddr.js 1.9.1
- media-typer 0.3.0
- merge-descriptors 1.0.1
- methods 1.1.2
- mime 1.6.0
- mime-db 1.52.0
- mime-types 2.1.35
- ms 2.0.0
- ms 2.1.3
- negotiator 0.6.3
- object-assign 4.1.1
- object-inspect 1.12.2
- on-finished 2.4.1
- parseurl 1.3.3
- path-to-regexp 0.1.7
- proxy-addr 2.0.7
- qs 6.11.0
- range-parser 1.2.1
- raw-body 2.5.1
- safe-buffer 5.2.1
- safer-buffer 2.1.2
- send 0.18.0
- serve-static 1.15.0
- setprototypeof 1.2.0
- side-channel 1.0.4
- statuses 2.0.1
- toidentifier 1.0.1
- type-is 1.6.18
- unpipe 1.0.0
- utils-merge 1.0.1
- vary 1.1.2
- body-parser ^1.19.0
- cors ^2.8.5
- express ^4.18.2