
juice-shop / multi-juicer


Host and manage multiple Juice Shop instances for security trainings and Capture The Flag events

License: Apache License 2.0

Dockerfile 1.37% JavaScript 78.23% HTML 1.08% Go 15.99% Mustache 2.34% Shell 1.01%
security juice-shop owasp capture-the-flag ctf-platform hacking kubernetes hacktoberfest

multi-juicer's Introduction

OWASP Juice Shop


The most trustworthy online shop out there. (@dschadow) — The best juice shop on the whole internet! (@shehackspurple) — Actually the most bug-free vulnerable application in existence! (@vanderaj) — First you 😂😂then you 😢 (@kramse) — But this doesn't have anything to do with juice. (@coderPatros' wife)

OWASP Juice Shop is probably the most modern and sophisticated insecure web application! It can be used in security trainings, awareness demos, CTFs and as a guinea pig for security tools! Juice Shop encompasses vulnerabilities from the entire OWASP Top Ten along with many other security flaws found in real-world applications!

Juice Shop Screenshot Slideshow

For a detailed introduction, full list of features and architecture overview please visit the official project page: https://owasp-juice.shop

Table of contents

Setup

You can find some less common installation variations in the Running OWASP Juice Shop documentation.

From Sources


  1. Install node.js
  2. Run git clone https://github.com/juice-shop/juice-shop.git --depth 1 (or clone your own fork of the repository)
  3. Go into the cloned folder with cd juice-shop
  4. Run npm install (only has to be done before first start or when you change the source code)
  5. Run npm start
  6. Browse to http://localhost:3000

Packaged Distributions


  1. Install a 64-bit node.js on your Windows, macOS or Linux machine
  2. Download juice-shop-<version>_<node-version>_<os>_x64.zip (or .tgz) attached to latest release
  3. Unpack and cd into the unpacked folder
  4. Run npm start
  5. Browse to http://localhost:3000

Each packaged distribution includes some binaries for sqlite3 and libxmljs bound to the OS and node.js version on which npm install was executed.

Docker Container


  1. Install Docker
  2. Run docker pull bkimminich/juice-shop
  3. Run docker run --rm -p 127.0.0.1:3000:3000 bkimminich/juice-shop
  4. Browse to http://localhost:3000 (on macOS and Windows browse to http://192.168.99.100:3000 if you are using docker-machine instead of the native docker installation)

Vagrant

  1. Install Vagrant and VirtualBox
  2. Run git clone https://github.com/juice-shop/juice-shop.git (or clone your own fork of the repository)
  3. Run cd vagrant && vagrant up
  4. Browse to 192.168.56.110

Amazon EC2 Instance

  1. In the EC2 sidenav select Instances and click Launch Instance
  2. In Step 1: Choose an Amazon Machine Image (AMI) choose an Amazon Linux AMI or Amazon Linux 2 AMI
  3. In Step 3: Configure Instance Details unfold Advanced Details and copy the script below into User Data
  4. In Step 6: Configure Security Group add a Rule that opens port 80 for HTTP
  5. Launch your instance
  6. Browse to your instance's public DNS
#!/bin/bash
yum update -y
yum install -y docker
service docker start
docker pull bkimminich/juice-shop
docker run -d -p 80:3000 bkimminich/juice-shop

Azure Container Instance

  1. Open and login (via az login) to your Azure CLI or login to the Azure Portal, open the CloudShell and then choose Bash (not PowerShell).
  2. Create a resource group by running az group create --name <group name> --location <location name, e.g. "centralus">
  3. Create a new container by running az container create --resource-group <group name> --name <container name> --image bkimminich/juice-shop --dns-name-label <dns name label> --ports 3000 --ip-address public
  4. Your container will be available at http://<dns name label>.<location name>.azurecontainer.io:3000

Google Compute Engine Instance

  1. Login to the Google Cloud Console and open Cloud Shell.
  2. Launch a new GCE instance based on the juice-shop container. Take note of the EXTERNAL_IP provided in the output.
gcloud compute instances create-with-container owasp-juice-shop-app --container-image bkimminich/juice-shop
  3. Create a firewall rule that allows inbound traffic to port 3000
gcloud compute firewall-rules create juice-rule --allow tcp:3000
  4. Your container is now running and available at http://<EXTERNAL_IP>:3000/

Heroku

  1. Sign up to Heroku and log in to your account
  2. Click the button below and follow the instructions

Deploy

If you have forked the Juice Shop repository on GitHub, the Deploy to Heroku button will deploy your forked version of the application.

Demo

Feel free to have a look at the latest version of OWASP Juice Shop: http://demo.owasp-juice.shop

This is a deployment-test and sneak-peek instance only! You are not supposed to use this instance for your own hacking endeavours! No guaranteed uptime! Guaranteed stern looks if you break it!

Documentation

Node.js version compatibility


OWASP Juice Shop officially supports the following versions of node.js, staying as close as possible to the official node.js LTS schedule. Docker images and packaged distributions are offered accordingly.

node.js | Supported | Tested | Packaged Distributions | Docker images from master | Docker images from develop
22.x    |           |        |                        |                           |
21.x    | ( ✔️ )    | ✔️     | Windows (x64), MacOS (x64), Linux (x64) |          |
20.x    | ✔️        | ✔️     | Windows (x64), MacOS (x64), Linux (x64) | latest (linux/amd64, linux/arm64) | snapshot (linux/amd64, linux/arm64)
20.6.0  | 🐛 angular/angular-cli#25782 | | | |
19.x    | ( ✔️ )    |        |                        |                           |
18.x    | ✔️        | ✔️     | Windows (x64), MacOS (x64), Linux (x64) |          |
<18.x   |           |        |                        |                           |

Juice Shop is automatically tested only on the latest .x minor version of each node.js version mentioned above! There is no guarantee that older minor node.js releases will always work with Juice Shop! Please make sure you stay up to date with your chosen version.

Troubleshooting


If you need help with the application setup please check our existing Troubleshooting guide. If this does not solve your issue please post your specific problem or question in the Gitter Chat where community members can best try to help you.

🛑 Please avoid opening GitHub issues for support requests or questions!

Official companion guide


OWASP Juice Shop comes with an official companion guide eBook. It will give you a complete overview of all vulnerabilities found in the application, including hints on how to spot and exploit them. In the appendix you will even find complete step-by-step solutions to every challenge. Extensive documentation of custom re-branding, CTF support, the trainer's guide and much more is also included.

Pwning OWASP Juice Shop is published under CC BY-NC-ND 4.0 and is available for free in PDF, Kindle and ePub format on LeanPub. You can also browse the full content online!

Pwning OWASP Juice Shop cover Pwning OWASP Juice Shop back cover

Contributing


We are always happy to get new contributors on board! Please check CONTRIBUTING.md to learn how to contribute to our codebase or the translation into different languages!

References

Did you write a blog post, magazine article or do a podcast about or mentioning OWASP Juice Shop? Or maybe you held or joined a conference talk or meetup session, a hacking workshop or public training where this project was mentioned?

Add it to our ever-growing list of REFERENCES.md by forking and opening a Pull Request!

Merchandise

  • On Spreadshirt.com and Spreadshirt.de you can get some swag (Shirts, Hoodies, Mugs) with the official OWASP Juice Shop logo
  • On StickerYou.com you can get variants of the OWASP Juice Shop logo as single stickers to decorate your laptop with. They can also print magnets, iron-ons, sticker sheets and temporary tattoos.

The most honorable way to get some stickers is to contribute to the project by fixing an issue, finding a serious bug or submitting a good idea for a new challenge!

We're also happy to supply you with stickers if you organize a meetup or conference talk where you use or talk about or hack the OWASP Juice Shop! Just contact the mailing list or the project leader to discuss your plans!

Donations

The OWASP Foundation gratefully accepts donations via Stripe. Projects such as Juice Shop can then request reimbursement for expenses from the Foundation. If you'd like to express your support of the Juice Shop project, please make sure to tick the "Publicly list me as a supporter of OWASP Juice Shop" checkbox on the donation form. You can find out more about donations and how they are used here:

https://pwning.owasp-juice.shop/part3/donations.html

Contributors

The OWASP Juice Shop core project team are:

For a list of all contributors to the OWASP Juice Shop please visit our HALL_OF_FAME.md.

Licensing


This program is free software: you can redistribute it and/or modify it under the terms of the MIT license. OWASP Juice Shop and any contributions are Copyright © by Bjoern Kimminich & the OWASP Juice Shop contributors 2014-2024.


multi-juicer's People

Contributors

adrianeriksen, bkimminich, blucas-accela, coffeemakingtoaster, dependabot[bot], dergut, fwijnholds, j12934, jonasbg, jvmdc, michaeleischer, netr0m, nickmalcolm, orangecola, pseudobeard, rseedorff, saymolet, scornelissen85, sharjeelaziz, skandix, stefan-schaermeli, stuebingerb, sydseter, troygerber, wurstbrot, zadjadr


multi-juicer's Issues

Install in non internet environment

Hi,

I am quite new to Juicy-CTF and have problems getting it to work in a no-internet environment. My cluster is a 2-node Kubernetes cluster that is not connected to the internet. I never used Helm before, so I just installed the helm rpm for RHEL 7.

Once the cluster is running, I could not find any resources on how to get Helm to work in my environment.

  1. I cloned this repo and ran the following
    #helm install juicy-ctf ./juicy-ctf-master/helm/juicy-ctf/values.yaml
  2. This created a few pods in the cluster:
NAME                             READY   STATUS      RESTARTS   AGE
cleanup-job-1577419200-n9dvn     0/1     Completed   0          55m
juice-balancer-5fb5d9c77-qbktt   1/1     Running     0          61m
juicy-ctf-redis-master-0         0/1     Pending     0          61m
juicy-ctf-redis-slave-0          0/1     Pending     0          61m
  3. The Redis pods remain in Pending, so on investigation:
LAST SEEN   TYPE      REASON             OBJECT                                                      MESSAGE
54m         Normal    Scheduled          pod/cleanup-job-1577419200-n9dvn                            Successfully assigned default/cleanup-job-1577419200-n9dvn to s0025abl5905
54m         Normal    Pulling            pod/cleanup-job-1577419200-n9dvn                            Pulling image "iteratec/juice-cleaner:latest"
54m         Normal    Pulled             pod/cleanup-job-1577419200-n9dvn                            Successfully pulled image "iteratec/juice-cleaner:latest"
54m         Normal    Created            pod/cleanup-job-1577419200-n9dvn                            Created container cleanup-job
54m         Normal    Started            pod/cleanup-job-1577419200-n9dvn                            Started container cleanup-job
54m         Normal    SuccessfulCreate   job/cleanup-job-1577419200                                  Created pod: cleanup-job-1577419200-n9dvn
54m         Normal    SuccessfulCreate   cronjob/cleanup-job                                         Created job cleanup-job-1577419200
54m         Normal    SawCompletedJob    cronjob/cleanup-job                                         Saw completed job: cleanup-job-1577419200, status: Complete
48s         Warning   FailedScheduling   pod/juicy-ctf-redis-master-0                                error while running "VolumeBinding" filter plugin for pod "juicy-ctf-redis-master-0": pod has unbound immediate PersistentVolumeClaims
48s         Warning   FailedScheduling   pod/juicy-ctf-redis-slave-0                                 error while running "VolumeBinding" filter plugin for pod "juicy-ctf-redis-slave-0": pod has unbound immediate PersistentVolumeClaims
12s         Normal    FailedBinding      persistentvolumeclaim/redis-data-juicy-ctf-redis-master-0   no persistent volumes available for this claim and no storage class is set
12s         Normal    FailedBinding      persistentvolumeclaim/redis-data-juicy-ctf-redis-slave-0    no persistent volumes available for this claim and no storage class is set

I do not understand the message, so I just deleted the juicy-ctf release:
helm delete juicy-ctf
I assumed the problem could be with the persistent volumes, so I created them myself in redis-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-juicy-ctf-redis-master-0
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/appsec/redis-cluster-master-0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-juicy-ctf-redis-slave-0
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/appsec/redis-cluster-slave-0

And applied the above
kubectl apply -f redis-pv.yaml
After which I created the claims:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistentvolumeclaim/redis-data-juicy-ctf-redis-master-0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistentvolumeclaim/redis-data-juicy-ctf-redis-slave-0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

When I check the created storage:

NAME                                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
redis-data-juicy-ctf-redis-master-0   20Gi       RWO            Retain           Bound    default/juicy-ctf-redis-master-0   manual                  53m
redis-data-juicy-ctf-redis-slave-0    20Gi       RWO            Retain           Bound    default/juicy-ctf-redis-slave-0    manual                  53m

After this, if I install juicy-ctf, I still get the same error. I am not sure how to fix this. Can you please help?
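The FailedBinding events say that no persistent volumes are available and that no storage class is set, so the redis claims can only bind to PersistentVolumes that also have no storage class. A minimal sketch of such PVs (names, sizes and hostPath locations are just examples, not part of the chart):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-master-pv
spec:
  # no storageClassName, so claims that set no storage class can bind to it
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/appsec/redis-master
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-slave-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/appsec/redis-slave
EOF

Alternatively, depending on the chart version, redis persistence may be switchable off entirely via the chart's values (check the redis section of its values.yaml), which avoids the PV requirement on air-gapped clusters.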

More ways to deploy

I am trying to set juicy-ctf up on a local on-premise Ubuntu server. Could there be some sort of instructions on how to set this up on pure Ubuntu, using just one computer (virtualization or two docker instances, perhaps)?

This would probably need to include instructions on how to install kubernetes and helm to best fit this method.

Also, as a possibility, it would be nice if a docker image were added as a deployment method (with docker as a wrapper so that everything is contained inside).

Use kubernetes annotations to store instance information instead of redis

MultiJuicer currently depends on redis as a datasource.
MultiJuicer stores 3 pieces of data related to every instance in redis:

  1. Passcode: Bcrypt hash of the instance's passcode
  2. LastRequest: Timestamp of the last request proxied to the instance. Updated at most every 10 seconds to reduce load.
  3. ContinueCode: JuiceShop ContinueCode to back up the current instance progress.

All of this data could be stored directly on the instance, as annotations on the instance's kubernetes deployment or directly on the pod. This would make it possible to remove redis entirely, which would make the setup easier (no persistent volumes required) and remove a potential point of failure.
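A hedged sketch of what that could look like from the outside, using the annotation keys and deployment naming that appear in the examples further down this page (the exact keys are up to the implementation):

# write the per-instance data as annotations on the team's deployment
kubectl annotate deployment t-team42-juiceshop --overwrite \
  multi-juicer.iteratec.dev/passcode='<bcrypt hash>' \
  multi-juicer.iteratec.dev/lastRequest="$(date +%s)000" \
  multi-juicer.iteratec.dev/continueCode='<continue code>'

# read a single value back (dots in the annotation key are escaped for jsonpath)
kubectl get deployment t-team42-juiceshop \
  -o jsonpath='{.metadata.annotations.multi-juicer\.iteratec\.dev/continueCode}'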

OpenShift 4 instance issue

Hi team,

I've installed multi-juicer on a corporate OpenShift 4 instance from DockerHub and everything looks fine, except that it doesn't create new teams. When I check the pod's log I see the following messages:

time="2022-05-25T14:59:03.974Z" level="info" msg="JuiceBalancer listening on port 3000!"
time="2022-05-25T14:59:22.404Z" level="error" msg="Encountered unknown error while checking for existing JuiceShop deployment"
time="2022-05-25T14:59:22.404Z" level="error" msg="deployments.apps "t-team1-juiceshop" is forbidden: User "system:serviceaccount:cp-429991:default" cannot get resource "deployments" in API group "apps" in the namespace "default""

Is there any workaround to fix it? I can create and bind roles only within the namespace of the project.
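The error suggests two things: the balancer is looking for deployments in the default namespace rather than in your project, and its service account lacks RBAC permissions there. If the balancer can be pointed at your own namespace, a namespace-scoped Role and RoleBinding along these lines might be enough; the resources and verbs are assumptions, adjust them to what the balancer actually needs:

# grant the balancer's service account access to deployments/services in the project namespace
kubectl create role juice-balancer \
  --verb=get,list,watch,create,delete,patch,update \
  --resource=deployments,services \
  -n cp-429991

kubectl create rolebinding juice-balancer \
  --role=juice-balancer \
  --serviceaccount=cp-429991:default \
  -n cp-429991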

Multi Juicer in an offline network

MultiJuicer stops working in an offline network environment. The following steps were followed to install it:

sudo apt install docker.io docker-compose -y
sudo snap install kubectl --classic
sudo curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
sudo curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start
kubectl cluster-info
helm repo add multi-juicer https://iteratec.github.io/multi-juicer/
helm install multi-juicer multi-juicer/multi-juicer
kubectl get pods
wget https://raw.githubusercontent.com/iteratec/multi-juicer/main/guides/k8s/k8s-juice-service.yaml
kubectl apply -f k8s-juice-service.yaml
kubectl port-forward --address 0.0.0.0 service/juice-balancer 3000:3000

When the system is brought into the isolated network, started with kubectl port-forward --address 0.0.0.0 service/juice-balancer 3000:3000 and a client attempts to connect, here is the result:

(Screenshot from 2023-05-20 showing the failed client request)

Any ideas?
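One hedged approach for air-gapped installs with minikube: pull every image the chart needs while still online and load them into the cluster, so no registry access is required later (the balancer image name is an assumption; the other names appear elsewhere on this page and may differ per chart version):

# pull the images while the machine still has internet access
docker pull bkimminich/juice-shop
docker pull iteratec/juice-balancer            # assumed image name, check the chart's values.yaml
docker pull iteratec/juice-progress-watchdog
docker pull iteratec/juice-cleaner

# load them into the minikube node so pods can start without pulling
minikube image load bkimminich/juice-shop
minikube image load iteratec/juice-balancer
minikube image load iteratec/juice-progress-watchdog
minikube image load iteratec/juice-cleaner

The image pull policy for the instances may also need to be set to IfNotPresent via the chart's values so kubernetes does not try to re-pull :latest tags.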

Configuration Problems with Multi-Juicer

Hi,

I'm trying to edit the values.yaml to hide the Challenge Hints and the Hacking Instructor and to raise the Grace Period to 10d. Therefore I removed the # in front of showChallengeHints and showHackingInstructor in the values.yaml file. After that I tried the installation with your instruction: helm install -f values.yaml multi-juicer ./multi-juicer/helm/multi-juicer/
This command did not work at all, because I get the following error: Error: path "./multi-juicer/helm/multi-juicer/" not found
The only way this command works is if I change it to: helm install -f values.yaml multi-juicer multi-juicer/multi-juicer
If I do this, I won't be able to open a JuiceShop after joining a team. The error that Kubernetes shows me is:

Readiness probe failed: Get "http://10.1.13.73:3000/rest/admin/application-version": dial tcp 10.1.13.73:3000: connect: connection refused
Back-off restarting failed container

(If I install it without changing the values.yaml file, it is working)

After that I tried to change the values using the --set="" option. My command was: helm install multi-juicer multi-juicer/multi-juicer --set="juiceShopCleanup.gracePeriod=10d" --set="juiceShop.config.application.showChallengeHints=false" --set="juiceShop.config.application.showHackingInstructor=false"
If I try this command, I get this error:

 coalesce.go:199: warning: destination for config is a table. Ignoring non-table value application:
  logo: JuiceShopCTF_Logo.png
  favicon: favicon_ctf.ico
  # showChallengeHints: false
  showVersionNumber: false
  # showHackingInstructor: false
  showGitHubLinks: false
# ctf:
  # showFlagsInNotifications: true
Error: template: multi-juicer/templates/juice-shop-config-map.yaml:9:37: executing "multi-juicer/templates/juice-shop-config-map.yaml" at <4>: wrong type for value                       ; expected string; got map[string]interface {}

Can you help me to solve this problem?

Regards
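One hedged workaround for the --set problem described above: the template error ("expected string; got map") suggests the chart expects juiceShop.config as a single multi-line YAML string, which --set cannot produce. Overriding the whole block in a small values file avoids both the coalesce warning and the type error (key structure assumed from the chart's default values.yaml):

cat > my-values.yaml <<'EOF'
juiceShopCleanup:
  gracePeriod: 10d
juiceShop:
  config: |
    application:
      logo: JuiceShopCTF_Logo.png
      favicon: favicon_ctf.ico
      showChallengeHints: false
      showVersionNumber: false
      showHackingInstructor: false
      showGitHubLinks: false
EOF

helm install -f my-values.yaml multi-juicer multi-juicer/multi-juicer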

Backup CodingChallenges Progress per Instance

The new CodingChallenges (both "FindIt" and "FixIt" challenges) work in the current MultiJuicer version but are not backed up by the progress-watchdog like normal JuiceShop challenges.

Short recap: the challenge progress is currently "backed up" into the Deployment's annotations; a short example (the continue code has been changed so that nobody steals it 🦹):

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    multi-juicer.iteratec.dev/challengesSolved: "66"
    multi-juicer.iteratec.dev/continueCode: Q2HBuBhDtJcqIXToCnF8iPSjUKurhwt2IyTJszinfyHxuNhRTaCwFMi5fPSzUlH8Ru55hjRtVYcb4TgjF2ZiqjfVgUYWHjruv2cNYIrQTQoCzmsmqFgDSNjU4ZHo4HwmtQMczpTJvC8rslJi6KfQ3SMLUbmHOBhnnIoZsjE
    multi-juicer.iteratec.dev/lastRequest: "1633681195273"
    multi-juicer.iteratec.dev/lastRequestReadable: Fri Oct 08 2021 08:19:55 GMT+0000
      (Coordinated Universal Time)
    multi-juicer.iteratec.dev/passcode: $2a$12$XcQFCciAaJzLEXQH48qPxO3a8HMAXZ.a3iJ2mMeZ29mKI38CiGXoe
  creationTimestamp: "2021-10-08T08:07:41Z"
  generation: 1
  labels:
    app: juice-shop
    deployment-context: mj
    team: team42
  name: t-team42-juiceshop
  namespace: default
spec:
  ...

This mechanism should be extended to also back up the values used for the "continueCodeFindIt" and "continueCodeFixIt" cookies:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    multi-juicer.iteratec.dev/challengesSolved: "66"
    multi-juicer.iteratec.dev/continueCode: Q2HBuBhDtJcqIXToCnF8iPSjUKurhwt2IyTJszinfyHxuNhRTaCwFMi5fPSzUlH8Ru55hjRtVYcb4TgjF2ZiqjfVgUYWHjruv2cNYIrQTQoCzmsmqFgDSNjU4ZHo4HwmtQMczpTJvC8rslJi6KfQ3SMLUbmHOBhnnIoZsjE
    multi-juicer.iteratec.dev/continueCodeFindIt: continueCodeFindItHere
    multi-juicer.iteratec.dev/continueCodeFixIt: continueCodeFixItHere
    multi-juicer.iteratec.dev/lastRequest: "1633681195273"
    multi-juicer.iteratec.dev/lastRequestReadable: Fri Oct 08 2021 08:19:55 GMT+0000
      (Coordinated Universal Time)
    multi-juicer.iteratec.dev/passcode: $2a$12$XcQFCciAaJzLEXQH48qPxO3a8HMAXZ.a3iJ2mMeZ29mKI38CiGXoe
  creationTimestamp: "2021-10-08T08:07:41Z"
  generation: 1
  labels:
    app: juice-shop
    deployment-context: mj
    team: team42
  name: t-team42-juiceshop
  namespace: default
spec:
  ...

Integrate a CTF Platform

Integrate a CTF Platform (e.g. CTFd or FBCTF) directly.

This setup should include:

  • Starting up the CTF Platform via helm

Optional but really nice to have:

  • Automatically importing the challenges into the CTF Platform, e.g. by using juice-shop-ctf as a library to create the import file and then importing it directly via an init container / whatever into the CTF Platform
  • Automatically submit flags to the CTF Platform
  • Automatically create users for the CTF Platform when they create a team in the balancer / sync teams between the balancer and the CTF Platform
  • Proxy requests to the CTF Platform so that both JuiceShop and the ScoreBoard can run on the same URL

MultiJuicer doesn't run on microk8s without ClusterDNS enabled

Hi,

I have the problem that I can't access a Juice Shop instance. I installed Multi-Juicer with helm. After setting up port-forwarding in my Kubernetes cluster, I can open the start page on port 3000 of the balancer. There I can create a new team. After that I click on "Start Hacking" and the website begins to load and load and load... The Juice Shop won't open and I am stuck on a page that keeps loading forever.

Here are the logs from my pods:

Juice-Shop Logs:

> [email protected] start /juice-shop
> node app

info: All dependencies in ./package.json are satisfied (OK)
info: Chatbot training data botDefaultTrainingData.json validated (OK)
info: Detected Node.js version v12.18.3 (OK)
info: Detected OS linux (OK)
info: Detected CPU x64 (OK)
info: Required file index.html is present (OK)
info: Required file styles.css is present (OK)
info: Required file main-es2018.js is present (OK)
info: Required file tutorial-es2018.js is present (OK)
info: Required file polyfills-es2018.js is present (OK)
info: Required file runtime-es2018.js is present (OK)
info: Required file vendor-es2018.js is present (OK)
info: Required file main-es5.js is present (OK)
info: Required file tutorial-es5.js is present (OK)
info: Required file polyfills-es5.js is present (OK)
info: Required file runtime-es5.js is present (OK)
info: Required file vendor-es5.js is present (OK)
info: Configuration multi-juicer validated (OK)
info: Port 3000 is available (OK)
info: Server listening on port 3000

Juice-Balancer Logs:

time="2020-12-15T13:39:20.768Z" level="info" msg="JuiceBalancer listening on port 3000!"
time="2020-12-15T13:46:10.266Z" level="info" msg="Team test doesn't have a JuiceShop deployment yet"
time="2020-12-15T13:46:10.280Z" level="info" msg="Reached 0/10 instances"
time="2020-12-15T13:46:11.402Z" level="info" msg="Creating JuiceShop Deployment for team \"test""
time="2020-12-15T13:46:11.532Z" level="info" msg="Created JuiceShop Deployment for team \"test""
time="2020-12-15T13:46:11.760Z" level="info" msg="Awaiting readiness of JuiceShop Deployment for team \"test""
time="2020-12-15T13:46:54.404Z" level="info" msg="JuiceShop Deployment for team \"test" ready"
time="2020-12-15T13:47:21.999Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:49:22.419Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:50:10.444Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:51:23.126Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:52:11.072Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:53:23.762Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:54:11.615Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:55:24.377Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:56:12.203Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:57:25.183Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:58:12.807Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:59:25.936Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:00:13.754Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:01:26.806Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:02:14.442Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:03:27.412Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:04:15.032Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:05:28.143Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:06:15.750Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:08:14.131Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:08:16.503Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"

I already updated the helm repo and reinstalled but it won't work.
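As the issue title hints, the ENOTFOUND proxy errors indicate that the balancer cannot resolve the per-team service names, i.e. cluster DNS is missing. On microk8s the DNS addon is not enabled by default; a minimal sketch of enabling and verifying it (the busybox test pod is just an illustration):

# enable CoreDNS on microk8s
microk8s enable dns

# verify that service names resolve from inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup juice-balancer.default.svc.cluster.local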

Score Board loading stuck from null pointer error

  1. Launch MultiJuicer v3.3.0
  2. Create a new instance
  3. Visit the /#/score-board page of that instance
  4. The loading animation is shown endlessly while the console spills out:
vendor-es2015.js:1 ERROR TypeError: Cannot read property 'showFlagsInNotifications' of null
    at h._next (main-es2015.js:formatted:7467)
    at h.__tryOrUnsub (vendor-es2015.js:1)
    at h.next (vendor-es2015.js:1)
    at c._next (vendor-es2015.js:1)
    at c.next (vendor-es2015.js:1)
    at a._next (vendor-es2015.js:1)
    at a.next (vendor-es2015.js:1)
    at a._next (vendor-es2015.js:1)
    at a.next (vendor-es2015.js:1)
    at a._next (vendor-es2015.js:1)
vn @ vendor-es2015.js:1
handleError @ vendor-es2015.js:1
next @ vendor-es2015.js:1
i @ vendor-es2015.js:1
__tryOrUnsub @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
emit @ vendor-es2015.js:1
(anonymous) @ vendor-es2015.js:1
invoke @ polyfills-es2015.js:1
run @ polyfills-es2015.js:1
runOutsideAngular @ vendor-es2015.js:1
onHandleError @ vendor-es2015.js:1
handleError @ polyfills-es2015.js:1
runTask @ polyfills-es2015.js:1
invokeTask @ polyfills-es2015.js:1
invoke @ polyfills-es2015.js:1
n.args.<computed> @ polyfills-es2015.js:1
setTimeout (async)
a @ polyfills-es2015.js:1
scheduleTask @ polyfills-es2015.js:1
onScheduleTask @ polyfills-es2015.js:1
scheduleTask @ polyfills-es2015.js:1
scheduleTask @ polyfills-es2015.js:1
scheduleMacroTask @ polyfills-es2015.js:1
u @ polyfills-es2015.js:1
(anonymous) @ polyfills-es2015.js:1
i.<computed> @ polyfills-es2015.js:1
i @ vendor-es2015.js:1
__tryOrUnsub @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
notifyNext @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
a @ vendor-es2015.js:1
invokeTask @ polyfills-es2015.js:1
onInvokeTask @ vendor-es2015.js:1
invokeTask @ polyfills-es2015.js:1
runTask @ polyfills-es2015.js:1
invokeTask @ polyfills-es2015.js:1
f @ polyfills-es2015.js:1
p @ polyfills-es2015.js:1

This is the line where the null pointer happens in Juice Shop:

this.allowRepeatNotifications = t.challenges.showSolvedNotifications && t.ctf.showFlagsInNotifications

I think this might happen due to the way the values.yaml overwrites the configuration by default: the ctf property is probably null after this config is applied, overwriting the default settings of Juice Shop itself. The fix is probably to just also comment out the line with ctf:, because then I'd expect Juice Shop's own defaults to load.

I'll send a PR with that change, but I can't really test it due to Kubernetes incompetence... :-D

Team passcode reset

We deployed multi-juicer as part of an internal security hackathon and it was great 👍

I noticed that a lot of the teams forgot to write down their passcode when creating their instance. Later on, teams wanted to switch browsers, laptops etc. and couldn't because the passcode was gone.

I think it would be very helpful to add the functionality to reset the passcode either in the admin interface or next to the team display card
https://github.com/iteratec/multi-juicer/blob/9dd8ceb7c28f35d66d150d9a2352f9f130241a6c/juice-balancer/ui/src/pages/JoinPage.js#L66-L68

Last Login IP Not working correctly

The last login IP will show the IP of the LoadBalancer, not the IP of the user... 😳

Warning Spoilers:

The challenge to override the Last Login IP will most likely also not work in most cloud setups, as the initial cloud load balancer will probably strip away the X-Forwarded-For headers set by the user.

Public access deployed on Azure

I am new here; I followed the steps to deploy multi-juicer on AKS and tried port forwarding to access it from my local machine. Unfortunately it doesn't work. Do I need to change anything on the Azure side to get public access?

Appreciate your help and thank you in advance

Regards,
Karthik

Improve team name validation and handling

Validation currently allows uppercase letters, but the endpoint fails as uppercase letters aren't supported in deployment / service names.

Maybe support more characters in the team name; since we can't save the full name in the labels, a reduced version could be used in the deployment name.

K8s autoscaling - balancer timeouts

I am running multi-juicer on Google Kubernetes Engine (GKE) and have spotted a bug when running large events.

Spinning up a new container of juice shop on a node that has capacity takes around 22s. But, when a node is at full capacity and you spin up a new container, it takes about 1m30s for the new node to become available and the container to launch. However, in that time, the balancer times out with:

GET https://training.test.appsec.tools.bbc.co.uk/balancer/teams/7tguy/wait-till-ready 502
Failed to wait for deployment readiness

You can still log out of the balancer and log back in to your team with your password, but you need to know that you need to do that, and in large team events, the unlucky few who hit this bug don't know that they need to do that.

To replicate:

  1. Operate a GKE cluster that is sized appropriately for multi-juicer i.e. the default node pool is running with very little spare CPU capacity.
  2. Launch another multi-juicer team (juice-shop container); this will force the cluster to autoscale and add a new node.
  3. Looking at the multi-juicer balancer with dev tools open, errors will be reported after about 1m.
  4. Watch the cluster resources page in GKE. The new node and associated juice-shop container should take about 1m30s to become operational.
  5. Back to the multi-juicer balancer with dev tools, the page will fail to refresh/reload and will simply fail at the page saying "Starting a new Juice Shop Instance" with the spinner spinning indefinitely.
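Until the balancer tolerates longer readiness waits, one hedged mitigation is to size the node pool up before the event so new team instances land on nodes that already exist instead of waiting for the autoscaler (cluster, pool, zone and size are placeholders):

gcloud container clusters resize training-cluster \
  --node-pool default-pool \
  --num-nodes 10 \
  --zone europe-west2-a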

Isolate JuiceShop Instances from each other using NetworkPolicies

Currently a user could use RCE or SSRF vulnerabilities to connect to JuiceShop instances of other users.

This would kind of be an awesome challenge in itself 😅
Like: "Steal the challenge progress from another team"

But as we (currently 😉) don't have the possibility to add new challenges at run time, it would probably be best to prohibit any traffic coming from a JuiceShop pod to other JuiceShop pods via k8s NetworkPolicies. It might even work to prevent any cluster-internal traffic from the JuiceShop pods; this would have to be tested to ensure that it doesn't cause trouble with the juice-balancer.
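A hedged sketch of such a policy, using the app: juice-shop label that the instance deployments carry (see the manifests earlier on this page); the app: juice-balancer selector for the allowed source is an assumption about the balancer's pod labels:

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: juice-shop-isolation
spec:
  # applies to every JuiceShop instance pod
  podSelector:
    matchLabels:
      app: juice-shop
  policyTypes:
    - Ingress
  ingress:
    # only the balancer may talk to the instances; other JuiceShop pods are blocked
    - from:
        - podSelector:
            matchLabels:
              app: juice-balancer
EOF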

Handle proxy errors properly

When an instance has been deleted because of inactivity and the user visits the app, they will get a proxy error.

504 Gateway Time-out
The server didn't respond in time.

This isn't really helpful. It would be better to redirect the user to the balancer page with a helpful message and give them the ability to recreate the instance.

Progress Watchdog, hardcoded default Namespace

https://github.com/iteratec/multi-juicer/blob/master/progress-watchdog/main.go#L78 hardcodes the "default" namespace, which does not work if the chart is deployed to any namespace other than "default".

Therefore I also see permission errors (as the ServiceAccount, Role and RoleBinding are in another namespace):

panic: deployments.apps is forbidden: User "system:serviceaccount:juicy-ctf:progress-watchdog" cannot list resource "deployments" in API group "apps" in the namespace "default"
goroutine 1 [running]:
main.createProgressUpdateJobs(0xc000180420, 0xc000271760)
/src/main.go:80 +0x5da
main.main()
/src/main.go:66 +0x2ff

SecurityContext should support runAsUser

I have an issue with my PodSecurityPolicy specifying runAsNonRoot as true.

The JuiceBalancer already uses the non-root user app but Kubernetes needs a numeric id in order to verify that it is not the root user:

Error: container has runAsNonRoot and image has non-numeric user (app), cannot verify user is non-root

For this to work, the pod securityContext needs to specify runAsUser with the according user id of app.

I would love a change to either:

  1. include the numeric id of the app user in the template:
-- helm/multi-juicer/templates/juice-balancer-deployment.yaml

securityContext:
  runAsUser: 100
  runAsGroup: 101
  2. or include a template variable for the pod securityContext, so that users can modify it themselves:
-- helm/multi-juicer/templates/juice-balancer-deployment.yaml

{{- if .Values.balancer.securityContext }}
securityContext:
  {{ toYaml .Values.balancer.securityContext | indent 8 }}
{{- end }}
-- helm/multi-juicer/values.yaml

balancer:
  securityContext: {}

Openshift instance issue

Hi Team,

I have deployed the application in our enterprise OpenShift and I am able to log in as admin. But when I try to create a new team, I get "Internal Server Error". I have already tried to reproduce the issue by deploying it in different public cloud environments; the issue is the same everywhere. Kindly help.

Also, it looks like the repo https://iteratec.github.io/multi-juicer/ doesn't exist, please check.

Regards,
Sashi

Run End2End test inside kubernetes

Using GitHub Actions we could run our E2E tests against a proper kubernetes cluster created using tools like kind.

Ideally this would run the tests against a couple of k8s versions (the last 4 maybe?). When run in parallel this shouldn't be too slow.

The E2E tests are currently a bit neglected and don't pass properly 😞, partly because they are not executed automatically.
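A hedged sketch of what such a CI job could run; the kind and helm commands are standard, the chart path matches the repository layout referenced elsewhere on this page, and the final test command is a placeholder for whatever the E2E suite actually uses:

# throwaway cluster for one kubernetes version
kind create cluster --name multi-juicer-e2e --image kindest/node:v1.27.3

# install the chart from the working copy
helm install multi-juicer ./helm/multi-juicer/

# wait for the balancer, expose it locally, then run the E2E suite against it
kubectl rollout status deployment/juice-balancer --timeout=300s
kubectl port-forward service/juice-balancer 3000:3000 &
npm run test:e2e   # placeholder for the real E2E test command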

connectionCache

The checkIfInstanceIsUp function in proxy.js uses req.cleanedTeamname to check for the team in the cache, while the updateLastConnectTimestamp function uses req.teamname to add the team to the cache. I added some logging and noticed that checkIfInstanceIsUp never finds the team in the cache and always calls getJuiceShopInstanceForTeamname(teamname), which adds quite a bit of time to every request. updateLastConnectTimestamp seems to add t-teamname while checkIfInstanceIsUp checks just for teamname. Both functions should use the same version of the team name for the cache.

Add "reset team passcodes" button to admin page

It would be nice if the token for a team on multi-juicer could be stored in the metadata of the team's pod. Then, if everyone in the team forgot their team passcode, they could ask the admins of the cluster to recover it by checking the metadata of the team pod.

Like one can for the admin password, as seen in the attached picture, but with the option to attach the team passcode to its pod.


After click on "Start Hacking" only forwarded to the start page (with new team-name)

Hello,

I installed the latest version on a Kubernetes cluster. External accessibility with the help of the load balancer is not a problem at all.

As soon as I enter a team name, I get to the page with the access code and a message that the shop is starting up. This takes about 10-15 seconds, then the "Start Hacking" button for the supposed juice shop appears. As soon as I click on it, I am taken back to the start page, where I can select a team name again.

If I use the same team name again, it correctly asks for the access code. But even then I only land on the start page.

If I use the "Admin" team, I get to the admin overview. Here I am told that there are no teams yet.

In Kubernetes, I see the team's matching pod.

What could be the reason?

New instances are failing with "multi-user-redis"

This used to work fine but now I am seeing this issue. Any ideas?

Events:
  Type     Reason     Age                From                   Message
  ----     ------     ----               ----                   -------
  Normal   Scheduled  36s                default-scheduler      Successfully assigned default/t-val-juiceshop-79b6488494-bftsr to s0025abl5905
  Normal   Pulled     35s                kubelet, s0025abl5905  Container image "bkimminich/juice-shop:latest" already present on machine
  Normal   Created    35s                kubelet, s0025abl5905  Created container juice-shop
  Normal   Started    34s                kubelet, s0025abl5905  Started container juice-shop
  Normal   Pulling    15s (x3 over 34s)  kubelet, s0025abl5905  Pulling image "iteratec/juice-progress-watchdog"
  Normal   Pulled     12s (x3 over 31s)  kubelet, s0025abl5905  Successfully pulled image "iteratec/juice-progress-watchdog"
  Warning  Failed     12s (x3 over 31s)  kubelet, s0025abl5905  Error: secret "multi-juicer-redis" not found
  Warning  Unhealthy  9s (x11 over 29s)  kubelet, s0025abl5905  Readiness probe failed: Get http://10.244.1.121:3000/rest/admin/application-version: dial tcp 10.244.1.121:3000: connect: connection refused
[chandv3_adm@sdcnjc ~]$ curl http://10.244.1.121:3000/rest/admin/application-version
{"version":"9.3.1"}

Application Level Prometheus Metrics and custom Grafana Dashboard

Add prometheus metrics for application-level information like:

  • Number of registered teams
  • Number of logins (successful, failed)
  • Number of deleted instances
  • Number of challenges solved (Requires #27)
  • Http Response Codes (per path)[for Balancer and Proxied Requests]
  • Http Request Latency from JuiceShop (per path)[for Balancer and Proxied Requests]
  • Number of requests proxied to the instances

The metrics endpoint should not be located at the usual /metrics but at /balancer/metrics, so that potential new JuiceShop Challenges (juice-shop/juice-shop#1275) are not hindered by this 😉.

An option should be added to the Helm Chart to enable / disable the endpoint.

The Helm Chart should provide an option to deploy a prometheus-operator ServiceMonitor for the /balancer/metrics endpoint.

The /balancer/metrics endpoint should also be secured using http basic auth. The credentials for it should be stored in a kubernetes secret to make them accessible to both the balancer and the ServiceMonitor.

To provide a baseline dashboard for the new metrics a grafana dashboard should be created and placed inside a specially labelled ConfigMap which can be picked up by the Grafana Sidecar: https://github.com/helm/charts/tree/master/stable/grafana#import-dashboards
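For the prometheus-operator part, a hedged sketch of the ServiceMonitor this could boil down to, scraping /balancer/metrics with basic auth from a shared secret; the service labels, port name and secret keys are assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: juice-balancer
  labels:
    release: prometheus-operator   # whatever label your Prometheus selects ServiceMonitors by
spec:
  selector:
    matchLabels:
      app: juice-balancer
  endpoints:
    - port: http
      path: /balancer/metrics
      basicAuth:
        username:
          name: juice-balancer-metrics
          key: username
        password:
          name: juice-balancer-metrics
          key: password
EOF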

Publish MultiJuicer Terraform Modules for individual cloud providers

Terraform allows distributing reusable modules which set up cloud infrastructure. This could be used to provide "ready to go" MultiJuicer setups for different cloud providers which can be installed very easily.

These Modules should setup:

  1. set up kubernetes
  2. install the MultiJuicer helm chart
  3. install all components required to direct traffic to the MultiJuicer balancer.

To publish the modules we'd probably have to set up individual repositories per cloud provider.
https://www.terraform.io/docs/modules/index.html

Issues when trying to customize shop with values.yml

Hi there,

I'm trying to build a custom shop look for an internal CTF using juice shop. I've been able to modify the shop title and logo by changing them in values.yml, but when I try to add the "products" section to it, the juice shop instance will not load.

I'm using multi-juicer with kubernetes with the following setup:

minikube start
helm install multi-juicer multi-juicer/multi-juicer -f multi-juicer/helm/multi-juicer/test.yml

I've also tried using one of the custom YAML configs provided in the juice shop guide (https://pwning.owasp-juice.shop/part1/customization.html) (I tried the Mozilla one), but it just loads the default juice shop look.

Any help would be appreciated.
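A hedged guess at why the instance stops loading: the products section probably has to live inside the chart's own config block (the juiceShop.config structure referenced in other issues on this page) rather than at the top level of values.yml, and a syntax error there makes the generated config invalid. A sketch, with made-up example values:

cat > ctf-values.yaml <<'EOF'
juiceShop:
  config: |
    application:
      name: ACME CTF Shop
      logo: https://example.com/acme-logo.png
    products:
      - name: Example Juice
        description: Only here to show that the products override loads
        price: 2.99
EOF

helm upgrade --install multi-juicer multi-juicer/multi-juicer -f ctf-values.yaml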

Template variables could not be initialized: Datasource named Prometheus was not found

Hi,

I have a problem setting up Multi-Juicer with a monitoring setup. I have followed the guides for the monitoring setup and installed everything.
I installed Grafana with the commands from https://github.com/grafana/helm-charts/tree/main/charts/grafana and set the attribute sidecar.datasource.enabled to true in Grafana's values.yaml file.
Then I opened Grafana in the browser to view the dashboard. The problem is that, firstly, the error message "Template variables could not be initialized: Datasource named Prometheus was not found" appears and, secondly, the data in the dashboard is completely random and does not reflect the statistics of any Juice Shop instance.
Do you know how to solve this problem? Basically, I just followed the guides and ended up with the error message.
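The "Datasource named Prometheus was not found" part usually means the Grafana datasource sidecar found nothing to load. A hedged sketch of a ConfigMap it would pick up; grafana_datasource is the Grafana chart's default sidecar label, while the Prometheus URL is an assumption for a typical prometheus-operator install:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasource-prometheus
  labels:
    grafana_datasource: "1"   # label the grafana sidecar watches for
data:
  prometheus.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-operated:9090
        isDefault: true
EOF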

Scoreboard / Trainer Dashboard

To give an overview of which teams have already solved particular challenges, MultiJuicer (JuicyCTF 😉) should provide a scoreboard.
The scoreboard is meant for less competitive events and should be focused on providing a helpful overview for the trainer(s) and a fun overview for the players.

This scoreboard should be able to display the following things:

  • Challenge List Page: Show a list of all challenges and show how many users have solved the challenge
  • Challenge Detail Page: See which users have solved a particular challenge
  • (Optional) Category Overview: List all challenges of a category (e.g. XSS) and how many users have solved them.

It might also be nice to add an option to use a more competitive mode. Competition can be fun even during training. For that a ranking could be added with the following pages:

  • Ranking Page: Ranking of users. Each challenge should give points (or maybe just stars) based on its difficulty. The players are then ranked on how many points they have earned
  • Team Detail Page: Show all solved challenges of a single team

Integration Test Suite

Should have a test suite to test:

  • registering as a new user
  • logging in to an already existing team

Should run juice-balancer in a somewhat proper cluster.

Maybe using k3s?

Or spinning up a new test cluster somewhere?

New Logo

The current logo was created pretty hastily.

Would be nice to have a new and better logo which:

  • can be printed on stickers 😉
  • has a similar graphic style to the JuiceShop icon

General Ideas are to have a mixer / blender spilling over with all the juice it has inside.

Problem with proxing on local server

Hi,
I've got a certain problem: after creating a team I can't access the newly created pod (after pressing Start Hacking I return to creating a new team). This issue only occurs when I'm using my own config file. I only changed secure: false to secure: true, because I want to launch it on our production server. Do you know what the problem may be?

Issue deploying mulit-juicer

Helm Version:
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

I'm trying to deploy multi-juicer with the use of this command

helm install -f values.yaml multi-juicer ./multi-juicer/helm/multi-juicer

But I get the error, Error: This command needs 1 argument: chart name
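"This command needs 1 argument" is Helm 2 complaining about the second positional argument: in Helm 2 the release name is passed with --name, only Helm 3 takes it positionally. A sketch of both variants:

# Helm 2 syntax (release name via --name)
helm install --name multi-juicer -f values.yaml ./multi-juicer/helm/multi-juicer

# Helm 3 syntax (release name as the first positional argument)
helm install multi-juicer -f values.yaml ./multi-juicer/helm/multi-juicer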

Renaming JuicyCTF

Twitter poll results for renaming JuicyCTF, with the goal of disambiguation from the juice-shop-ctf NPM module:

Did any better ideas than "MultiJuicer" come up in the meantime?

Why are successfulJobsHistoryLimit and failedJobsHistoryLimit set to 10?

Both successfulJobsHistoryLimit and failedJobsHistoryLimit are set to 10 by default, which seems to consume resources in K8s (100m CPU per process). Is there a reason for setting them to 10 and having the pods persist after they have completed, or is this something I can reduce to a much lower value without any knock-on effects?
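If lowering them is desired, the limits can presumably be overridden at install time; the exact value paths are an assumption, so check the chart's values.yaml first:

helm upgrade --install multi-juicer multi-juicer/multi-juicer \
  --set juiceShopCleanup.successfulJobsHistoryLimit=1 \
  --set juiceShopCleanup.failedJobsHistoryLimit=1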
