https://github.com/plexinc/pms-docker/tree/master/charts/plex-media-server
➜ helm git:(main) helm repo add plex https://raw.githubusercontent.com/plexinc/pms-docker/gh-pages
"plex" already exists with the same configuration, skipping
➜ helm git:(main) helm show values plex/plex-media-server > values.yaml
➜ helm git:(main) helm upgrade --install plex plex/plex-media-server --values values.yaml
Release "plex" does not exist. Installing it now.
NAME: plex
LAST DEPLOYED: Sat May 18 10:41:13 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing plex-media-server.
Your release is named plex.
To learn more about the release, try:
$ helm status plex
$ helm get all plex
Mount path for remote is available at
You can use this as a volume within your other pods to view the file system for your remote.
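To keep the Plex configuration on a pre-existing PersistentVolumeClaim instead of a chart-managed one, the chart's `pms.configExistingClaim` value can be set in values.yaml before the upgrade. A minimal sketch, assuming a PVC named `plex-config-claim` (a hypothetical name) has already been created in the namespace:

```yaml
# values.yaml (fragment)
# plex-config-claim is a hypothetical, pre-created PVC
pms:
  configExistingClaim: "plex-config-claim"
```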
➜ helm git:(main) ✗ kubectl port-forward plex-plex-media-server-0 32400:32400
Forwarding from 127.0.0.1:32400 -> 32400
Forwarding from [::1]:32400 -> 32400
Handling connection for 32400
Volumes and volume mounts in the chart's StatefulSet template:
      volumes:
      {{- if .Values.pms.configExistingClaim }}
      - name: pms-config
        persistentVolumeClaim:
          claimName: {{ .Values.pms.configExistingClaim | quote }}
      {{- end }}
      - name: pms-transcode
        emptyDir: {}
      ...
      {{- if .Values.extraVolumes }}
{{ toYaml .Values.extraVolumes | indent 6 }}
      {{- end }}
      containers:
      - name: {{ include "pms-chart.fullname" . }}-pms
        ...
        volumeMounts:
        - name: pms-config
          mountPath: /config
        - name: pms-transcode
          mountPath: /transcode
        ...
        {{- if .Values.extraVolumeMounts }}
{{ toYaml .Values.extraVolumeMounts | indent 8 }}
        {{- end }}
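The `extraVolumes` / `extraVolumeMounts` hooks above are a natural place to mount a media library into the container. A sketch, assuming a PVC named `plex-media-claim` (hypothetical) holds the media and mounting it read-only at `/data` (an assumed path):

```yaml
# values.yaml (fragment)
# plex-media-claim and /data are assumptions, not chart defaults
extraVolumes:
  - name: plex-media
    persistentVolumeClaim:
      claimName: plex-media-claim
extraVolumeMounts:
  - name: plex-media
    mountPath: /data
    readOnly: true
```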
A Stack Overflow reference on sharing one PVC across pods follows.
The persistent volume claim, pvc.yaml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. It is referenced in the deployment.
  name: public-pv-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteMany # All nodes have read/write access to the volume
  resources:
    # This is the request for storage. It should be available in the cluster.
    requests:
      storage: 1Gi
and in the specific pod that should be allowed to write to the volume, container_write_access_to_pv.yaml:
...
  volumes:
    - name: public
      # This volume is backed by the PVC
      persistentVolumeClaim:
        # Name of the PVC created earlier
        claimName: public-pv-claim
  containers:
    - name: specific
      # Volume mounts for this container
      volumeMounts:
        # Volume is mounted at path '/public'; the name must match
        # the volume declared above
        - name: public
          mountPath: "/public"
...
and for pods on other nodes that should have read-only access, container_with_read_only_access_to_pv.yaml:
...
  volumes:
    - name: public
      # This volume is backed by the PVC
      persistentVolumeClaim:
        # Name of the PVC created earlier
        claimName: public-pv-claim
  containers:
    - name: other
      ...
      volumeMounts:
        - name: public
          # Volume is mounted at path '/public' in read-only mode
          mountPath: "/public"
          readOnly: true
...
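Put together, a complete minimal pod that consumes the claim read-only might look like the sketch below; the pod name, container name, and image are placeholders, not part of the original answer:

```yaml
# read-only-consumer.yaml
# Pod/container/image names are hypothetical placeholders
apiVersion: v1
kind: Pod
metadata:
  name: public-reader
  namespace: default
spec:
  volumes:
    - name: public
      persistentVolumeClaim:
        claimName: public-pv-claim
  containers:
    - name: reader
      image: busybox:1.36
      # List the shared files, then idle so the pod stays up
      command: ["sh", "-c", "ls /public && sleep 3600"]
      volumeMounts:
        - name: public
          mountPath: "/public"
          readOnly: true
```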