## Notes
Upgrading a Helm chart may cause PersistentVolumes (and their data) to be lost. Need a way to retain volumes across upgrades.
## Options
- Create a **PersistentVolume** with the **ReadWriteMany** *access mode*
- Set the Deployment strategy to **Recreate** (see the sketch after this list)
    - All old Pods are deleted before any new ones are spun up
    - This causes some downtime, but should satisfy the requirement
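A minimal sketch of the second option, assuming a plain Deployment manifest (names, labels, image, and mount path are placeholders; a Helm chart would normally expose the strategy via its values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapplication
spec:
  replicas: 1
  # Recreate: terminate all old Pods before starting new ones,
  # so the volume is released before the new Pod tries to mount it.
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: myapplication
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      containers:
        - name: myapplication
          image: myapplication:latest      # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/myapplication   # placeholder path
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: myapplication-pvc
```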
## Separate PV and PVC Definition
### PV Definition
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-myapplication-0
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  cinder:
    fsType: ext3
    volumeID: b98a2233-a12c-4835-8db7-9f7371495a21
  storageClassName: myapplication
```
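Worth noting: manually created PersistentVolumes like the one above default to the `Retain` reclaim policy, so the underlying Cinder volume is not deleted when the claim is released. To make that intent explicit, the field can be added to the PV `spec` above (fragment only, not a complete manifest):

```yaml
spec:
  # Retain is the default for statically created PVs; stating it explicitly
  # documents that the Cinder volume must survive PVC deletion.
  persistentVolumeReclaimPolicy: Retain
```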
### PVC Definition
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapplication-pvc
  namespace: mynamespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: myapplication
  volumeMode: Filesystem
  # Bind explicitly to the pre-created PV
  volumeName: pvc-myapplication-0
```
### Helm Chart Values
```yaml
...
persistence:
  type: pvc
  enabled: true
  # annotations: {}
  finalizers:
    - kubernetes.io/pvc-protection
  existingClaim: myapplication-pvc
```
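For charts that follow this pattern (e.g. the Grafana chart from the second reference), setting `persistence.existingClaim` makes the chart mount the pre-created claim instead of templating its own PVC, so `helm upgrade` or `helm uninstall` never touches the claim or the underlying volume. Roughly, the rendered Deployment ends up with something like the following (the volume name is chart-specific and shown only for illustration):

```yaml
# Illustrative fragment of the rendered Deployment; the actual volume
# name depends on the chart's templates.
volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: myapplication-pvc
```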
## References
- [How to Perform Helm Update on Deployment with PVC and Initcontainer - StackOverflow](https://stackoverflow.com/questions/62536939/how-to-perform-helm-update-on-deployment-with-pvc-and-initcontainer)
- [Reusable Persistent Volumes with the existingClaim option for the Grafana & Prometheus-Operator Helm Charts - Medium](https://medium.com/@kevincoakley/reusable-persistent-volumes-with-the-existingclaim-option-for-the-grafana-prometheus-operator-84568b96315)