Chart version: 3.0.2
API version: v2
App version: v2.26.0
Usenet meta search
Set me up:
helm repo add center https://repo.chartcenter.io
Install Chart:
helm install nzbhydra2 center/k8s-at-home/nzbhydra2


This is a Helm chart for nzbhydra2, leveraging the Linuxserver.io image.


$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install nzbhydra2 k8s-at-home/nzbhydra2

Installing the Chart

To install the chart with the release name my-release:

helm install my-release k8s-at-home/nzbhydra2

Uninstalling the Chart

To uninstall/delete the my-release deployment:

helm uninstall my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.


The following table lists the configurable parameters of the nzbhydra2 chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `image.repository` | Image repository | `linuxserver/hydra2` |
| `image.tag` | Image tag. Possible values listed here. | `v2.22.2-ls9` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods with new ones | `Recreate` |
| `timezone` | Timezone the nzbhydra2 instance should run as, e.g. `America/New_York` | `UTC` |
| `puid` | Process user ID the nzbhydra2 instance should run as | `1001` |
| `pgid` | Process group ID the nzbhydra2 instance should run as | `1001` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.periodSeconds` | Specify liveness `periodSeconds` parameter for the deployment | `10` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.periodSeconds` | Specify readiness `periodSeconds` parameter for the deployment | `10` |
| `probes.startup.initialDelaySeconds` | Specify startup `initialDelaySeconds` parameter for the deployment | `5` |
| `probes.startup.failureThreshold` | Specify startup `failureThreshold` parameter for the deployment | `30` |
| `probes.startup.periodSeconds` | Specify startup `periodSeconds` parameter for the deployment | `10` |
| `service.type` | Kubernetes service type for the nzbhydra2 GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the nzbhydra2 GUI is exposed | `5076` |
| `service.annotations` | Service annotations for the nzbhydra2 GUI | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.loadBalancerIP` | Load balancer IP for the nzbhydra2 GUI | `{}` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to the load balancer (if supported) | `None` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.config.subPath` | Mount a subdirectory of the persistent volume, if set | `""` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the PVC upon helm uninstall | `false` |
| `resources` | CPU/memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
  --set timezone="America/New_York" \
  k8s-at-home/nzbhydra2

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml k8s-at-home/nzbhydra2
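
A minimal values.yaml might look like the following. All keys come from the parameter table above; the timezone, hostname, and claim size shown are placeholders to adapt to your environment:

```yaml
# Example values.yaml for the nzbhydra2 chart.
# Keys are taken from the parameter table above; the
# values (timezone, hostname, size) are placeholders.
timezone: "America/New_York"

service:
  type: ClusterIP
  port: 5076

ingress:
  enabled: true
  path: /
  hosts:
    - nzbhydra2.example.com   # placeholder hostname

persistence:
  config:
    enabled: true
    size: 1Gi
    accessMode: ReadWriteOnce
```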


If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...`, it may be because you previously uninstalled the chart with `persistence.config.skipuninstall` enabled. In that case, either delete the leftover PVC manually or set `persistence.config.existingClaim` to reuse it.
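
As a sketch, the leftover claim can be found and removed with kubectl. The PVC name below is an assumption based on the usual `<release>-<chart>-config` naming; check it against the output of the first command before deleting:

```shell
# List PersistentVolumeClaims to find the one left behind by the old release
kubectl get pvc

# Delete it by name (assumed name; replace with the one listed above)
kubectl delete pvc my-release-nzbhydra2-config
```

Note that deleting the PVC discards the stored nzbhydra2 configuration; use `persistence.config.existingClaim` instead if you want to keep it.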

Read through the values.yaml file; it contains several commented-out suggested values.