<!--- app-name: MongoDB® -->
# MongoDB(R) packaged by Bitnami
MongoDB(R) is an open source, non-relational (NoSQL) document database. Easy to use, it stores data in JSON-like documents. Automated scalability and high-performance. Ideal for developing cloud native applications.
[Overview of MongoDB®](http://www.mongodb.org)
Disclaimer: The respective trademarks mentioned in the offering are owned by the respective companies. We do not provide a commercial license for any of these products. This listing has an open-source license. MongoDB(R) is run and maintained by MongoDB, which is a completely separate project from Bitnami.
## TL;DR
```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/mongodb
```
Looking to use MongoDB® in production? Try [VMware Tanzu Application Catalog](https://bitnami.com/enterprise), the enterprise edition of Bitnami Application Catalog.
## Introduction
This chart bootstraps a [MongoDB(®)](https://github.com/bitnami/containers/tree/main/bitnami/mongodb) deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.dev/) for deployment and management of Helm Charts in clusters.
## Prerequisites
- Kubernetes 1.23+
- Helm 3.8.0+
- PV provisioner support in the underlying infrastructure
## Installing the Chart
To install the chart with the release name `my-release`:
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb
```
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
The command deploys MongoDB(®) on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Architecture
This chart allows installing MongoDB(®) using two different architecture setups: `standalone` or `replicaset`. Use the `architecture` parameter to choose the one to use:
```console
architecture="standalone"
architecture="replicaset"
```
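For example, as an illustrative sketch using the `architecture` and `replicaCount` parameters documented below, a three-node replica set could be deployed with:
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set architecture=replicaset \
  --set replicaCount=3
```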
### Standalone architecture
The *standalone* architecture installs a Deployment (or a StatefulSet, when `useStatefulSet=true`) with one MongoDB® server (it cannot be scaled):
```text
----------------
|   MongoDB®   |
|     svc      |
----------------
       |
       v
  ------------
  | MongoDB® |
  |  Server  |
  |   Pod    |
  ------------
```
### Replicaset architecture
The chart also supports the *replicaset* architecture with and without a MongoDB(®) Arbiter:
When the MongoDB(®) Arbiter is enabled, the chart installs two StatefulSets: A StatefulSet with N MongoDB(®) servers (organised with one primary and N-1 secondary nodes), and a StatefulSet with one MongoDB(®) arbiter node (it cannot be scaled).
```text
---------------- ---------------- ---------------- -------------
| MongoDB® 0 | | MongoDB® 1 | | MongoDB® N | | Arbiter |
| external svc | | external svc | | external svc | | svc |
---------------- ---------------- ---------------- -------------
| | | |
v v v v
---------------- ---------------- ---------------- --------------
| MongoDB® 0 | | MongoDB® 1 | | MongoDB® N | | MongoDB® |
| Server | | Server | | Server | | Arbiter |
| Pod | | Pod | | Pod | | Pod |
---------------- ---------------- ---------------- --------------
primary secondary secondary
```
The PSA (Primary-Secondary-Arbiter) model is useful when the third Availability Zone cannot hold a full MongoDB(®) instance. The MongoDB(®) Arbiter, which acts only as a voting member during elections, is lightweight and can run alongside other workloads.
> NOTE: An update takes your MongoDB(®) replicaset offline if the Arbiter is enabled and the number of MongoDB(®) replicas is two. Helm applies updates to the StatefulSets for the MongoDB(®) instance and the Arbiter at the same time, so you lose two out of three quorum votes.
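As an illustrative sketch of a PSA deployment (two data-bearing nodes plus the Arbiter, which is enabled by default via the `arbiter.enabled` parameter documented below):
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set architecture=replicaset \
  --set replicaCount=2 \
  --set arbiter.enabled=true
```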
Without the Arbiter, the chart deploys a single StatefulSet with N MongoDB(®) servers (organised with one primary and N-1 secondary nodes).
```text
---------------- ---------------- ----------------
| MongoDB® 0 | | MongoDB® 1 | | MongoDB® N |
| external svc | | external svc | | external svc |
---------------- ---------------- ----------------
| | |
v v v
---------------- ---------------- ----------------
| MongoDB® 0 | | MongoDB® 1 | | MongoDB® N |
| Server | | Server | | Server |
| Pod | | Pod | | Pod |
---------------- ---------------- ----------------
primary secondary secondary
```
No service load-balances requests between the MongoDB(®) nodes; instead, each node has its own associated service so that it can be accessed individually.
> NOTE: Although the first replica is initially assigned the primary role, any of the secondary nodes can become the primary if the current primary goes down or during upgrades. Do not make any assumptions about which replica holds the primary role. Instead, configure your MongoDB(®) client with the list of MongoDB(®) hostnames so it can dynamically choose the node to send requests to.
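For example, assuming a release named `my-release` deployed in the `default` namespace with the default replica set name `rs0`, a client connection string listing all members might look like the following (the exact hostnames depend on your release name, namespace and cluster domain, and the credentials are placeholders):
```text
mongodb://my-user:my-password@my-release-mongodb-0.my-release-mongodb-headless.default.svc.cluster.local:27017,my-release-mongodb-1.my-release-mongodb-headless.default.svc.cluster.local:27017/my-database?replicaSet=rs0
```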
## Parameters
### Global parameters
| Name | Description | Value |
| -------------------------- | ---------------------------------------------------------------------------------------------------------------------- | ----- |
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` |
| `global.namespaceOverride` | Override the namespace for resources deployed by the chart, but can itself be overridden by the local namespaceOverride | `""` |
### Common parameters
| Name | Description | Value |
| ------------------------- | --------------------------------------------------------------------------------------------------------- | --------------- |
| `nameOverride` | String to partially override mongodb.fullname template (will maintain the release name) | `""` |
| `fullnameOverride` | String to fully override mongodb.fullname template | `""` |
| `namespaceOverride` | String to fully override common.names.namespace | `""` |
| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `""` |
| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` |
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` |
| `commonLabels` | Add labels to all the deployed resources (sub-charts are not considered). Evaluated as a template | `{}` |
| `commonAnnotations` | Common annotations to add to all Mongo resources (sub-charts are not considered). Evaluated as a template | `{}` |
| `topologyKey` | Override common lib default topology key. If empty, `kubernetes.io/hostname` is used | `""` |
| `serviceBindings.enabled` | Create secret for service binding (Experimental) | `false` |
| `enableServiceLinks` | Whether information about services should be injected into the pods' environment variables | `true` |
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
| `diagnosticMode.command` | Command to override all containers in the deployment | `["sleep"]` |
| `diagnosticMode.args` | Args to override all containers in the deployment | `["infinity"]` |
### MongoDB(®) parameters
| Name | Description | Value |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- |
| `image.registry` | MongoDB(®) image registry | `REGISTRY_NAME` |
| `image.repository` | MongoDB(®) image repository | `REPOSITORY_NAME/mongodb` |
| `image.digest` | MongoDB(®) image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `image.pullPolicy` | MongoDB(®) image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `image.debug` | Set to true if you would like to see extra information on logs | `false` |
| `schedulerName` | Name of the scheduler (other than default) to dispatch pods | `""` |
| `architecture` | MongoDB(®) architecture (`standalone` or `replicaset`) | `standalone` |
| `useStatefulSet` | Set to true to use a StatefulSet instead of a Deployment (only when `architecture=standalone`) | `false` |
| `auth.enabled` | Enable authentication | `true` |
| `auth.rootUser` | MongoDB(®) root user | `root` |
| `auth.rootPassword` | MongoDB(®) root password | `""` |
| `auth.usernames` | List of custom users to be created during the initialization | `[]` |
| `auth.passwords` | List of passwords for the custom users set at `auth.usernames` | `[]` |
| `auth.databases` | List of custom databases to be created during the initialization | `[]` |
| `auth.username` | DEPRECATED: use `auth.usernames` instead | `""` |
| `auth.password` | DEPRECATED: use `auth.passwords` instead | `""` |
| `auth.database` | DEPRECATED: use `auth.databases` instead | `""` |
| `auth.replicaSetKey` | Key used for authentication in the replicaset (only when `architecture=replicaset`) | `""` |
| `auth.existingSecret` | Existing secret with MongoDB(®) credentials (keys: `mongodb-passwords`, `mongodb-root-password`, `mongodb-metrics-password`, `mongodb-replica-set-key`) | `""` |
| `tls.enabled` | Enable MongoDB(®) TLS support between nodes in the cluster as well as between mongo clients and nodes | `false` |
| `tls.mTLS.enabled` | If TLS support is enabled, require clients to provide certificates | `true` |
| `tls.autoGenerated` | Generate a custom CA and self-signed certificates | `true` |
| `tls.existingSecret` | Existing secret with TLS certificates (keys: `mongodb-ca-cert`, `mongodb-ca-key`) | `""` |
| `tls.caCert` | Custom CA certificate (base64 encoded) | `""` |
| `tls.caKey` | CA certificate private key (base64 encoded) | `""` |
| `tls.pemChainIncluded` | Flag to denote that the Certificate Authority (CA) certificates are bundled with the endpoint cert. | `false` |
| `tls.standalone.existingSecret` | Existing secret with TLS certificates (`tls.key`, `tls.crt`, `ca.crt`) or (`tls.key`, `tls.crt`) with tls.pemChainIncluded set as enabled. | `""` |
| `tls.replicaset.existingSecrets` | Array of existing secrets with TLS certificates (`tls.key`, `tls.crt`, `ca.crt`) or (`tls.key`, `tls.crt`) with tls.pemChainIncluded set as enabled. | `[]` |
| `tls.hidden.existingSecrets` | Array of existing secrets with TLS certificates (`tls.key`, `tls.crt`, `ca.crt`) or (`tls.key`, `tls.crt`) with tls.pemChainIncluded set as enabled. | `[]` |
| `tls.arbiter.existingSecret` | Existing secret with TLS certificates (`tls.key`, `tls.crt`, `ca.crt`) or (`tls.key`, `tls.crt`) with tls.pemChainIncluded set as enabled. | `""` |
| `tls.image.registry` | Init container TLS certs setup image registry | `REGISTRY_NAME` |
| `tls.image.repository` | Init container TLS certs setup image repository | `REPOSITORY_NAME/nginx` |
| `tls.image.digest` | Init container TLS certs setup image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `tls.image.pullPolicy` | Init container TLS certs setup image pull policy | `IfNotPresent` |
| `tls.image.pullSecrets` | Init container TLS certs specify docker-registry secret names as an array | `[]` |
| `tls.extraDnsNames` | Add extra DNS names to the CA; can solve x509 auth issues for pod clients | `[]` |
| `tls.mode` | Set the TLS mode to be used when TLS is enabled (options: `allowTLS`, `preferTLS`, `requireTLS`) | `requireTLS` |
| `tls.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if tls.resources is set (tls.resources is recommended for production). | `none` |
| `tls.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `tls.securityContext` | Init container generate-tls-cert Security context | `{}` |
| `automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `hostAliases` | Add deployment host aliases | `[]` |
| `replicaSetName` | Name of the replica set (only when `architecture=replicaset`) | `rs0` |
| `replicaSetHostnames` | Enable DNS hostnames in the replicaset config (only when `architecture=replicaset`) | `true` |
| `enableIPv6` | Switch to enable/disable IPv6 on MongoDB(®) | `false` |
| `directoryPerDB` | Switch to enable/disable DirectoryPerDB on MongoDB(®) | `false` |
| `systemLogVerbosity` | MongoDB(®) system log verbosity level | `0` |
| `disableSystemLog` | Switch to enable/disable MongoDB(®) system log | `false` |
| `disableJavascript` | Switch to enable/disable MongoDB(®) server-side JavaScript execution | `false` |
| `enableJournal` | Switch to enable/disable MongoDB(®) Journaling | `true` |
| `configuration` | MongoDB(®) configuration file to be used for Primary and Secondary nodes | `""` |
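As a hedged example, authentication can be configured at install time using the `auth.*` parameters above (the values shown are placeholders; depending on your shell, the bracketed `--set` keys may need quoting):
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set auth.rootPassword=my-root-password \
  --set auth.usernames[0]=my-user \
  --set auth.passwords[0]=my-password \
  --set auth.databases[0]=my-database
```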
### Replica set configuration settings applied at runtime via `replicaSetConfigurationSettings` (not via configuration file)
| Name | Description | Value |
| ----------------------------------------------- | --------------------------------------------------------------------------------------------------- | ------- |
| `replicaSetConfigurationSettings.enabled` | Switch to enable/disable configuring MongoDB(®) runtime rs.conf settings | `false` |
| `replicaSetConfigurationSettings.configuration` | run-time rs.conf settings | `{}` |
| `existingConfigmap` | Name of existing ConfigMap with MongoDB(®) configuration for Primary and Secondary nodes | `""` |
| `initdbScripts` | Dictionary of initdb scripts | `{}` |
| `initdbScriptsConfigMap` | Existing ConfigMap with custom initdb scripts | `""` |
| `command` | Override default container command (useful when using custom images) | `[]` |
| `args` | Override default container args (useful when using custom images) | `[]` |
| `extraFlags` | MongoDB(®) additional command line flags | `[]` |
| `extraEnvVars` | Extra environment variables to add to MongoDB(®) pods | `[]` |
| `extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `""` |
| `extraEnvVarsSecret` | Name of existing Secret containing extra env vars (in case of sensitive data) | `""` |
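As a minimal sketch of the `initdbScripts` parameter, a values file can carry a custom initialization script (the script name, database and collection below are illustrative; the Bitnami MongoDB(®) container typically executes `.sh` and `.js` files placed in its init directory):
```yaml
initdbScripts:
  create_collection.js: |
    // illustrative example: create a collection in a custom database at first startup
    db = db.getSiblingDB('my-database');
    db.createCollection('my_collection');
```
Apply it with `helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb -f values.yaml`.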
### MongoDB(®) statefulset parameters
| Name | Description | Value |
| --------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| `annotations` | Additional annotations to be added to the MongoDB(®) statefulset. Evaluated as a template | `{}` |
| `labels` | Additional labels to be added to the MongoDB(®) statefulset. Evaluated as a template | `{}` |
| `replicaCount` | Number of MongoDB(®) nodes | `2` |
| `updateStrategy.type` | Strategy to use to replace existing MongoDB(®) pods. When architecture=standalone and useStatefulSet=false, | `RollingUpdate` |
| `podManagementPolicy` | Pod management policy for MongoDB(®) | `OrderedReady` |
| `podAffinityPreset` | MongoDB(®) Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `podAntiAffinityPreset` | MongoDB(®) Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `nodeAffinityPreset.type` | MongoDB(®) Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `nodeAffinityPreset.key` | MongoDB(®) Node label key to match. Ignored if `affinity` is set. | `""` |
| `nodeAffinityPreset.values` | MongoDB(®) Node label values to match. Ignored if `affinity` is set. | `[]` |
| `affinity` | MongoDB(®) Affinity for pod assignment | `{}` |
| `nodeSelector` | MongoDB(®) Node labels for pod assignment | `{}` |
| `tolerations` | MongoDB(®) Tolerations for pod assignment | `[]` |
| `topologySpreadConstraints` | MongoDB(®) Spread Constraints for Pods | `[]` |
| `lifecycleHooks` | LifecycleHook for the MongoDB(®) container(s) to automate configuration before or after startup | `{}` |
| `terminationGracePeriodSeconds` | MongoDB(®) Termination Grace Period | `""` |
| `podLabels` | MongoDB(®) pod labels | `{}` |
| `podAnnotations` | MongoDB(®) Pod annotations | `{}` |
| `priorityClassName` | Name of the existing priority class to be used by MongoDB(®) pod(s) | `""` |
| `runtimeClassName` | Name of the runtime class to be used by MongoDB(®) pod(s) | `""` |
| `podSecurityContext.enabled` | Enable MongoDB(®) pod(s)' Security Context | `true` |
| `podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `podSecurityContext.fsGroup` | Group ID for the volumes of the MongoDB(®) pod(s) | `1001` |
| `podSecurityContext.sysctls` | sysctl settings of the MongoDB(®) pod(s) | `[]` |
| `containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `none` |
| `resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `containerPorts.mongodb` | MongoDB(®) container port | `27017` |
| `livenessProbe.enabled` | Enable livenessProbe | `true` |
| `livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `30` |
| `livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` |
| `livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `10` |
| `livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `readinessProbe.enabled` | Enable readinessProbe | `true` |
| `readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `5` |
| `readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `startupProbe.enabled` | Enable startupProbe | `false` |
| `startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `startupProbe.periodSeconds` | Period seconds for startupProbe | `20` |
| `startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `10` |
| `startupProbe.failureThreshold` | Failure threshold for startupProbe | `30` |
| `startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `customLivenessProbe` | Override default liveness probe for MongoDB(®) containers | `{}` |
| `customReadinessProbe` | Override default readiness probe for MongoDB(®) containers | `{}` |
| `customStartupProbe` | Override default startup probe for MongoDB(®) containers | `{}` |
| `initContainers` | Add additional init containers for the MongoDB(®) pod(s) | `[]` |
| `sidecars` | Add additional sidecar containers for the MongoDB(®) pod(s) | `[]` |
| `extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the MongoDB(®) container(s) | `[]` |
| `extraVolumes` | Optionally specify extra list of additional volumes to the MongoDB(®) statefulset | `[]` |
| `pdb.create` | Enable/disable a Pod Disruption Budget creation for MongoDB(®) pod(s) | `false` |
| `pdb.minAvailable` | Minimum number/percentage of MongoDB(®) pods that must still be available after the eviction | `1` |
| `pdb.maxUnavailable` | Maximum number/percentage of MongoDB(®) pods that may be made unavailable after the eviction | `""` |
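As an illustrative sketch, production deployments usually replace `resourcesPreset` with explicit requests and limits via the `resources` parameter (the values below are placeholders):
```yaml
resourcesPreset: none
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```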
### Traffic exposure parameters
| Name | Description | Value |
| ------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------- |
| `service.nameOverride` | MongoDB(®) service name | `""` |
| `service.type` | Kubernetes Service type (only for standalone architecture) | `ClusterIP` |
| `service.portName` | MongoDB(®) service port name (only for standalone architecture) | `mongodb` |
| `service.ports.mongodb` | MongoDB(®) service port. | `27017` |
| `service.nodePorts.mongodb` | Port to bind to for NodePort and LoadBalancer service types (only for standalone architecture) | `""` |
| `service.clusterIP` | MongoDB(®) service cluster IP (only for standalone architecture) | `""` |
| `service.externalIPs` | Specify the externalIPs value for the ClusterIP service type (only for standalone architecture) | `[]` |
| `service.loadBalancerIP` | loadBalancerIP for MongoDB(®) Service (only for standalone architecture) | `""` |
| `service.loadBalancerClass` | loadBalancerClass for MongoDB(®) Service (only for standalone architecture) | `""` |
| `service.loadBalancerSourceRanges` | Address(es) that are allowed when service is LoadBalancer (only for standalone architecture) | `[]` |
| `service.allocateLoadBalancerNodePorts` | Whether to allocate node ports when service type is LoadBalancer | `true` |
| `service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `service.annotations` | Provide any additional annotations that may be required | `{}` |
| `service.externalTrafficPolicy` | service external traffic policy (only for standalone architecture) | `Local` |
| `service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `service.headless.annotations` | Annotations for the headless service. | `{}` |
| `externalAccess.enabled` | Enable Kubernetes external cluster access to MongoDB(®) nodes (only for replicaset architecture) | `false` |
| `externalAccess.autoDiscovery.enabled` | Enable using an init container to auto-detect external IPs by querying the K8s API | `false` |
| `externalAccess.autoDiscovery.image.registry` | Init container auto-discovery image registry | `REGISTRY_NAME` |
| `externalAccess.autoDiscovery.image.repository` | Init container auto-discovery image repository | `REPOSITORY_NAME/kubectl` |
| `externalAccess.autoDiscovery.image.digest` | Init container auto-discovery image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `externalAccess.autoDiscovery.image.pullPolicy` | Init container auto-discovery image pull policy | `IfNotPresent` |
| `externalAccess.autoDiscovery.image.pullSecrets` | Init container auto-discovery image pull secrets | `[]` |
| `externalAccess.autoDiscovery.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if externalAccess.autoDiscovery.resources is set (externalAccess.autoDiscovery.resources is recommended for production). | `none` |
| `externalAccess.autoDiscovery.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `externalAccess.externalMaster.enabled` | Use external master for bootstrapping | `false` |
| `externalAccess.externalMaster.host` | External master host to bootstrap from | `""` |
| `externalAccess.externalMaster.port` | Port for MongoDB(®) service external master host | `27017` |
| `externalAccess.service.type` | Kubernetes Service type for external access. Allowed values: NodePort, LoadBalancer or ClusterIP | `LoadBalancer` |
| `externalAccess.service.portName` | MongoDB(®) port name used for external access when service type is LoadBalancer | `mongodb` |
| `externalAccess.service.ports.mongodb` | MongoDB(®) port used for external access when service type is LoadBalancer | `27017` |
| `externalAccess.service.loadBalancerIPs` | Array of load balancer IPs for MongoDB(®) nodes | `[]` |
| `externalAccess.service.loadBalancerClass` | loadBalancerClass when service type is LoadBalancer | `""` |
| `externalAccess.service.loadBalancerSourceRanges` | Address(es) that are allowed when service is LoadBalancer | `[]` |
| `externalAccess.service.allocateLoadBalancerNodePorts` | Whether to allocate node ports when service type is LoadBalancer | `true` |
| `externalAccess.service.externalTrafficPolicy` | MongoDB(®) service external traffic policy | `Local` |
| `externalAccess.service.nodePorts` | Array of node ports used to configure MongoDB(®) advertised hostname when service type is NodePort | `[]` |
| `externalAccess.service.domain` | Domain or external IP used to configure MongoDB(®) advertised hostname when service type is NodePort | `""` |
| `externalAccess.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `externalAccess.service.annotations` | Service annotations for external access | `{}` |
| `externalAccess.service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `externalAccess.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `externalAccess.hidden.enabled` | Enable Kubernetes external cluster access to MongoDB(®) hidden nodes | `false` |
| `externalAccess.hidden.service.type` | Kubernetes Service type for external access. Allowed values: NodePort or LoadBalancer | `LoadBalancer` |
| `externalAccess.hidden.service.portName` | MongoDB(®) port name used for external access when service type is LoadBalancer | `mongodb` |
| `externalAccess.hidden.service.ports.mongodb` | MongoDB(®) port used for external access when service type is LoadBalancer | `27017` |
| `externalAccess.hidden.service.loadBalancerIPs` | Array of load balancer IPs for MongoDB(®) nodes | `[]` |
| `externalAccess.hidden.service.loadBalancerClass` | loadBalancerClass when service type is LoadBalancer | `""` |
| `externalAccess.hidden.service.loadBalancerSourceRanges` | Address(es) that are allowed when service is LoadBalancer | `[]` |
| `externalAccess.hidden.service.allocateLoadBalancerNodePorts` | Whether to allocate node ports when service type is LoadBalancer | `true` |
| `externalAccess.hidden.service.externalTrafficPolicy` | MongoDB(®) service external traffic policy | `Local` |
| `externalAccess.hidden.service.nodePorts` | Array of node ports used to configure MongoDB(®) advertised hostname when service type is NodePort. Length must be the same as replicaCount | `[]` |
| `externalAccess.hidden.service.domain` | Domain or external IP used to configure MongoDB(®) advertised hostname when service type is NodePort | `""` |
| `externalAccess.hidden.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `externalAccess.hidden.service.annotations` | Service annotations for external access | `{}` |
| `externalAccess.hidden.service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `externalAccess.hidden.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
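As a hedged example, external access to the replica set members can be enabled with the `externalAccess.*` parameters above; auto-discovery queries the Kubernetes API from an init container, so it may also require `rbac.create=true`:
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set architecture=replicaset \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=LoadBalancer \
  --set externalAccess.autoDiscovery.enabled=true \
  --set rbac.create=true
```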
### Network policy parameters
| Name | Description | Value |
| -------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | ------------------- |
| `networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `networkPolicy.allowExternalEgress` | Allow the pod to access any range of port and all destinations. | `true` |
| `networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
| `persistence.enabled` | Enable MongoDB(®) data persistence using PVC | `true` |
| `persistence.name` | Name of the PVC and mounted volume | `datadir` |
| `persistence.medium` | Provide a medium for `emptyDir` volumes. | `""` |
| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim` (only when `architecture=standalone`) | `""` |
| `persistence.resourcePolicy` | Set it to "keep" to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted | `""` |
| `persistence.storageClass` | PVC Storage Class for MongoDB(®) data volume | `""` |
| `persistence.accessModes` | PV Access Mode | `["ReadWriteOnce"]` |
| `persistence.size` | PVC Storage Request for MongoDB(®) data volume | `8Gi` |
| `persistence.annotations` | PVC annotations | `{}` |
| `persistence.mountPath` | Path to mount the volume at | `/bitnami/mongodb` |
| `persistence.subPath` | Subdirectory of the volume to mount at | `""` |
| `persistence.volumeClaimTemplates.selector` | A label query over volumes to consider for binding (e.g. when using local volumes) | `{}` |
| `persistence.volumeClaimTemplates.requests` | Custom PVC requests attributes | `{}` |
| `persistence.volumeClaimTemplates.dataSource` | Add dataSource to the VolumeClaimTemplate | `{}` |
| `persistentVolumeClaimRetentionPolicy.enabled` | Enable Persistent volume retention policy for MongoDB(®) Statefulset | `false` |
| `persistentVolumeClaimRetentionPolicy.whenScaled` | Volume retention behavior when the replica count of the StatefulSet is reduced | `Retain` |
| `persistentVolumeClaimRetentionPolicy.whenDeleted` | Volume retention behavior that applies when the StatefulSet is deleted | `Retain` |
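For example, a custom storage class and volume size can be requested with the `persistence.*` parameters above (the storage class name below is a placeholder):
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set persistence.storageClass=my-storage-class \
  --set persistence.size=20Gi
```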
### Backup parameters
| Name | Description | Value |
| ------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------- | ------------------- |
| `backup.enabled` | Enable a regular logical dump (backup) of the database | `false` |
| `backup.cronjob.schedule` | Set the cronjob parameter schedule | `@daily` |
| `backup.cronjob.concurrencyPolicy` | Set the cronjob parameter concurrencyPolicy | `Allow` |
| `backup.cronjob.failedJobsHistoryLimit` | Set the cronjob parameter failedJobsHistoryLimit | `1` |
| `backup.cronjob.successfulJobsHistoryLimit` | Set the cronjob parameter successfulJobsHistoryLimit | `3` |
| `backup.cronjob.startingDeadlineSeconds` | Set the cronjob parameter startingDeadlineSeconds | `""` |
| `backup.cronjob.ttlSecondsAfterFinished` | Set the cronjob parameter ttlSecondsAfterFinished | `""` |
| `backup.cronjob.restartPolicy` | Set the cronjob parameter restartPolicy | `OnFailure` |
| `backup.cronjob.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `backup.cronjob.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `backup.cronjob.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `backup.cronjob.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `backup.cronjob.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `backup.cronjob.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `backup.cronjob.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `backup.cronjob.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `backup.cronjob.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `backup.cronjob.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `backup.cronjob.command` | Set backup container's command to run | `[]` |
| `backup.cronjob.labels` | Set the cronjob labels | `{}` |
| `backup.cronjob.annotations` | Set the cronjob annotations | `{}` |
| `backup.cronjob.storage.existingClaim` | Provide an existing `PersistentVolumeClaim` (only when `architecture=standalone`) | `""` |
| `backup.cronjob.storage.resourcePolicy` | Set it to "keep" to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted | `""` |
| `backup.cronjob.storage.storageClass` | PVC Storage Class for the backup data volume | `""` |
| `backup.cronjob.storage.accessModes` | PV Access Mode | `["ReadWriteOnce"]` |
| `backup.cronjob.storage.size` | PVC Storage Request for the backup data volume | `8Gi` |
| `backup.cronjob.storage.annotations` | PVC annotations | `{}` |
| `backup.cronjob.storage.mountPath` | Path to mount the volume at | `/backup/mongodb` |
| `backup.cronjob.storage.subPath` | Subdirectory of the volume to mount at | `""` |
| `backup.cronjob.storage.volumeClaimTemplates.selector` | A label query over volumes to consider for binding (e.g. when using local volumes) | `{}` |
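As an illustrative sketch, the backup CronJob can be enabled and scheduled with the `backup.*` parameters above (the schedule and size below are placeholders):
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set backup.enabled=true \
  --set backup.cronjob.schedule="0 2 * * *" \
  --set backup.cronjob.storage.size=10Gi
```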
### RBAC parameters
| Name | Description | Value |
| --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `serviceAccount.create` | Enable creation of ServiceAccount for MongoDB(®) pods | `true` |
| `serviceAccount.name` | Name of the created serviceAccount | `""` |
| `serviceAccount.annotations` | Additional Service Account annotations | `{}` |
| `serviceAccount.automountServiceAccountToken` | Allows auto mount of ServiceAccountToken on the serviceAccount created | `false` |
| `rbac.create` | Whether to create & use RBAC resources or not | `false` |
| `rbac.rules` | Custom rules to create following the role specification | `[]` |
| `podSecurityPolicy.create` | Whether to create a PodSecurityPolicy. WARNING: PodSecurityPolicy is deprecated in Kubernetes v1.21 or later, unavailable in v1.25 or later | `false` |
| `podSecurityPolicy.allowPrivilegeEscalation` | Enable privilege escalation | `false` |
| `podSecurityPolicy.privileged` | Allow privileged | `false` |
| `podSecurityPolicy.spec` | Specify the full spec to use for Pod Security Policy | `{}` |
### Volume Permissions parameters
| Name | Description | Value |
| -------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup` | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `REGISTRY_NAME` |
| `volumePermissions.image.repository` | Init container volume-permissions image repository | `REPOSITORY_NAME/os-shell` |
| `volumePermissions.image.digest` | Init container volume-permissions image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
| `volumePermissions.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `volumePermissions.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production). | `none` |
| `volumePermissions.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `volumePermissions.securityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `volumePermissions.securityContext.runAsUser` | User ID for the volumePermissions container | `0` |
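If your storage provisioner does not honour the pod's `fsGroup`, the data directory ownership can be adjusted by the init container controlled by `volumePermissions.enabled`, for example:
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb \
  --set volumePermissions.enabled=true
```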
### Arbiter parameters
| Name | Description | Value |
| ----------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| `arbiter.enabled` | Enable deploying the arbiter | `true` |
| `arbiter.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `arbiter.hostAliases` | Add deployment host aliases | `[]` |
| `arbiter.configuration` | Arbiter configuration file to be used | `""` |
| `arbiter.existingConfigmap` | Name of existing ConfigMap with Arbiter configuration | `""` |
| `arbiter.command` | Override default container command (useful when using custom images) | `[]` |
| `arbiter.args` | Override default container args (useful when using custom images) | `[]` |
| `arbiter.extraFlags` | Arbiter additional command line flags | `[]` |
| `arbiter.extraEnvVars` | Extra environment variables to add to Arbiter pods | `[]` |
| `arbiter.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `""` |
| `arbiter.extraEnvVarsSecret` | Name of existing Secret containing extra env vars (in case of sensitive data) | `""` |
| `arbiter.annotations` | Additional annotations to be added to the Arbiter statefulset | `{}` |
| `arbiter.labels` | Additional labels to be added to the Arbiter statefulset | `{}` |
| `arbiter.topologySpreadConstraints` | MongoDB(®) Spread Constraints for arbiter Pods | `[]` |
| `arbiter.lifecycleHooks` | LifecycleHook for the Arbiter container to automate configuration before or after startup | `{}` |
| `arbiter.terminationGracePeriodSeconds` | Arbiter Termination Grace Period | `""` |
| `arbiter.updateStrategy.type` | Strategy that will be employed to update Pods in the StatefulSet | `RollingUpdate` |
| `arbiter.podManagementPolicy` | Pod management policy for MongoDB(®) | `OrderedReady` |
| `arbiter.schedulerName` | Name of the scheduler (other than default) to dispatch pods | `""` |
| `arbiter.podAffinityPreset` | Arbiter Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `arbiter.podAntiAffinityPreset` | Arbiter Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `arbiter.nodeAffinityPreset.type` | Arbiter Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `arbiter.nodeAffinityPreset.key` | Arbiter Node label key to match. Ignored if `affinity` is set. | `""` |
| `arbiter.nodeAffinityPreset.values` | Arbiter Node label values to match. Ignored if `affinity` is set. | `[]` |
| `arbiter.affinity` | Arbiter Affinity for pod assignment | `{}` |
| `arbiter.nodeSelector` | Arbiter Node labels for pod assignment | `{}` |
| `arbiter.tolerations` | Arbiter Tolerations for pod assignment | `[]` |
| `arbiter.podLabels` | Arbiter pod labels | `{}` |
| `arbiter.podAnnotations` | Arbiter Pod annotations | `{}` |
| `arbiter.priorityClassName` | Name of the existing priority class to be used by Arbiter pod(s) | `""` |
| `arbiter.runtimeClassName` | Name of the runtime class to be used by Arbiter pod(s) | `""` |
| `arbiter.podSecurityContext.enabled` | Enable Arbiter pod(s)' Security Context | `true` |
| `arbiter.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `arbiter.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `arbiter.podSecurityContext.fsGroup` | Group ID for the volumes of the Arbiter pod(s) | `1001` |
| `arbiter.podSecurityContext.sysctls` | sysctl settings of the Arbiter pod(s) | `[]` |
| `arbiter.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `arbiter.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `arbiter.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `arbiter.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `arbiter.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `arbiter.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `arbiter.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `arbiter.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `arbiter.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `arbiter.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `arbiter.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if arbiter.resources is set (arbiter.resources is recommended for production). | `none` |
| `arbiter.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `arbiter.containerPorts.mongodb` | MongoDB(®) arbiter container port | `27017` |
| `arbiter.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `arbiter.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `30` |
| `arbiter.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` |
| `arbiter.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `10` |
| `arbiter.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `arbiter.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `arbiter.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `arbiter.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `5` |
| `arbiter.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `20` |
| `arbiter.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `10` |
| `arbiter.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `arbiter.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `arbiter.startupProbe.enabled` | Enable startupProbe | `false` |
| `arbiter.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `arbiter.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `arbiter.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `arbiter.startupProbe.failureThreshold` | Failure threshold for startupProbe | `30` |
| `arbiter.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `arbiter.customLivenessProbe` | Override default liveness probe for Arbiter containers | `{}` |
| `arbiter.customReadinessProbe` | Override default readiness probe for Arbiter containers | `{}` |
| `arbiter.customStartupProbe` | Override default startup probe for Arbiter containers | `{}` |
| `arbiter.initContainers` | Add additional init containers for the Arbiter pod(s) | `[]` |
| `arbiter.sidecars` | Add additional sidecar containers for the Arbiter pod(s) | `[]` |
| `arbiter.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Arbiter container(s) | `[]` |
| `arbiter.extraVolumes` | Optionally specify extra list of additional volumes to the Arbiter statefulset | `[]` |
| `arbiter.pdb.create` | Enable/disable a Pod Disruption Budget creation for Arbiter pod(s) | `false` |
| `arbiter.pdb.minAvailable` | Minimum number/percentage of Arbiter pods that should remain scheduled | `1` |
| `arbiter.pdb.maxUnavailable` | Maximum number/percentage of Arbiter pods that may be made unavailable | `""` |
| `arbiter.service.nameOverride` | The arbiter service name | `""` |
| `arbiter.service.ports.mongodb` | MongoDB(®) service port | `27017` |
| `arbiter.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `arbiter.service.annotations` | Provide any additional annotations that may be required | `{}` |
| `arbiter.service.headless.annotations` | Annotations for the headless service. | `{}` |
### Hidden Node parameters
| Name | Description | Value |
| ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------- |
| `hidden.enabled` | Enable deploying the hidden nodes | `false` |
| `hidden.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `hidden.hostAliases` | Add deployment host aliases | `[]` |
| `hidden.configuration` | Hidden node configuration file to be used | `""` |
| `hidden.existingConfigmap` | Name of existing ConfigMap with Hidden node configuration | `""` |
| `hidden.command` | Override default container command (useful when using custom images) | `[]` |
| `hidden.args` | Override default container args (useful when using custom images) | `[]` |
| `hidden.extraFlags` | Hidden node additional command line flags | `[]` |
| `hidden.extraEnvVars` | Extra environment variables to add to Hidden node pods | `[]` |
| `hidden.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `""` |
| `hidden.extraEnvVarsSecret` | Name of existing Secret containing extra env vars (in case of sensitive data) | `""` |
| `hidden.annotations` | Additional annotations to be added to the hidden node statefulset | `{}` |
| `hidden.labels` | Additional labels to be added to the hidden node statefulset | `{}` |
| `hidden.topologySpreadConstraints` | MongoDB(®) Spread Constraints for hidden Pods | `[]` |
| `hidden.lifecycleHooks` | LifecycleHook for the Hidden container to automate configuration before or after startup | `{}` |
| `hidden.replicaCount` | Number of hidden nodes (only when `architecture=replicaset`) | `1` |
| `hidden.terminationGracePeriodSeconds` | Hidden Termination Grace Period | `""` |
| `hidden.updateStrategy.type` | Strategy that will be employed to update Pods in the StatefulSet | `RollingUpdate` |
| `hidden.podManagementPolicy` | Pod management policy for hidden node | `OrderedReady` |
| `hidden.schedulerName` | Name of the scheduler (other than default) to dispatch pods | `""` |
| `hidden.podAffinityPreset` | Hidden node Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `hidden.podAntiAffinityPreset` | Hidden node Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `hidden.nodeAffinityPreset.type` | Hidden Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `hidden.nodeAffinityPreset.key` | Hidden Node label key to match. Ignored if `affinity` is set. | `""` |
| `hidden.nodeAffinityPreset.values` | Hidden Node label values to match. Ignored if `affinity` is set. | `[]` |
| `hidden.affinity` | Hidden node Affinity for pod assignment | `{}` |
| `hidden.nodeSelector` | Hidden node Node labels for pod assignment | `{}` |
| `hidden.tolerations` | Hidden node Tolerations for pod assignment | `[]` |
| `hidden.podLabels` | Hidden node pod labels | `{}` |
| `hidden.podAnnotations` | Hidden node Pod annotations | `{}` |
| `hidden.priorityClassName` | Name of the existing priority class to be used by hidden node pod(s) | `""` |
| `hidden.runtimeClassName` | Name of the runtime class to be used by hidden node pod(s) | `""` |
| `hidden.podSecurityContext.enabled` | Enable Hidden pod(s)' Security Context | `true` |
| `hidden.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `hidden.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `hidden.podSecurityContext.fsGroup` | Group ID for the volumes of the Hidden pod(s) | `1001` |
| `hidden.podSecurityContext.sysctls` | sysctl settings of the Hidden pod(s) | `[]` |
| `hidden.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `hidden.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `hidden.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `hidden.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `hidden.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `hidden.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `hidden.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `hidden.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `hidden.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `hidden.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `hidden.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if hidden.resources is set (hidden.resources is recommended for production). | `none` |
| `hidden.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `hidden.containerPorts.mongodb` | MongoDB(®) hidden container port | `27017` |
| `hidden.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `hidden.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `30` |
| `hidden.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` |
| `hidden.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `10` |
| `hidden.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `hidden.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `hidden.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `hidden.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `5` |
| `hidden.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `20` |
| `hidden.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `10` |
| `hidden.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `hidden.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `hidden.startupProbe.enabled` | Enable startupProbe | `false` |
| `hidden.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `hidden.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `hidden.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `hidden.startupProbe.failureThreshold` | Failure threshold for startupProbe | `30` |
| `hidden.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `hidden.customLivenessProbe` | Override default liveness probe for hidden node containers | `{}` |
| `hidden.customReadinessProbe` | Override default readiness probe for hidden node containers | `{}` |
| `hidden.customStartupProbe` | Override default startup probe for hidden node containers | `{}` |
| `hidden.initContainers` | Add init containers to the MongoDB(®) Hidden pods. | `[]` |
| `hidden.sidecars` | Add additional sidecar containers for the hidden node pod(s) | `[]` |
| `hidden.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the hidden node container(s) | `[]` |
| `hidden.extraVolumes` | Optionally specify extra list of additional volumes to the hidden node statefulset | `[]` |
| `hidden.pdb.create` | Enable/disable a Pod Disruption Budget creation for hidden node pod(s) | `false` |
| `hidden.pdb.minAvailable` | Minimum number/percentage of hidden node pods that should remain scheduled | `1` |
| `hidden.pdb.maxUnavailable` | Maximum number/percentage of hidden node pods that may be made unavailable | `""` |
| `hidden.persistence.enabled` | Enable hidden node data persistence using PVC | `true` |
| `hidden.persistence.medium` | Provide a medium for `emptyDir` volumes. | `""` |
| `hidden.persistence.storageClass` | PVC Storage Class for hidden node data volume | `""` |
| `hidden.persistence.accessModes` | PV Access Mode | `["ReadWriteOnce"]` |
| `hidden.persistence.size` | PVC Storage Request for hidden node data volume | `8Gi` |
| `hidden.persistence.annotations` | PVC annotations | `{}` |
| `hidden.persistence.mountPath` | The path the volume will be mounted at, useful when using different MongoDB(®) images. | `/bitnami/mongodb` |
| `hidden.persistence.subPath` | The subdirectory of the volume to mount to, useful in dev environments | `""` |
| `hidden.persistence.volumeClaimTemplates.selector` | A label query over volumes to consider for binding (e.g. when using local volumes) | `{}` |
| `hidden.persistence.volumeClaimTemplates.requests` | Custom PVC requests attributes | `{}` |
| `hidden.persistence.volumeClaimTemplates.dataSource` | Set volumeClaimTemplate dataSource | `{}` |
| `hidden.service.portName` | MongoDB(®) service port name | `mongodb` |
| `hidden.service.ports.mongodb` | MongoDB(®) service port | `27017` |
| `hidden.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `hidden.service.annotations` | Provide any additional annotations that may be required | `{}` |
| `hidden.service.headless.annotations` | Annotations for the headless service. | `{}` |
### Metrics parameters
| Name | Description | Value |
| -------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- |
| `metrics.enabled` | Enable using a sidecar Prometheus exporter | `false` |
| `metrics.image.registry` | MongoDB(®) Prometheus exporter image registry | `REGISTRY_NAME` |
| `metrics.image.repository` | MongoDB(®) Prometheus exporter image repository | `REPOSITORY_NAME/mongodb-exporter` |
| `metrics.image.digest` | MongoDB(®) Prometheus exporter image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `metrics.image.pullPolicy` | MongoDB(®) Prometheus exporter image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `metrics.username` | String with username for the metrics exporter | `""` |
| `metrics.password` | String with password for the metrics exporter | `""` |
| `metrics.compatibleMode` | Enables old style mongodb-exporter metrics | `true` |
| `metrics.collector.all` | Enable all collectors. Same as enabling all individual metrics | `false` |
| `metrics.collector.diagnosticdata` | Enable collecting metrics from getDiagnosticData | `true` |
| `metrics.collector.replicasetstatus` | Enable collecting metrics from replSetGetStatus | `true` |
| `metrics.collector.dbstats` | Enable collecting metrics from dbStats | `false` |
| `metrics.collector.topmetrics` | Enable collecting metrics from the top admin command | `false` |
| `metrics.collector.indexstats` | Enable collecting metrics from $indexStats | `false` |
| `metrics.collector.collstats` | Enable collecting metrics from $collStats | `false` |
| `metrics.collector.collstatsColls` | List of \<databases\>.\<collections\> to get $collStats for | `[]` |
| `metrics.collector.indexstatsColls` | List of \<databases\>.\<collections\> to get $indexStats for | `[]` |
| `metrics.collector.collstatsLimit` | Disable the collstats, dbstats, topmetrics and indexstats collectors if there are more than \<n\> collections. 0=no limit | `0` |
| `metrics.extraFlags` | String with extra flags to the metrics exporter | `""` |
| `metrics.command` | Override default container command (useful when using custom images) | `[]` |
| `metrics.args` | Override default container args (useful when using custom images) | `[]` |
| `metrics.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if metrics.resources is set (metrics.resources is recommended for production). | `none` |
| `metrics.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `metrics.containerPort` | Port of the Prometheus metrics container | `9216` |
| `metrics.service.annotations` | Annotations for Prometheus Exporter pods. Evaluated as a template. | `{}` |
| `metrics.service.type` | Type of the Prometheus metrics service | `ClusterIP` |
| `metrics.service.ports.metrics` | Port of the Prometheus metrics service | `9216` |
| `metrics.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `metrics.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `metrics.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `15` |
| `metrics.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `5` |
| `metrics.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `10` |
| `metrics.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `3` |
| `metrics.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `metrics.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `metrics.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `5` |
| `metrics.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` |
| `metrics.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `10` |
| `metrics.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `3` |
| `metrics.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `metrics.startupProbe.enabled` | Enable startupProbe | `false` |
| `metrics.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `metrics.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `metrics.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `metrics.startupProbe.failureThreshold` | Failure threshold for startupProbe | `30` |
| `metrics.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `metrics.customLivenessProbe` | Override default liveness probe for MongoDB(®) containers | `{}` |
| `metrics.customReadinessProbe` | Override default readiness probe for MongoDB(®) containers | `{}` |
| `metrics.customStartupProbe` | Override default startup probe for MongoDB(®) containers | `{}` |
| `metrics.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the metrics container(s) | `[]` |
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor Resource for scraping metrics using Prometheus Operator | `false` |
| `metrics.serviceMonitor.namespace` | Namespace which Prometheus is running in | `""` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped | `30s` |
| `metrics.serviceMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping. | `[]` |
| `metrics.serviceMonitor.metricRelabelings` | MetricsRelabelConfigs to apply to samples before ingestion. | `[]` |
| `metrics.serviceMonitor.labels` | Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with | `{}` |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
| `metrics.serviceMonitor.honorLabels` | Set honorLabels so the scrape keeps the metric labels from the target when they conflict with labels added by Prometheus | `false` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus. | `""` |
| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` |
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` |
| `metrics.prometheusRule.namespace` | Namespace where prometheusRules resource should be created | `""` |
| `metrics.prometheusRule.rules` | Rules to be created, check values for an example | `[]` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install my-release \
--set auth.rootPassword=secretpassword,auth.username=my-user,auth.password=my-password,auth.database=my-database \
oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb
```
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
The above command sets the MongoDB(®) `root` account password to `secretpassword`. Additionally, it creates a standard database user named `my-user`, with the password `my-password`, who has access to a database named `my-database`.
> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb
```
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
> **Tip**: You can use the default [values.yaml](https://github.com/bitnami/charts/tree/main/bitnami/mongodb/values.yaml)
## Configuration and installation details
### Resource requests and limits
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
To make this process easier, the chart contains the `resourcesPreset` values, which automatically set the `resources` section according to different presets. Check these presets in [the bitnami/common chart](https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15). However, in production workloads using `resourcesPreset` is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
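For example, a minimal sketch of an explicit `resources` block (the figures below are illustrative only and should be adapted to your workload):
```yaml
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi
```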
### [Rolling vs Immutable tags](https://docs.bitnami.com/tutorials/understand-rolling-tags-containers)
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container is available, significant changes are introduced, or critical vulnerabilities exist.
### Customize a new MongoDB instance
The [Bitnami MongoDB(®) image](https://github.com/bitnami/containers/tree/main/bitnami/mongodb) supports the use of custom scripts to initialize a fresh instance. In order to execute the scripts, two options are available:
- Specify them using the `initdbScripts` parameter as a dict.
- Define an external Kubernetes ConfigMap with all the initialization scripts by setting the `initdbScriptsConfigMap` parameter. Note that this will override the previous option.
The allowed script extensions are `.sh` and `.js`.
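For instance, a minimal sketch of an inline `.js` initialization script passed through `initdbScripts` (the script name, database and collection below are placeholders, not chart defaults):
```yaml
initdbScripts:
  create_collection.js: |
    // Hypothetical example: create a collection in a sample database
    db = db.getSiblingDB('my-database');
    db.createCollection('my-collection');
```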
### Replicaset: Access MongoDB(®) nodes from outside the cluster
In order to access MongoDB(®) nodes from outside the cluster when using a replicaset architecture, a specific service per MongoDB(®) pod will be created. There are two ways of configuring external access:
- Using LoadBalancer services
- Using NodePort services.
#### Use LoadBalancer services
Two alternatives are available to use *LoadBalancer* services:
- Use random load balancer IP addresses using an *initContainer* that waits for the IP addresses to be ready and discovers them automatically. An example deployment configuration is shown below:
```text
architecture=replicaset
replicaCount=2
externalAccess.enabled=true
externalAccess.service.type=LoadBalancer
externalAccess.service.port=27017
externalAccess.autoDiscovery.enabled=true
serviceAccount.create=true
rbac.create=true
```
> NOTE: This option requires creating RBAC rules on clusters where RBAC policies are enabled.
- Manually specify the load balancer IP addresses. An example deployment configuration is shown below, with the placeholder EXTERNAL-IP-ADDRESS-X used in place of the load balancer IP addresses:
```text
architecture=replicaset
replicaCount=2
externalAccess.enabled=true
externalAccess.service.type=LoadBalancer
externalAccess.service.port=27017
externalAccess.service.loadBalancerIPs[0]='EXTERNAL-IP-ADDRESS-1'
externalAccess.service.loadBalancerIPs[1]='EXTERNAL-IP-ADDRESS-2'
```
> NOTE: This option requires knowing the load balancer IP addresses, so that each MongoDB® node's advertised hostname is configured with it.
#### Use NodePort services
Manually specify the node ports to use. An example deployment configuration is shown below, with the placeholder NODE-PORT-X used in place of the node ports:
```text
architecture=replicaset
replicaCount=2
externalAccess.enabled=true
externalAccess.service.type=NodePort
externalAccess.service.nodePorts[0]='NODE-PORT-1'
externalAccess.service.nodePorts[1]='NODE-PORT-2'
```
> NOTE: This option requires knowing the node ports that will be exposed, so each MongoDB® node's advertised hostname is configured with it.
The pod will try to get the external IP address of the node using the command `curl -s https://ipinfo.io/IP-ADDRESS` unless the `externalAccess.service.domain` parameter is set.
### Bootstrapping with an External Cluster
This chart is equipped with the ability to bring online a set of Pods that connect to an existing MongoDB(®) deployment that lies outside of Kubernetes. This effectively creates a hybrid MongoDB(®) Deployment where both Pods in Kubernetes and Instances such as Virtual Machines can partake in a single MongoDB(®) Deployment. This is helpful in situations where one may be migrating MongoDB(®) from Virtual Machines into Kubernetes, for example. To take advantage of this, use the following as an example configuration:
```yaml
externalAccess:
  externalMaster:
    enabled: true
    host: external-mongodb-0.internal
```
:warning: To bootstrap MongoDB(®) with an external master that lies outside of Kubernetes, be sure to set up external access using any of the suggested methods in this chart to have connectivity between the MongoDB(®) members. :warning:
### Add extra environment variables
To add extra environment variables (useful for advanced operations like custom init scripts), use the `extraEnvVars` property.
```yaml
extraEnvVars:
  - name: LOG_LEVEL
    value: error
```
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the `extraEnvVarsCM` or the `extraEnvVarsSecret` properties.
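For example, assuming you have already created a ConfigMap named `mongodb-extra-env` and a Secret named `mongodb-extra-env-secret` (both names are placeholders) holding the variables, they can be referenced as follows:
```yaml
extraEnvVarsCM: mongodb-extra-env
extraEnvVarsSecret: mongodb-extra-env-secret
```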
### Use Sidecars and Init Containers
If additional containers are needed in the same pod (such as additional metrics or logging exporters), they can be defined using the `sidecars` config parameter.
```yaml
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
If these sidecars export extra ports, extra port definitions can be added using the `service.extraPorts` parameter (where available), as shown in the example below:
```yaml
service:
  extraPorts:
    - name: extraPort
      port: 11311
      targetPort: 11311
```
> NOTE: This Helm chart already includes sidecar containers for the Prometheus exporters (where applicable). These can be activated by setting `metrics.enabled=true` at deployment time. The `sidecars` parameter should therefore only be used for any extra sidecar containers.
If additional init containers are needed in the same pod, they can be defined using the `initContainers` parameter. Here is an example:
```yaml
initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
Learn more about [sidecar containers](https://kubernetes.io/docs/concepts/workloads/pods/) and [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/).
### Persistence
The [Bitnami MongoDB(®)](https://github.com/bitnami/containers/tree/main/bitnami/mongodb) image stores the MongoDB(®) data and configurations at the `/bitnami/mongodb` path of the container.
The chart mounts a [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) at this location. The volume is created using dynamic volume provisioning.
If you encounter errors when working with persistent volumes, refer to our [troubleshooting guide for persistent volumes](https://docs.bitnami.com/kubernetes/faq/troubleshooting/troubleshooting-persistence-volumes/).
### Backup and restore MongoDB(R) deployments
Two different approaches are available to back up and restore Bitnami MongoDB® Helm chart deployments on Kubernetes:
- Back up the data from the source deployment and restore it in a new deployment using MongoDB® built-in backup/restore tools.
- Back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool.
#### Method 1: Backup and restore data using MongoDB® built-in tools
This method involves the following steps:
- Use the *mongodump* tool to create a snapshot of the data in the source cluster.
- Create a new MongoDB® Cluster deployment and forward the MongoDB® Cluster service port for the new deployment.
- Restore the data using the *mongorestore* tool to import the backup to the new cluster.
> NOTE: Under this approach, it is important to create the new deployment on the destination cluster using the same credentials as the original deployment on the source cluster.
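As a rough sketch of the dump and restore steps (release names, credentials, database and forwarded ports below are placeholders), the built-in tools can be invoked as follows once the corresponding service ports have been forwarded:
```console
# Dump the source database to a local archive (source service forwarded to port 27017)
mongodump --uri="mongodb://root:SOURCE-PASSWORD@127.0.0.1:27017/my-database?authSource=admin" --archive=mongodb-backup.archive
# Restore the archive into the new deployment (destination service forwarded to port 27018)
mongorestore --uri="mongodb://root:SOURCE-PASSWORD@127.0.0.1:27018/?authSource=admin" --archive=mongodb-backup.archive
```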
#### Method 2: Back up and restore persistent data volumes
This method involves copying the persistent data volumes for the MongoDB® nodes and reusing them in a new deployment with [Velero](https://velero.io/), an open source Kubernetes backup/restore tool. This method is only suitable when:
- The Kubernetes provider is [supported by Velero](https://velero.io/docs/latest/supported-providers/).
- Both clusters are on the same Kubernetes provider, as this is a requirement of [Velero's native support for migrating persistent volumes](https://velero.io/docs/latest/migration-case/).
- The restored deployment on the destination cluster will have the same name, namespace, topology and credentials as the original deployment on the source cluster.
This method involves the following steps:
- Install Velero on the source and destination clusters.
- Use Velero to back up the PersistentVolumes (PVs) used by the deployment on the source cluster.
- Use Velero to restore the backed-up PVs on the destination cluster.
- Create a new deployment on the destination cluster with the same chart, deployment name, credentials and other parameters as the original. This new deployment will use the restored PVs and hence the original data.
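A minimal sketch of the Velero commands involved (the backup name and namespace are placeholders, and your Velero installation may require extra flags for volume snapshots):
```console
# On the source cluster: back up the namespace containing the MongoDB(R) persistent volumes
velero backup create mongodb-backup --include-namespaces mongodb
# On the destination cluster: restore the backed-up resources and volumes
velero restore create --from-backup mongodb-backup
```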
Refer to our detailed [tutorial on backing up and restoring MongoDB® chart deployments on Kubernetes](https://docs.bitnami.com/tutorials/backup-restore-data-mongodb-kubernetes/), which covers both these approaches, for more information.
### Use custom Prometheus rules
Custom Prometheus rules can be defined for the Prometheus Operator by using the `prometheusRule` parameter. A basic configuration example is shown below:
```yaml
metrics:
  enabled: true
  prometheusRule:
    enabled: true
    rules:
      - name: rule1
        rules:
          - alert: HighRequestLatency
            expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
            for: 10m
            labels:
              severity: page
            annotations:
              summary: High request latency
```
### Enable SSL/TLS
This chart supports enabling SSL/TLS between nodes in the cluster, as well as between MongoDB(®) clients and nodes, by setting the `MONGODB_EXTRA_FLAGS` and `MONGODB_CLIENT_EXTRA_FLAGS` container environment variables, together with the correct `MONGODB_ADVERTISED_HOSTNAME`. To enable full TLS encryption, set the `tls.enabled` parameter to `true`.
#### Generate the self-signed certificates via pre-install Helm hooks
The `secrets-ca.yaml` file utilizes the Helm "pre-install" hook to ensure that the certificates will only be generated on chart install.
The `genCA()` function will create a new self-signed x509 certificate authority. The `genSignedCert()` function creates an object with the certificate and key, which are base64-encoded and used in a YAML-like object. The `genSignedCert()` function is passed the CN, an empty IP list (the nil part), the validity and the CA created previously.
A Kubernetes Secret is used to hold the signed certificate created above, and the `initContainer` sets up the rest. Using Helm's hook annotations ensures that the certificates will only be generated on chart install. This will prevent overriding the certificates if the chart is upgraded.
#### Use your own CA
To use your own CA, set `tls.caCert` and `tls.caKey` with appropriate base64 encoded data. The `secrets-ca.yaml` file will utilize this data to create the Secret.
> NOTE: Currently, only RSA private keys are supported.
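For example, a values snippet providing your own CA could look like the following (the base64 strings are truncated placeholders, not a working key pair):
```yaml
tls:
  enabled: true
  caCert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t... # base64-encoded CA certificate (placeholder)
  caKey: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ... # base64-encoded RSA private key (placeholder)
```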
#### Access the cluster
To access the cluster, enable the init container which generates the MongoDB(®) server/client PEM key needed to access the cluster. Please be sure to include the `$my_hostname` section with your actual hostname, and the alternative hostnames section should contain the hostnames that should be allowed access to the MongoDB(®) replicaset. Additionally, if external access is enabled, the load balancer IP addresses are added to the alternative names list.
> NOTE: You will be generating self-signed certificates for the MongoDB(®) deployment. The init container generates a new MongoDB(®) private key which will be used to create a Certificate Authority (CA) and the public certificate for the CA. The Certificate Signing Request will be created as well and signed using the private key of the CA previously created. Finally, the PEM bundle will be created using the private key and public certificate. This process will be repeated for each node in the cluster.
#### Start the cluster
After the certificates have been generated and made available to the containers at the correct mount points, the MongoDB(®) server will be started with TLS enabled. The options for the TLS mode will be one of `disabled`, `allowTLS`, `preferTLS`, or `requireTLS`. This value can be changed via the `MONGODB_EXTRA_FLAGS` field using the `tlsMode` parameter. The client should now be able to connect to the TLS-enabled cluster with the provided certificates.
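As a hedged sketch, the TLS mode could be switched to `requireTLS` by appending the flag through the container environment (this assumes the `MONGODB_EXTRA_FLAGS` variable mentioned above accepts standard `mongod` options):
```yaml
extraEnvVars:
  - name: MONGODB_EXTRA_FLAGS
    value: "--tlsMode=requireTLS"
```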
### Set Pod affinity
This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod affinity in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
As an alternative, you can use the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/main/bitnami/common#affinities) chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
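For instance, a hedged example of the preset parameters for the main MongoDB(®) statefulset (the node label key and values are placeholders to be replaced with labels that exist in your cluster):
```yaml
# Spread MongoDB(R) pods across different nodes
podAntiAffinityPreset: hard
# Prefer nodes in a given zone (placeholder label key and value)
nodeAffinityPreset:
  type: soft
  key: topology.kubernetes.io/zone
  values:
    - us-east-1a
```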
## Troubleshooting
Find more information about how to deal with common errors related to Bitnami's Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).
## Upgrading
If authentication is enabled, it's necessary to set the `auth.rootPassword` (also `auth.replicaSetKey` when using a replicaset architecture) when upgrading for readiness/liveness probes to work properly. When you install this chart for the first time, some notes will be displayed providing the credentials you must use under the 'Credentials' section. Please note down the password, and run the command below to upgrade your chart:
```console
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/mongodb --set auth.rootPassword=[PASSWORD] (--set auth.replicaSetKey=[REPLICASETKEY])
```
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
> Note: You need to substitute the placeholders [PASSWORD] and [REPLICASETKEY] with the values obtained in the installation notes.
### To 12.0.0
This major release renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository.
Affected values:
- `strategyType` is replaced by `updateStrategy`
- `service.port` is renamed to `service.ports.mongodb`
- `service.nodePort` is renamed to `service.nodePorts.mongodb`
- `externalAccess.service.port` is renamed to `externalAccess.service.ports.mongodb`
- `rbac.role.rules` is renamed to `rbac.rules`
- `externalAccess.hidden.service.port` is renamed to `externalAccess.hidden.service.ports.mongodb`
- `hidden.strategyType` is replaced by `hidden.updateStrategy`
- `metrics.serviceMonitor.relabellings` is renamed to `metrics.serviceMonitor.relabelings` (typo fixed)
- `metrics.serviceMonitor.additionalLabels` is renamed to `metrics.serviceMonitor.labels`
Additionally, this release also updates the MongoDB(®) image dependency to its newest major version, 5.0.
### To 11.0.0
In this version, the mongodb-exporter bundled as part of this Helm chart was updated to a new version which, even though it is not a major change, can contain breaking changes (from `0.11.X` to `0.30.X`).
Please visit the release notes from the upstream project at <https://github.com/percona/mongodb_exporter/releases>
### To 10.0.0
[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
### To 9.0.0
MongoDB(®) container images were updated to `4.4.x`, which can affect compatibility with older versions of MongoDB(®). Refer to the following guides to upgrade your applications:
- [Standalone](https://docs.mongodb.com/manual/release-notes/4.4-upgrade-standalone/)
- [Replica Set](https://docs.mongodb.com/manual/release-notes/4.4-upgrade-replica-set/)
### To 8.0.0
- The architecture used to configure MongoDB(®) as a replicaset was completely refactored. Now, both primary and secondary nodes are part of the same statefulset.
- Chart labels were adapted to follow the Helm charts best practices.
- This version introduces `bitnami/common`, a [library chart](https://helm.sh/docs/topics/library_charts/#helm), as a dependency. More documentation about this new utility can be found [here](https://github.com/bitnami/charts/tree/main/bitnami/common#bitnami-common-library-chart). Please make sure that you have updated the chart dependencies before executing any upgrade.
- Several parameters were renamed or disappeared in favor of new ones on this major version. These are the most important ones:
- `replicas` is renamed to `replicaCount`.
- Authentication parameters are reorganized under the `auth.*` parameter:
- `usePassword` is renamed to `auth.enabled`.
- `mongodbRootPassword`, `mongodbUsername`, `mongodbPassword`, `mongodbDatabase`, and `replicaSet.key` are now `auth.rootPassword`, `auth.username`, `auth.password`, `auth.database`, and `auth.replicaSetKey` respectively.
- `securityContext.*` is deprecated in favor of `podSecurityContext` and `containerSecurityContext`.
- Parameters prefixed with `mongodb` are renamed removing the prefix. E.g. `mongodbEnableIPv6` is renamed to `enableIPv6`.
- Parameters affecting Arbiter nodes are reorganized under the `arbiter.*` parameter.
Consequences:
- Backwards compatibility is not guaranteed. To upgrade to `8.0.0`, install a new release of the MongoDB(®) chart, and migrate your data by creating a backup of the database, and restoring it on the new release.
### To 7.0.0
From this version, the way of setting the ingress rules has changed. Instead of using `ingress.paths` and `ingress.hosts` as separate objects, you should now define the rules as objects inside the `ingress.hosts` value, for example:
```yaml
ingress:
  hosts:
    - name: mongodb.local
      path: /
```
### To 6.0.0
From this version, `mongodbEnableIPv6` is set to `false` by default in order to work properly in most k8s clusters. If you want to use IPv6 support, you need to set this variable to `true` by adding `--set mongodbEnableIPv6=true` to your `helm` command.
You can find more information in the [`bitnami/mongodb` image README](https://github.com/bitnami/containers/tree/main/bitnami/mongodb#readme).
### To 5.0.0
When enabling replicaset configuration, backwards compatibility is not guaranteed unless you modify the labels used on the chart's statefulsets.
Use the workaround below to upgrade from versions previous to 5.0.0. The following example assumes that the release name is `my-release`:
```console
kubectl delete statefulset my-release-mongodb-arbiter my-release-mongodb-primary my-release-mongodb-secondary --cascade=false
```
### Add extra deployment options
To add extra deployments (useful for advanced features like sidecars), use the `extraDeploy` property.
The example below shows how to use the [MongoDB replica set pod labeler sidecar](https://github.com/combor/k8s-mongo-labeler-sidecar) to identify the primary pod and dynamically label it as the primary node:
```yaml
extraDeploy:
  - apiVersion: v1
    kind: Service
    metadata:
      name: mongodb-primary
      namespace: default
      labels:
        app.kubernetes.io/component: mongodb
        app.kubernetes.io/instance: mongodb
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: mongodb
    spec:
      type: NodePort
      externalTrafficPolicy: Cluster
      ports:
        - name: mongodb-primary
          port: 30001
          nodePort: 30001
          protocol: TCP
          targetPort: mongodb
      selector:
        app.kubernetes.io/component: mongodb
        app.kubernetes.io/instance: mongodb
        app.kubernetes.io/name: mongodb
        primary: "true"
```
## License
Copyright © 2024 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
<https://www.apache.org/licenses/LICENSE-2.0>
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.