---
title: Configuring automated backups
owner: MySQL
---
<strong><%= modified_date %></strong>
You can configure automated, physical backups for <%= vars.product_full %>.
Developers can create physical backups using the Cloud Foundry Command Line Interface (cf CLI)
or logical backups using `mysqldump`.
For more information about physical backups, see
[Backing Up and Restoring <%= vars.product_full %>](./backup-restore.html).
For more information about logical backups, see
[Backing Up and Restoring with mysqldump](./backup-mysqldump.html).
You can restore a physical backup by following the procedures in
[Restore a Service Instance](./backup-restore.html#restore).
## <a id="enable-backups"></a> About configuring automated backups
You can configure <%= vars.product_short %> to automatically back up databases to external storage.
<%= vars.product_short %> backs up the entire data directory for each service instance.
<%= vars.product_short %> backs up your database on a schedule.
You configure this schedule with a cron expression.
<p> Configuring a cron expression overrides the default schedule for your service instance.
<br><br>
Developers can override the default for their service instance.
For more information, see <a href="./change-default.html#backup-schedule">Backup Schedule</a>.
</p>
To configure backups, follow the procedure for your external storage solution:
+ [Back up using SCP](#scp)
+ [Back up to Amazon S3 or Ceph](#ceph-or-s3)
+ [Back up to Amazon S3 with instance profile](#instance-profile)
+ [Back up to GCS](#gcs)
+ [Back up to Azure Storage](#azure)
## <a id="scp"></a> Back up using SCP
Secure copy protocol (SCP) enables operators to use any storage solution
on the destination VM. This is the fastest method for backing up your database.
When you configure backups with SCP,
<%= vars.product_short %> runs an SCP command that uses SFTP to securely copy backups to a VM
or physical machine operating outside of your deployment.
The operator provisions the backup machine separately from their installation.
To back up your database using SCP:
+ [Create a public and private key pair](#scp-keys)
+ [Configure backups in <%= vars.ops_manager %>](#configure-scp)
### <a id="scp-keys"></a> Create a public and private key pair
<%= vars.product_short %> accesses a remote host as a user with a private key for authentication.
VMware recommends that this user and key-pair is only used for <%= vars.product_short %>.
1. Determine the remote host that you use to store backups for <%= vars.product_short %>.
Ensure that the MySQL service instances can access the remote host.
<p> VMware recommends using a VM outside your deployment
for the destination of SCP backups. As a result,
you might need to enable public IPs for the MySQL VMs.
</p>
1. (Recommended) Create a new user for <%= vars.product_short %> on the destination VM.
1. (Recommended) Create a new public and private key-pair for authenticating as the above user
on the destination VM.
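The key pair can be generated with `ssh-keygen`. A minimal sketch, assuming an `ed25519` key; the file path and comment are examples, not values required by the product:

```shell
# Generate a dedicated key pair for backups (example path and comment).
rm -f /tmp/mysql-backups-key /tmp/mysql-backups-key.pub
ssh-keygen -t ed25519 -N "" -C "mysql-backups" -f /tmp/mysql-backups-key

# Paste the private key (/tmp/mysql-backups-key) into the Private Key field in the tile.
# Append the public key to the backup user's authorized_keys on the destination VM,
# for example:
#   ssh BACKUP-USER@DESTINATION-HOST 'cat >> ~/.ssh/authorized_keys' < /tmp/mysql-backups-key.pub
```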
### <a id="configure-scp"></a> Configure backups in <%= vars.ops_manager %>
Use <%= vars.ops_manager %> to configure <%= vars.product_short %> to back up using SCP.
1. In <%= vars.ops_manager %>, open the <%= vars.product_short %> tile **Backups** pane.
1. Select **SCP**.<br/>
![alt-text="Backup configuration pane shows SCP radio button selected and the fields
in the pane are described in the following table."](./images/scp-backups.png)
1. Configure the fields as follows:
<table>
<tr>
<th>Field</th>
<th>Instructions</th>
</tr>
<tr>
<td>
<strong>Username</strong>
</td>
<td>
Enter the user that you created in
<a href="#scp-keys">Create a public and private key pair</a>.
</td>
</tr>
<tr>
<td>
<strong>Private Key</strong>
</td>
<td>
Enter the private key that you created in
<a href="#scp-keys">Create a public and private key pair</a>. <br>
Store the public key that is used for SSH and SCP access on the destination VM.
</td>
</tr>
<tr>
<td>
<strong>Hostname</strong>
</td>
<td>
Enter the IP address or DNS entry that is used to access the destination VM.
</td>
</tr>
<tr>
<td>
<strong>Destination Directory</strong>
</td>
<td>
Enter the directory that <%= vars.product_short %> uploads backups to.
</td>
</tr>
<tr>
<td>
<strong>SCP Port</strong>
</td>
<td>
Enter the SCP port number for SSH.
This port usually is <code>22</code>.
</td>
</tr>
<tr>
<td>
<strong>Cron Schedule</strong>
</td>
<td>
Enter a cron expression using standard syntax.
The cron expression sets the schedule for taking backups for each service instance.
This overrides the default schedule for your service instance.
Test your cron expression using a website such as
<a href="https://crontab.guru/">Crontab Guru</a>.
<p> Developers can override the default for their service instance.
For more information, see <a href="./change-default.html#backup-schedule">Backup Schedule</a>.
</p>
</td>
</tr>
<tr>
<td>
<strong>Fingerprint</strong>
</td>
<td>
(Optional) Enter the fingerprint for the destination VM public key.
The fingerprint detects any changes to the destination VM.
</td>
</tr>
</table>
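The **Cron Schedule** field described above takes a standard five-field cron expression. A few illustrative schedules:

```
# minute  hour  day-of-month  month  day-of-week
0 2 * * *       # every day at 02:00
0 */6 * * *     # every six hours
30 1 * * 0      # every Sunday at 01:30
```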
## <a id="ceph-or-s3"></a> Back up to Amazon S3 or Ceph
When you configure backups for Amazon S3 or Ceph,
<%= vars.product_short %> runs an Amazon S3 client that saves the backups to one of the following:
+ an Amazon S3 bucket
+ a Ceph storage cluster
+ another S3-compatible endpoint certified by VMware
For information about Amazon S3 buckets,
see the [Amazon documentation](https://aws.amazon.com/documentation/s3/).
For information about Ceph storage clusters,
see the [Ceph documentation](https://docs.ceph.com/en/latest/).
To back up your database to Amazon S3 or Ceph:
+ [Create a custom policy and access key](#access-key-aws)
+ [Configure backups in <%= vars.ops_manager %>](#configure-aws)
### <a id="access-key-aws"></a> Create a custom policy and access key
<%= vars.product_short %> accesses your S3 bucket through a user account.
VMware recommends that this account be only used by <%= vars.product_short %>.
You must apply a minimal policy that enables the user account to upload backups to your S3 bucket.
The policy must grant permission to list buckets and upload objects to them.
The procedure in this section assumes that you are using an Amazon S3 bucket.
If you are using a Ceph or another S3-compatible bucket to create the policy and access key,
follow the documentation for your storage solution.
For more information about Ceph S3 bucket policies,
see the [Ceph documentation](https://docs.ceph.com/en/latest/radosgw/bucketpolicy/).
To create a policy and access key in Amazon Web Services (AWS):
1. Create a policy for your <%= vars.product_short %> user account. <br><br>
In AWS, create a new custom policy by following this procedure in the
[AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-json-editor). <br>
Paste in the following permissions:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "MySQLBackupPolicy",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListMultipartUploadParts",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::MY_BUCKET_NAME/*",
"arn:aws:s3:::MY_BUCKET_NAME"
]
}
]
}
```
1. Record the Access Key ID and Secret Access Key user credentials for a new user account by
following this procedure in
the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console).
Ensure you select **Programmatic access**
and **Attach existing policies to user directly**. You must attach the policy you created in
the previous step.
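If you prefer to script this setup, the console steps above map to AWS CLI commands. The following is a sketch, assuming a configured AWS CLI; `mysql-backups`, `ACCOUNT-ID`, and `MY_BUCKET_NAME` are placeholders:

```shell
# Write the bucket policy from the previous step to a file and check that it parses.
cat > /tmp/mysql-backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MySQLBackupPolicy",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::MY_BUCKET_NAME/*",
        "arn:aws:s3:::MY_BUCKET_NAME"
      ]
    }
  ]
}
EOF
python3 -m json.tool /tmp/mysql-backup-policy.json > /dev/null && echo "policy JSON is valid"

# With AWS credentials configured, the console steps correspond to:
#   aws iam create-user --user-name mysql-backups
#   aws iam create-policy --policy-name mysql-backups \
#       --policy-document file:///tmp/mysql-backup-policy.json
#   aws iam attach-user-policy --user-name mysql-backups \
#       --policy-arn arn:aws:iam::ACCOUNT-ID:policy/mysql-backups
#   aws iam create-access-key --user-name mysql-backups
```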
### <a id="configure-aws"></a> Configure backups in <%= vars.ops_manager %>
Use <%= vars.ops_manager %> to connect <%= vars.product_short %> to your S3 account.
**Prerequisite:** Before beginning this procedure,
you must have an S3 bucket in which to store the backups.
1. In <%= vars.ops_manager %>, open the <%= vars.product_short %> tile **Backups** pane.
1. Select **Ceph or Amazon S3**.
![alt-text="Backup configuration pane shows Ceph or Amazon S3 selected and the fields
in the pane are described in the following table."](./images/S3-backups.png)
1. Configure the fields as follows:
<table>
<tr>
<th>Field</th>
<th>Instructions</th>
</tr>
<tr>
<td>
<strong>Access Key ID</strong> and <strong>Secret Access Key</strong>
</td>
<td>
Enter the S3 Access Key ID and Secret Access Key that you created in
<a href="#access-key-aws">Create a Custom Policy and Access Key</a>.
</td>
</tr>
<tr>
<td>
<strong>Endpoint URL</strong>
</td>
<td>
Enter the S3-compatible endpoint URL for uploading backups. <br>
The URL must start with <code>http://</code> or <code>https://</code>. <br>
The default is <code>https://s3.amazonaws.com</code>.<br>
If you are using a public S3 endpoint, see the S3 Endpoint procedure in
<a href="https://docs.pivotal.io/ops-manager/aws/config-manual.html#pcfaws-om-dirconfig">Step 3: Director Config Page</a>.
</td>
</tr>
<tr>
<td>
<strong>Region</strong>
</td>
<td>
Enter the region where your bucket is located.
</td>
</tr>
<tr>
<td>
<strong>Bucket Name</strong>
</td>
<td>
Enter the name of your bucket. <br>
Do not include an <code>s3://</code> prefix, a trailing <code>/</code>, or underscores.
VMware recommends using the naming convention <code>DEPLOYMENT-backups</code>.
For example, <code>sandbox-backups</code>.
</td>
</tr>
<tr>
<td>
<strong>Force path style access to bucket</strong>
</td>
<td>
The default behavior in <%= vars.product_short %> 2.9 and later uses a virtual-style URL.<br>
Select this check box if you use:
<ul>
<li>
Amazon S3 and your bucket name is not compatible with virtual hosted-style URLs.
</li>
<li>
An S3-compatible endpoint such as Minio that might require path-style URLs.
</li>
</ul>
<p> If you are using a blobstore that uses a specific set of domains in its
server certificate, add a new wildcard domain or use path-style URLs if supported by the
blobstore.
</p>
For general information about the deprecation of S3 path-style URLs, see AWS blog posts:
<a href="https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/">Amazon S3 Path Deprecation Plan – The Rest of the Story</a>
and the subsequent
<a href="https://aws.amazon.com/blogs/storage/update-to-amazon-s3-path-deprecation-plan/">Update to Amazon S3 Path Deprecation Plan</a>.<br><br>
</td>
</tr>
<tr>
<td>
<strong>Bucket Path</strong>
</td>
<td>
(Optional) Enter the path in the bucket to store backups.<br>
You can use this to keep the backups from this foundation separate from those of other
foundations that might also back up to this bucket.
For example, <code>Foundation-1</code>.
This field is only available as of v2.10.3.
</td>
</tr>
<tr>
<td>
<strong>Cron Schedule</strong>
</td>
<td>
Enter a cron expression using standard syntax.
The cron expression sets the schedule for taking backups for each service instance.
This overrides the default schedule for your service instance.
Test your cron expression using a website such as
<a href="https://crontab.guru/">Crontab Guru</a>.
<p> Developers can override the default for their service instance.
For more information, see <a href="./change-default.html#backup-schedule">Backup Schedule</a>.
</p>
</td>
</tr>
</table>
## <a id="instance-profile"></a> Back up to Amazon S3 with instance profile
<p class="note important">
<span class="note__title"> Important</span> Configuring this backup method requires operators to run the
<code>upgrade-all-service-instances</code> errand during <strong>Apply Changes</strong>.</p>
Backups fail until the service instances are upgraded.
When you configure backups for Amazon S3 with Instance Profile, <%= vars.product_short %>
allows the Identity and Access Management (IAM) user or role used by BOSH to pass the
new backups IAM role to a new EC2 instance.
You can use the procedure in this section to allow <%= vars.product_short %> to upload backups to
Amazon S3 without static credentials (Access and Secret Access Key ID).
**Prerequisite:** You must be running <%= vars.app_runtime_full %> (<%= vars.app_runtime_abbr %>) on AWS.
The process for configuring backups to Amazon S3 with an instance profile is:
1. [Create an IAM Role with a custom policy](#ip-create-policy)
1. [Add a policy to the existing <%= vars.ops_manager %> user or role](#ip-add-policy)
1. [Configure a VM Extension](#ip-configure-vm)
1. [Create a VM Extension in <%= vars.ops_manager %>](#ip-vm-ops)
1. [Apply the VM Extension to the dedicated-mysql-broker Job](#ip-vm-job)
1. [Set the VM Extension Name](#ip-vm-tile)
1. [Apply changes and upgrade all service instances](#ip-apply-upgrade)
1. [(Optional) Verify the IAM Role is associated with MySQL service instances](#ip-verify-instances)
### <a id="ip-create-policy"></a> Create an IAM role with a custom policy
First, you must create a policy for your <%= vars.product_short %> user account.
For more information about AWS identity and access management, see the
[AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html).<br>
For more information about users, groups, and roles in AWS, see the
[AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html).
To create an IAM Role with a custom policy:
1. In AWS, create an IAM role with a new custom policy by following this procedure in the
[AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-json-editor).
<br><br>
Paste in the following permissions:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:ListObjects"
],
"Resource": [
"arn:aws:s3:::BUCKET-NAME",
"arn:aws:s3:::BUCKET-NAME/*"
]
}
]
}
```
Where `BUCKET-NAME` is the name of the bucket.
1. Record the Amazon Resource Name (ARN) of this new IAM role.
This is used in [Add a Policy to the Existing <%= vars.ops_manager %> User or Role](#ip-add-policy).
1. Record the name of the Instance Profile associated with this new IAM role.
This is used in [Create a VM Extension in <%= vars.ops_manager %>](#ip-vm-ops).
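The same role setup can be sketched with the AWS CLI. The role must trust EC2 so that instances can assume it; the role and instance profile names below are examples, not values from this product:

```shell
# Trust policy that lets EC2 instances assume the role.
cat > /tmp/mysql-backups-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
python3 -m json.tool /tmp/mysql-backups-trust.json > /dev/null && echo "trust policy is valid"

# With AWS credentials configured:
#   aws iam create-role --role-name mysql-backups-role \
#       --assume-role-policy-document file:///tmp/mysql-backups-trust.json
#   aws iam create-instance-profile --instance-profile-name mysql-backups-profile
#   aws iam add-role-to-instance-profile \
#       --instance-profile-name mysql-backups-profile --role-name mysql-backups-role
# Record the role ARN and the instance profile name from the command output.
```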
### <a id="ip-add-policy"></a> Add a Policy to the existing <%= vars.ops_manager %> user or role
Next, you must add a new policy to the existing <%= vars.ops_manager %> IAM user or role that
is configured in the **AWS Config** pane of the _BOSH Director for AWS_ tile.
This policy allows the IAM user or role used by BOSH to pass the new backups IAM role to a new EC2 instance.
Depending on your configuration, this is either a user or a role.
To find the existing user or role and add a policy:
1. Log into <%= vars.ops_manager %>. To log in, see [Log In to <%= vars.ops_manager %> For the First Time](https://docs.pivotal.io/ops-manager/login.html).
1. Click the **BOSH Director for AWS** tile.
1. Select **AWS Config**.
<br><br>
The following tabs expand to show instructions depending on the type of **AWS Config** that is already configured: <br><br>
<style>
.btn.btn-default,
.tab .tablinks {
color: #2185c5;
}
.tab {
overflow: hidden;
border: 1px solid #ccc;
background-color: #f1f1f1;
}
/* Style the buttons that are used to open the tab content */
.tab button {
background-color: inherit;
float: left;
border: none;
outline: none;
cursor: pointer;
padding: 14px 16px;
transition: 0.3s;
}
/* Change background color of buttons on hover */
.tab button:hover {
background-color: #ddd;
}
/* Create an active/current tablink class */
.tab button.active {
background-color: #ccc;
}
/* Style the tab content */
.tabcontent {
display: none;
padding: 6px 12px;
border: 1px solid #ccc;
border-top: none;
}
</style>
<script>
function openDocs(evt, docsName) {
// Declare all variables
var i, tabcontent, tablinks;
// Get all elements with class="tabcontent" and hide them
tabcontent = document.getElementsByClassName("tabcontent");
for (i = 0; i < tabcontent.length; i++) {
tabcontent[i].style.display = "none";
}
// Get all elements with class="tablinks" and remove the class "active"
tablinks = document.getElementsByClassName("tablinks");
for (i = 0; i < tablinks.length; i++) {
tablinks[i].className = tablinks[i].className.replace(" active", "");
}
// Show the current tab, and add an "active" class to the button that opened the tab
document.getElementById(docsName).style.display = "block";
evt.currentTarget.className += " active";
}
</script>
<div class="tab">
<!-- Tab headers and links -->
<button class="tablinks" onclick="openDocs(event, 'tab1')">Use AWS Keys</button>
<button class="tablinks active" onclick="openDocs(event, 'tab2')">Use AWS Instance Profile</button>
</div>
<div id="tab1" class="tabcontent">
<p>
<ul>
<li>You must find the existing IAM user associated with the static credentials that are used here.
The name of the IAM user is <i>not</i> listed here in the <strong>BOSH Director for AWS</strong> tile UI.
<br><br>
To retrieve your AWS key information and find the existing IAM user,
use the AWS Identity and Access Management (IAM) credentials that you generated in
<a href="https://docs.pivotal.io/ops-manager/aws/prepare-env-manual.html#create-iam">
Step 3: Create an IAM User for <%= vars.ops_manager %></a>
in <em>Preparing to Deploy <%= vars.ops_manager %> on AWS Manually</em>.
</li>
</ul>
</p>
![alt-text="The AWS Management Console Config pane in Ops Manager. The Use AWS Keys radio
button is selected."](./images/aws-config-aws-keys.png)
</div>
<div id="tab2" class="tabcontent" style="display:block">
<p>
<ul>
<li>Find the name of the existing IAM role in the <strong>AWS IAM Instance Profile</strong> field.
<br><br>
For more information about this role, see
<a href="https://docs.pivotal.io/ops-manager/aws/prepare-env-manual.html#create-iam">Step 3: Create an IAM User for <%= vars.ops_manager %></a>
in <em>Preparing to Deploy <%= vars.ops_manager %> on AWS Manually</em>.
</li>
</ul>
</p>
![alt-text="The AWS Management Console Config pane in Ops Manager. The Use AWS Instance Profile radio
button is selected."](./images/aws-config-instance-profile.png)
</div>
1. On the AWS Management Console, add a new policy to that IAM User or Role:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowToCreateInstanceWithMySQLBackupsInstanceProfile",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": [
"arn:aws:iam::ACCOUNT-ID:role/MYSQL-BACKUPS-ROLE"
]
}
]
}
```
Where `ACCOUNT-ID` is your AWS account ID and `MYSQL-BACKUPS-ROLE` is the name of the IAM role that you created in [Create an IAM role with a custom policy](#ip-create-policy).
### <a id="ip-configure-vm"></a> Configure a VM Extension
#### <a id="ip-vm-ops"></a> Create a VM Extension in <%= vars.ops_manager %>
There are two methods that you can use to create a VM Extension in <%= vars.ops_manager %>:
* **Using the <%= vars.ops_manager %> API directly.**
For more information, see
[Create or Update a VM Extension](https://docs.pivotal.io/ops-manager/install/custom-vm-extensions.html#create-vm-extension)
in the <%= vars.ops_manager %> documentation.
* **Using the <%= vars.ops_manager %> CLI (om) to create the VM extension.**
For information about `create-vm-extension`, see
[om create-vm-extension](https://github.com/pivotal-cf/om/blob/main/docs/create-vm-extension/README.md) in GitHub.
JSON example to specify the instance profile name:
```json
{
"name": "VM-EXTENSION-NAME",
"cloud_properties": {
"iam_instance_profile": "INSTANCE-PROFILE-NAME"
}
}
```
Where:
* `VM-EXTENSION-NAME` is the unique VM extension name that <%= vars.ops_manager %> manages
* `INSTANCE-PROFILE-NAME` is the name of the instance profile created in
[Create an IAM Role with a Custom Policy](#ip-create-policy).
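As a sketch of the om CLI route, you can validate the cloud properties locally before creating the extension. The extension name and credentials below are examples:

```shell
# Write the VM extension definition and check that it parses as JSON.
cat > /tmp/vm-extension.json <<'EOF'
{
  "name": "mysql-backups-instance-profile",
  "cloud_properties": {
    "iam_instance_profile": "INSTANCE-PROFILE-NAME"
  }
}
EOF
python3 -m json.tool /tmp/vm-extension.json > /dev/null && echo "VM extension config is valid"

# With om targeting your Ops Manager (URL and credentials are placeholders):
#   om -t https://OPSMAN-FQDN -u admin -p PASSWORD create-vm-extension \
#       --name mysql-backups-instance-profile \
#       --cloud-properties '{"iam_instance_profile": "INSTANCE-PROFILE-NAME"}'
```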
#### <a id="ip-vm-job"></a> Apply the VM Extension to the dedicated-mysql-broker job
You can use either of the following methods to apply the VM extension to the `dedicated-mysql-broker`
job in the <%= vars.product_short %> tile:
* **Using the <%= vars.ops_manager %> API directly.** For more information, see
[Apply VM Extensions to a Job](https://docs.pivotal.io/ops-manager/install/custom-vm-extensions.html#apply-vm-extensions)
in the <%= vars.ops_manager %> documentation.
* **Using the om CLI to configure the tile.** Add the `additional_vm_extensions`
key in the `resource-config` section of the product configuration and use the om CLI.
For information about configuring using a YAML configuration file, see
[om configure-product](https://github.com/pivotal-cf/om/tree/main/docs/configure-product#configuring-via-yaml-config-file)
in GitHub.
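For the om route, the product configuration fragment might look like the following. This is a sketch: the extension name is an example, and the product name depends on your tile.

```
product-name: pivotal-mysql
resource-config:
  dedicated-mysql-broker:
    additional_vm_extensions:
    - mysql-backups-instance-profile
```

Pass this file to `om configure-product --config CONFIG-FILE.yml`.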
#### <a id="ip-vm-tile"></a> Set the VM Extension name
Now that you have created and applied the VM extension, you must set it in the <%= vars.product_short %> tile.
To set the VM extension name:
1. Log into <%= vars.ops_manager %>. To log in, see [Log In to <%= vars.ops_manager %> For the First Time](https://docs.pivotal.io/ops-manager/login.html).
1. Click the **<%= vars.product_short %>** tile.
1. Select **Backups**.
1. Select **Amazon S3 (with Instance Profiles)**.
![alt-text="Shows Amazon S3 (with Instance Profiles) selected. The remaining fields in the pane are
described in the following table."](./images/S3-instance-profiles.png)
1. Configure the fields as follows:
<table>
<tr>
<th>Field</th>
<th>Instructions</th>
</tr>
<tr>
<td>
<strong>Instance Profile VM Extension Name</strong>
</td>
<td>
Enter the <code>VM-EXTENSION-NAME</code> that you created in
<a href="#ip-vm-ops">Create a VM Extension in <%= vars.ops_manager %></a>.
</td>
</tr>
<tr>
<td>
<strong>Endpoint URL</strong>
</td>
<td>
Enter the S3-compatible endpoint URL for uploading backups. <br>
The URL must start with <code>http://</code> or <code>https://</code>. <br>
The default is <code>https://s3.amazonaws.com</code>.<br>
If you are using a public S3 endpoint, see the S3 Endpoint procedure in
<a href="https://docs.pivotal.io/ops-manager/aws/config-manual.html#pcfaws-om-dirconfig">
Step 3: Director Config Page</a> in <em>Configuring BOSH Director on AWS</em>.
</td>
</tr>
<tr>
<td>
<strong>Region</strong>
</td>
<td>
Enter the region where your bucket is located.
</td>
</tr>
<tr>
<td>
<strong>Bucket Name</strong>
</td>
<td>
Enter the name of your bucket. <br>
Do not include an <code>s3://</code> prefix, a trailing <code>/</code>, or underscores.
VMware recommends using the naming convention <code>DEPLOYMENT-backups</code>.
For example, <code>sandbox-backups</code>.
</td>
</tr>
<tr>
<td>
<strong>Force path style access to bucket</strong>
</td>
<td>
The default behavior in <%= vars.product_short %> 2.9 and later uses a virtual-style URL.<br>
Select this check box if you use:
<ul>
<li>
Amazon S3 and your bucket name is not compatible with virtual hosted-style URLs.
</li>
<li>
An S3-compatible endpoint such as Minio that might require path-style URLs.
</li>
</ul>
<p> If you are using a blobstore that uses a specific set of domains in its
server certificate, add a new wildcard domain or use path-style URLs if supported by the
blobstore.</p>
For general information about the deprecation of S3 path-style URLs, see AWS blog posts:
<a href="https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/">Amazon S3 Path Deprecation Plan – The Rest of the Story</a>
and the subsequent
<a href="https://aws.amazon.com/blogs/storage/update-to-amazon-s3-path-deprecation-plan/">Update to Amazon S3 Path Deprecation Plan</a>.<br><br>
</td>
</tr>
<tr>
<td>
<strong>Bucket Path</strong>
</td>
<td>
(Optional) Enter the path in the bucket to store backups.<br>
You can use this to keep the backups from this foundation separate from those of other
foundations that might also back up to this bucket.
For example, <code>Foundation-1</code>.
This field is only available as of v2.10.3.
</td>
</tr>
<tr>
<td>
<strong>Cron Schedule</strong>
</td>
<td>
Enter a cron expression using standard syntax.
The cron expression sets the schedule for taking backups for each service instance.
This overrides the default schedule for your service instance.
Test your cron expression using a website such as
<a href="https://crontab.guru/">Crontab Guru</a>.
<p> Developers can override the default for their service instance.
For more information, see <a href="./change-default.html#backup-schedule">Backup Schedule</a>.
</p>
</td>
</tr>
</table>
1. Click **Save**.
### <a id="ip-apply-upgrade"></a> Apply changes and upgrade all service instances
The changes to your service instances are not complete until you apply your configuration changes.
This allows the service instances to begin using the instance profile instead of
static credentials for backup and restore.
Static credentials are not provided to existing service instances and backups fail
until you upgrade the service instances.
To apply changes and upgrade all service instances for <%= vars.product_short %>:
1. Return to the <%= vars.ops_manager %> Installation Dashboard.
1. Click **Review Pending Changes**.
1. Deselect the check boxes for all products except **BOSH Director** and
**<%= vars.product_full %>**.
1. Verify that the check box for the <code>Upgrade all On-demand MySQL Service Instances</code> errand is activated.
1. Click **Apply Changes**.
### <a id="ip-verify-instances"></a> (Optional) Verify the IAM role is associated with MySQL service instances
To verify that the IAM role is associated with the MySQL service instances:
1. On the AWS Management Console, find any EC2 instance that begins with `mysql/GUID`.
1. Verify that the IAM Role is present in the details for the instance.
## <a id="gcs"></a> Back up to GCS
When you configure backups for a Google Cloud Storage (GCS) bucket,
<%= vars.product_short %> runs a GCS SDK that saves backups to a GCS bucket.
For information about GCS buckets,
see the [GCS documentation](https://cloud.google.com/storage/).
To back up your database to Google Cloud Storage (GCS):
+ [Create a service account and private key](#service-account-gcs)
+ [Configure backups in <%= vars.ops_manager %>](#configure-gcs)
### <a id="service-account-gcs"></a> Create a service account and private key
<%= vars.product_short %> accesses your GCS bucket through a service account.
VMware recommends that this account is only used by <%= vars.product_short %>.
You must apply a minimal policy that enables the service account to upload backups to your GCS bucket.
The service account needs the following permissions:
* List and upload to buckets
* (Optional) Create buckets if they do not already exist
To create a service account and private key in GCS:
1. Create a new service account by following this procedure in
the [GCS documentation](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account). <br>
When you create the service account:
1. Enter a unique name for the service account name.
1. Add the **Storage Admin** role.
1. Create and download a private key JSON file.
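The console steps above can also be scripted with the gcloud CLI. A sketch, using example names; `my-gcp-project` and `mysql-backups` are placeholders:

```shell
# Derive the service account email from example names.
PROJECT_ID="my-gcp-project"
SA_NAME="mysql-backups"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
echo "service account: ${SA_EMAIL}"

# With gcloud authenticated against your project:
#   gcloud iam service-accounts create "${SA_NAME}" --project "${PROJECT_ID}"
#   gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
#       --member "serviceAccount:${SA_EMAIL}" --role roles/storage.admin
#   gcloud iam service-accounts keys create mysql-backups-key.json \
#       --iam-account "${SA_EMAIL}"
```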
### <a id="configure-gcs"></a> Configure backups in <%= vars.ops_manager %>
Use <%= vars.ops_manager %> to connect <%= vars.product_short %> to your GCS account.
1. In <%= vars.ops_manager %>, open the <%= vars.product_short %> tile **Backups** pane.
1. Select **GCS**.
![alt-text="Shows GCS radio button selected and the fields
in the pane are described in the following table."](./images/gcs-backups.png)
1. Configure the fields as follows:
<table>
<tr>
<th>Field</th>
<th>Instructions</th>
</tr>
<tr>
<td>
<strong>Project ID</strong>
</td>
<td>
Enter the Project ID for the Google Cloud project that you are using.
</td>
</tr>
<tr>
<td>
<strong>Bucket name</strong>
</td>
<td>
Enter the bucket name that <%= vars.product_short %> uploads backups to.
</td>
</tr>
<tr>
<td>
<strong>Bucket Path</strong>
</td>
<td>
(Optional) Enter the path in the bucket to store backups.<br>
You can use this to keep the backups from this foundation separate from those of other
foundations that might also back up to this bucket.
For example, <code>Foundation-1</code>.
This field is only available as of v2.10.3.
</td>
</tr>
<tr>
<td>
<strong>Service Account JSON</strong>
</td>
<td>
Enter the contents of the service account JSON file that you downloaded
when creating a service account in
<a href="#service-account-gcs">Create a Service Account and Private Key</a>.
</td>
</tr>
<tr>
<td>
<strong>Cron Schedule</strong>
</td>
<td>
Enter a cron expression using standard syntax.
The cron expression sets the schedule for taking backups for each service instance.
This overrides the default schedule for your service instance.
Test your cron expression using a website such as
<a href="https://crontab.guru/">Crontab Guru</a>.
<p> Developers can override the default for their service instance.
For more information, see <a href="./change-default.html#backup-schedule">Backup Schedule</a>.
</p>
</td>
</tr>
</table>
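As a quick local alternative to an online validator, you can confirm that an expression uses the standard five-field syntax (minute, hour, day of month, month, day of week) before pasting it into the **Cron Schedule** field. This sketch checks only the field count, not the value ranges, and is not the validation that the tile itself performs:

```shell
#!/bin/sh
# Sanity-check that a cron expression has the standard five fields
# before pasting it into the Cron Schedule field.
# This checks only the field count, not the value ranges.
CRON_EXPR="0 2 * * *"   # example: every day at 02:00

FIELD_COUNT=$(echo "$CRON_EXPR" | awk '{print NF}')
if [ "$FIELD_COUNT" -eq 5 ]; then
  echo "field count OK: $CRON_EXPR"
else
  echo "expected 5 fields, got $FIELD_COUNT" >&2
  exit 1
fi
```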
## <a id="azure"></a> Back up to Azure Storage
When you configure backups for Azure Storage,
<%= vars.product_short %> runs an Azure SDK that saves backups to an Azure storage account.
For information about Azure Storage,
see the [Azure documentation](https://docs.microsoft.com/en-us/azure/storage/).
To back up your database to Azure Storage:
+ [Create a storage account and access key](#storage-account-azure)
+ [Configure backups in <%= vars.ops_manager %>](#configure-azure)
### <a id="storage-account-azure"></a> Create a storage account and access key
<%= vars.product_short %> accesses your Azure Storage account through a storage access key.
VMware recommends that this account be used only by <%= vars.product_short %>.
You must apply a minimal policy that enables the storage account to upload backups to your Azure Storage.
The storage account needs the following permissions:
* List and upload to buckets
* (Optional) Create buckets if they do not already exist
To create a storage account and access key:
1. Create a new storage account by following this procedure in
the [Azure documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal#create-a-storage-account).
1. View your access key by following this procedure in
the [Azure documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage#view-access-keys-and-connection-string).
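The portal steps above can also be scripted with the `az` CLI. This is a sketch only; the resource group, storage account name, and location are illustrative placeholders, not values defined by <%= vars.product_short %>:

```shell
# Create a resource group and a general-purpose storage account
# (names and location are illustrative placeholders).
az group create --name backups-rg --location eastus

az storage account create \
  --name mybackupstorage \
  --resource-group backups-rg \
  --sku Standard_LRS

# List the access keys. You paste one key value into the
# Azure Storage Access Key field later.
az storage account keys list \
  --account-name mybackupstorage \
  --resource-group backups-rg \
  --output table
```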
### <a id="configure-azure"></a> Configure backups in <%= vars.ops_manager %>
To back up your database to your Azure Storage account:
1. In <%= vars.ops_manager %>, open the <%= vars.product_short %> tile **Backups** pane.
1. Select **Azure**.
![alt-text="Backup configuration pane shows Azure radio button selected and the fields
in the pane are described in the following table."](./images/azure-backups.png)
1. Configure the fields as follows:
<table>
<tr>
<th>Field</th>
<th>Instructions</th>
</tr>
<tr>
<td>
<strong>Account</strong>
</td>
<td>
Enter the Azure Storage account name that you created in
<a href="#storage-account-azure">Create a Storage Account and Access Key</a>.
</td>
</tr>
<tr>
<td>
<strong>Azure Storage Access Key</strong>
</td>
<td>
Enter one of the storage access keys that you viewed in
<a href="#storage-account-azure">Create a Storage Account and Access Key</a>.
</td>
</tr>
<tr>
<td>
<strong>Container Name</strong>
</td>
<td>
Enter the container name that <%= vars.product_short %> uploads backups to.
</td>
</tr>
<tr>
<td>
<strong>Blob Store Base URL</strong>
</td>
<td>
To use on-premises blob storage, enter the hostname of the blob storage.
By default, backups are sent to the public Azure blob storage.
The Blob Store Base URL must follow the format:
<code>my-storage-account.my-custom.domain/MY-CONTAINER-NAME</code>.
</td>
</tr>
<tr>
<td>
<strong>Bucket Path</strong>
</td>
<td>
(Optional) Enter the path in the bucket to store backups.<br>
You can use this to keep the backups from this foundation separate from those of other
foundations that might also back up to this bucket.
For example, <code>Foundation-1</code>.
This field is only available as of v2.10.3.
</td>
</tr>
<tr>
<td>
<strong>Cron Schedule</strong>
</td>
<td>
Enter a cron expression using standard syntax.
The cron expression sets the schedule for taking backups for each service instance.
This overrides the default schedule for your service instance.
Test your cron expression using a website such as
<a href="https://crontab.guru/">Crontab Guru</a>.
<p> Developers can override the default for their service instance.
For more information, see <a href="./change-default.html#backup-schedule">Backup Schedule</a>.
</p>
</td>
</tr>
</table>