I generate full-size backups of my database every night, but they just collect dust on my production server taking up space.
Not only that, but if I lost the server, the backups would be gone too, so I wanted to investigate some options.

I recently moved my servers and database to Amazon Web Services, and I'm really enjoying the value and flexibility of the platform. So I settled on Amazon Simple Storage Service (S3) to keep everything together. After downloading the SDK from their website, I decided to write a console application I could call from my nightly backup script. I couldn't believe how simple it was to get going.

private static void UploadFile(FileSystemInfo file, string bucket, string prefix) {
        var s3Client = AWSClientFactory.CreateAmazonS3Client();
        var putObjectRequest = new PutObjectRequest {
                BucketName = bucket,
                FilePath = file.FullName,
                Key = prefix + file.Name
        };
        var result = s3Client.PutObject(putObjectRequest);
        Console.WriteLine("Successfully uploaded {0}. Request id: {1}", file.Name, result.RequestId);
}

You also need to let the application know your AWSAccessKey and AWSSecretKey, which you can do in one of two ways: pass them in when creating the client via AWSClientFactory, or put them in App.config.

<appSettings>
	<add key="AWSAccessKey" value="ABCDEABCDEABCDEABCDE"/>
	<add key="AWSSecretKey" value="ABCDEABCDEABCDEABCDEABCDE/ABCDEABCDEABCDE"/>
</appSettings>
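If you would rather not use App.config, the keys can be supplied directly when creating the client. A minimal sketch (the key strings are the same placeholders as above, not real credentials):

```csharp
// Alternative: supply the credentials in code instead of App.config.
// These key values are placeholders, not real credentials.
var s3Client = AWSClientFactory.CreateAmazonS3Client(
        "ABCDEABCDEABCDEABCDE",                       // AWSAccessKey
        "ABCDEABCDEABCDEABCDEABCDE/ABCDEABCDEABCDE"); // AWSSecretKey
```

Keeping them in App.config is tidier for a single deployment, but the constructor overload is handy if the keys come from somewhere else, such as a command line argument.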

That is essentially it: you pass a FileInfo object for the file, the name of the bucket (I created mine in the AWS Management Console) and an optional prefix. The reason I added the prefix is that I wanted to store the backups in a folder rather than in the root of the bucket, so for example I use db-backups/. Some other helpful attributes to include in the request object are:

//Change the storage class to a cheaper option.
StorageClass = S3StorageClass.ReducedRedundancy,
//Increase the timeout to an hour, helpful for large files.
Timeout = 60 * 60 * 1000

Finally, to turn it into a command line application, I added a static Main method.

public static void Main(string[] args) {
        if (args.Length < 2) {
                Console.WriteLine("You must supply a file to upload as the 1st argument and the bucket as the 2nd");
                Console.ReadKey();
                return;
        }
 
        var bucket = args[1];
        var prefix = args.Length > 2 ? args[2] : "";
        var file = new FileInfo(args[0]);
        if (!file.Exists) {
                Console.WriteLine("{0} is not a real file. You must supply a file to upload as the 1st argument ", args[0]);
                Console.ReadKey();
                return;
        }
        UploadFile(file, bucket, prefix);
}

Pretty standard code: you run it with a file as the first parameter and a bucket as the second, plus an optional prefix as the third.
I also wanted the ability to include all files in a folder, such as s3.exe c:\backups\* bucket-name, so I added this before the var file line.

        if (args[0].EndsWith("*")) {
                var di = new DirectoryInfo(args[0].TrimEnd('*'));
                foreach (var wildcardFile in di.EnumerateFiles()) {
                    UploadFile(wildcardFile, bucket, prefix);
                }
                return;
        }

And that’s all, folks. I know there are clients you can download to do this, but I enjoy understanding how it works by doing it myself. This is also very easy to extend into a more versatile S3/AWS command line interface, so I may post future updates. For reference, here is the complete program:

using System;
using System.IO;
 
using Amazon;
using Amazon.S3.Model;
 
namespace S3BucketUpload {
    class Program {
        public static void Main(string[] args) {
            if (args.Length < 2) {
                Console.WriteLine("You must supply a file to upload as the 1st argument and the bucket as the 2nd");
                Console.ReadKey();
                return;
            }
 
            var bucket = args[1];
            var prefix = args.Length > 2 ? args[2] : "";
 
            if (args[0].EndsWith("*")) {
                var di = new DirectoryInfo(args[0].TrimEnd('*'));
                foreach (var wildcardFile in di.EnumerateFiles()) {
                    UploadFile(wildcardFile, bucket, prefix);
                }
                return;
            }
            var file = new FileInfo(args[0]);
            if (!file.Exists) {
                Console.WriteLine("{0} is not a real file. You must supply a file to upload as the 1st argument", args[0]);
                Console.ReadKey();
                return;
            }
            UploadFile(file, bucket, prefix);
        }
 
        private static void UploadFile(FileSystemInfo file, string bucket, string prefix) {
            var s3Client = AWSClientFactory.CreateAmazonS3Client();
            var putObjectRequest = new PutObjectRequest {
                BucketName = bucket,
                FilePath = file.FullName,
                Key = prefix + file.Name,
                StorageClass = S3StorageClass.ReducedRedundancy,
                Timeout = 60 * 60 * 1000
            };
            var result = s3Client.PutObject(putObjectRequest);
            Console.WriteLine("Successfully uploaded {0}. Request id: {1}", file.Name, result.RequestId);
        }
    }
}
