A better alternative to AWS Glacier

Post by paulchu » Sat Mar 14, 2015 1:26 am

Google Cloud Storage Nearline offers the same low price as AWS Glacier (US$0.01/GB/month) while providing near-immediate retrieval of your backup data (it takes only a few seconds, whereas AWS Glacier needs 4-5 hours) and maintaining the same high durability.

See the Google Cloud Platform blog post: http://googlecloudplatform.blogspot.tw/ ... price.html.

You can start trying Nearline by creating a bucket with Nearline enabled in the Google Cloud Console and using the QNAP CloudBackup app to back up your data to that bucket. You can also enable Nearline for existing buckets. You can now also use PC utilities, like CloudBerry, to retrieve selected files from your backup.

Re: A better alternative to AWS Glacier

Post by ksteele » Fri Mar 27, 2015 8:17 am

So I've been trying both this and Glacier. I got stung on fast-retrieval costs on Glacier before I knew they existed.

It was difficult to establish the actual download rate: the download would start after 4, 8, or 16 hours and then complete in about two minutes. Unless you were watching during those two minutes, the historical data rate is averaged over the whole job and doesn't update when the transfer actually starts, so it's meaningless for telling you the rate the data came down at. Even a 160 GB retrieval that ran for 19 hours showed a retrieval rate of 2.37 MB/s, when the actual transfer likely ran at 7.x MB/s. For tiny downloads like 1 GB, the figure is shown in KB/s, since the job starts after 4-8 hours and then finishes in a minute.
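To make the distortion concrete, here is a quick back-of-the-envelope sketch. The 6-hour transfer window below is my own guess, inferred from the 7.x MB/s figure; only the 160 GB / 19 hour numbers come from my actual job.

```python
def average_rate_mb_s(size_gb, elapsed_hours):
    """Average throughput in MB/s over an elapsed window (1 GB = 1024 MB)."""
    return size_gb * 1024 / (elapsed_hours * 3600)

# Measured over the whole 19-hour job, including the hours
# Glacier spends preparing the archive before any bytes move:
print(round(average_rate_mb_s(160, 19), 1))  # ~2.4 MB/s

# The same 160 GB measured only over the window when bytes
# actually flowed (assuming roughly a 6-hour transfer):
print(round(average_rate_mb_s(160, 6), 1))   # ~7.6 MB/s
```

Same job, wildly different numbers, which is why the job-wide average tells you nothing useful.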

Any chance the historical figure could show the average rate from when file movement begins? Or show two figures: one over the whole duration of the job, the other from when the restore actually commences?

For our purposes, with 3.x TB up there, we can retrieve 180 GB per day without penalty, but only at a rate of 1.86 MB/s without penalty.

With Google Nearline we can retrieve straight away, but it's rate-limited to 4 MB/s per 1 TB stored. So we would need 2 TB stored, and then 8 MB/s would more than cover our available downlink, which maxes out at 7.46 MB/s. We plan to store more than that, so all good.
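As a sketch of how that documented cap works out for us (assuming the limit really is just 4 MB/s multiplied by terabytes stored, which is how I read it):

```python
def nearline_io_cap_mb_s(stored_tb):
    """Nearline throughput cap as I read the docs: 4 MB/s per TB stored."""
    return 4.0 * stored_tb

DOWNLINK_MB_S = 7.46  # our link's measured maximum

print(nearline_io_cap_mb_s(1))  # 4.0 MB/s -> below our downlink
print(nearline_io_cap_mb_s(2))  # 8.0 MB/s -> more than covers it
```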

It's not entirely clear, but I think I'm reading that it's an ingress/egress limit both ways. So I'm presuming my upload speed is limited until I have more than 1 TB uploaded (which I have not yet).

Glacier uploads tend to run at full speed straight away. Google Nearline seems to ramp up slowly over 15 minutes or so: after a 45-minute, 3.5 GB upload it had only reached 1.35 MB/s, whereas Glacier on my link hit 2 MB/s (my link's maximum).

I presume that once you have 1 TB in your account, you get the 4 MB/s upload speed?

Someone let me know if they have more than 1 TB up there and get faster uploads (or have otherwise got faster uploads).

1st question.

Google Nearline says that overwriting data before 30 days counts as early deletion and incurs fees. I presume you handle this the same way as Glacier? E.g. for Glacier, an append actually consumes more storage rather than overwriting, and then you set it to delete after three months (for Google Nearline, 31 days)?
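For what it's worth, if the QNAP app doesn't manage this itself, a bucket lifecycle rule is the standard way to get delete-after-31-days behaviour on GCS. A rule file like the one below can be applied with `gsutil lifecycle set <file> gs://<bucket>` (the 31-day age is just the early-deletion window from the Nearline pricing page; whether this plays nicely with the app's own retention handling is something I haven't tested):

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 31}
    }
  ]
}
```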

2nd question.

For Google Cloud Nearline, you say you can restore/access files with another app like CloudBerry. In my test, if I uploaded with CloudBerry, I could see the individual files up there and restore them individually, no issue.

If I upload using the QNAP Google Cloud Storage app v1.1.330 and then try to use CloudBerry to get to that bucket, I can see and open the bucket. All that is inside it is a single metadata.tar.gz file.

On the QNAP, it says it transferred 3.54 GB in 3 files. It also let me restore them to another folder on the QNAP (although now it only seems to let me restore them to the original location). If I check my Google web console and browse the bucket, I only see the same metadata file.

In the same web storage browser, I can see the bucket I uploaded to from CloudBerry and browse its contents, no problem.

Using the Google Developers web console, the only difference I can see in the bucket list is that the one from CloudBerry is listed as storage class "nearline", while the one from the QNAP app is listed as "standard". Both buckets are in "asia", and DRA was not turned on for either.

There is a tiny chance I pre-created the bucket in CloudBerry and chose standard, but I don't think so. The QNAP app somehow created the standard storage class, not nearline.

If I create a new bucket on the QNAP with all the same defaults and then check the dev web console afterwards, this time I again got a standard bucket, but I could browse the files using CloudBerry. So maybe standard/nearline has no bearing on CloudBerry's bucket storage/access.
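One possible workaround until the app offers a choice: pre-create the bucket with the storage class set explicitly, then point the QNAP app at it. With the GCS JSON API's buckets.insert call, the request body would look something like this (the bucket name is a placeholder, and the location matches my Asia setup):

```json
{
  "name": "my-backup-bucket",
  "location": "ASIA",
  "storageClass": "NEARLINE"
}
```

The Cloud Console's bucket-creation dialog sets the same fields, so creating it there first should work just as well, as the original post suggests.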

If I edit the job with the issue, I can see it let me upload files to a bucket without creating a folder. In CloudBerry, if I do the same thing, I cannot retrieve the files, as the client forces me to choose a subfolder to retrieve them into. In CloudBerry's case I can browse the bucket, create a subfolder, move the files in, then retrieve them. If I create a new job and select the problem bucket from the list, then instead of retrieving the metabase for the folder list like the other buckets do, the bucket is just underlined in red. You cannot upload to an existing bucket without creating a new folder, so this makes sense.

I've no way of testing whether the QNAP could create a folder and then let me move things into it. Even if I create a new folder in that bucket using CloudBerry, the bucket is still not accessible as a backup destination on the QNAP: you cannot create another folder, nor can you select the CloudBerry-created folder in that bucket for retrievals. So maybe it's because the app let me upload to the bucket without creating a subfolder?
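One possible explanation for the subfolder behaviour (an assumption on my part, but consistent with how GCS works): GCS has no real directories, only a flat namespace of object names, and clients like CloudBerry synthesize "folders" from `/` separators in those names. A toy sketch of that synthesis:

```python
def synthesize_listing(object_names):
    """Split a flat GCS-style namespace into pseudo-folders and top-level files."""
    folders = sorted({name.split("/", 1)[0] + "/"
                      for name in object_names if "/" in name})
    files = sorted(name for name in object_names if "/" not in name)
    return folders, files

# An object uploaded with no '/' in its name has no folder
# a folder-oriented client can anchor a restore to:
print(synthesize_listing(["metadata.tar.gz", "backup/a.bin", "backup/b.bin"]))
# (['backup/'], ['metadata.tar.gz'])
```

If the QNAP app wrote its payload under names with no prefix (or under a scheme CloudBerry doesn't recognise), that would fit what I'm seeing.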

People should just be aware of the storage costs: if you create the bucket from the QNAP with the current version, you may end up with the more expensive storage class.

a) How do I avoid creating standard storage with the QNAP app? A choice in the UI would be good, I guess. Note the default region is America and I am using Asia, so factor that into any test.

b) What's the deal with the metadata.tar.gz file? The QNAP clearly uploaded and downloaded 3.5 GB, but it's unbrowsable/unrestorable from CloudBerry. Maybe you need to force people to create at least a top-level folder in the bucket before allowing any upload.


