Changing Storage Provider

Posted by Community Admin on 04-Aug-2018 19:17

All Replies

Posted by Community Admin on 20-Mar-2017 00:00

We are working on a site that currently uses the file system as the default storage provider and would like to switch to using the database.  If we do this, will the site still be able to find the files that are currently stored on the file system, while putting all new/updated files in the database?  In either case, is there an easy way to move the files into the database and maintain the links to those files throughout the site?

Posted by Community Admin on 20-Mar-2017 00:00

Hello Brian,

I went the other way, DB -> filesystem, using the "Move to another Storage" setting and had no issues; no broken links!

Now, question for you (or anyone that has looked into this)—which storage scenario is better? 

I changed from DB to filesystem because I wanted to stop the DB from growing at the rate it was. The DB was close to 10 GB when I made the switch.

The DB size is an issue for me every time I back up the production site (files and DB) to download it and set up a local copy. Transferring the data takes slightly over an hour.

That's a long time!
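
For a sense of scale, here is a rough back-of-the-envelope calculation (a Python sketch; the size and duration figures come from this post, everything else is illustrative):

    # Rough check of the transfer rate described above: ~10 GB moved in slightly
    # over an hour.  Both figures are taken from this post; the rest is illustrative.
    backup_size_gb = 10          # approximate production DB + files
    transfer_time_hours = 1.1    # "slightly over an hour"

    throughput_mb_s = (backup_size_gb * 1024) / (transfer_time_hours * 3600)
    print(f"{throughput_mb_s:.1f} MB/s (~{throughput_mb_s * 8:.0f} Mbit/s)")
    # Roughly 2.6 MB/s (~21 Mbit/s), so shrinking the payload (compression, or
    # keeping binaries out of the DB) cuts the transfer time almost in proportion.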

Thanks,

Gregory

Posted by Community Admin on 20-Mar-2017 00:00

Thank you!

I may try just making the change in our stage environment to see what happens.

My problem is that the site is on a farm and we have multiple environments.  Production is set up to look at a share that none of the other environments have access to.  So when we refresh the other environments with the prod database (which we have to do from time to time in order to do some testing), everything that links to those files breaks.  We then have to do file dumps for stage and test, and just work around the breaks in dev.  We can neither move nor link those files to our dev environment, but we can point dev at the stage DB.  In addition, the previous devs set up permissions in an unconventional way, so we have problems with that from time to time.  The file system storage area is not huge, so I am thinking (hoping?) the database won't grow like that.

As for DB file sizes, it may not help in your situation, but on other projects we have set up scripts that run after the nightly backups and pack them into ultra-compressed ZIP files using 7-Zip.  The compressed file always has the same name, so there is no build-up of old ZIP files.  Copying across the network is much quicker this way (of course you have to wait for it to decompress).
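
A minimal sketch of that kind of post-backup step, written here in Python around the 7-Zip command line (every path and file name below is a placeholder assumption, not the actual setup):

    # Sketch of a post-backup compression step along the lines described above:
    # pack the nightly backup into an ultra-compressed ZIP with a fixed name so
    # old archives never accumulate.  Paths and the 7-Zip location are placeholders.
    import subprocess
    from pathlib import Path

    SEVEN_ZIP = r"C:\Program Files\7-Zip\7z.exe"   # adjust to the local install
    BACKUP_DIR = Path(r"D:\Backups\Nightly")       # where the backup job writes
    ARCHIVE = Path(r"D:\Backups\site-backup.zip")  # fixed name, replaced each run

    # Delete the previous archive so 7z creates a fresh one instead of updating it.
    ARCHIVE.unlink(missing_ok=True)

    # 7z: a = add to archive, -tzip = ZIP container, -mx=9 = ultra compression.
    subprocess.run(
        [SEVEN_ZIP, "a", "-tzip", "-mx=9", str(ARCHIVE), str(BACKUP_DIR / "*")],
        check=True,
    )

Scheduled to run right after the nightly backup job, the fixed archive name means there is only ever one file to pull across the network.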

To ultimately answer your question, I think there are a bunch of variables that go into it: the size and count of the files, the environments and how they interconnect, and whether you require versioning, to name a few.

This thread is closed