Today I had the task of gathering files from a Drupal website into an archive to send to a client. The file records lived in the database, while the files themselves were spread across the public and private files directories. To create a single archive and spare the client from manually clicking through 1000 file download links, I did the following.
- Wrote a query to get a list of all the files I needed (they were all grouped with the same taxonomy term).
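The query step can be sketched with drush. The table and column names below assume a Drupal 7 schema (`file_managed`, `file_usage`, `taxonomy_index`), and the term ID is made up, so treat this as a starting point rather than a drop-in command:

```shell
# Hypothetical sketch: list the URIs of files attached to nodes
# tagged with taxonomy term 123 (term ID is illustrative)
drush sql-query "SELECT f.uri
  FROM file_managed f
  JOIN file_usage u ON u.fid = f.fid AND u.type = 'node'
  JOIN taxonomy_index t ON t.nid = u.id
  WHERE t.tid = 123" > uris.txt
```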
- Put that list of file paths into a text file, one file path per line.
- Replaced the public:// and private:// URI schemes with the full filesystem path (e.g. '/home/myuser/path/to/files/').
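The replacement step is a one-liner with sed. The prefixes below are made-up examples; substitute your site's actual public and private files directories:

```shell
# Sample input: one Drupal URI per line (filenames are illustrative)
printf '%s\n' 'public://docs/report.pdf' 'private://invoices/inv-001.pdf' > uris.txt

# Rewrite each URI scheme to the matching filesystem prefix;
# '|' is used as the sed delimiter so the slashes need no escaping
sed -e 's|^public://|/home/myuser/public_files/|' \
    -e 's|^private://|/home/myuser/private_files/|' \
    uris.txt > paths.txt
```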
- Then, I used the following shell loop to copy each file listed in that text file:

for file in `cat /path/to/my/textfile.txt`; do
  cp "$file" /path/to/my/directory/
done
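One caveat with the backtick-cat loop: word splitting breaks any path that contains spaces. A while-read loop avoids that; the sketch below sets up a throwaway example under /tmp (all paths are made up) to demonstrate:

```shell
# Throwaway setup: a source file with a space in its name, a list, a target dir
mkdir -p /tmp/fileset /tmp/archive
printf 'hello\n' > '/tmp/fileset/annual report.pdf'
printf '%s\n' '/tmp/fileset/annual report.pdf' > /tmp/list.txt

# Read one path per line; IFS= and -r preserve spaces and backslashes
while IFS= read -r file; do
  cp -- "$file" /tmp/archive/
done < /tmp/list.txt
```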
- Then, all I had to do was tar the directory and I was done. This saved my client hours of tedious work downloading files individually.
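The final tar step looks like this; the directory names are illustrative. The -C flag changes into the parent directory first, so the archive unpacks to a single top-level folder instead of a full absolute path:

```shell
# Throwaway setup: a directory of collected files (names are made up)
mkdir -p /tmp/client-files
printf 'demo\n' > /tmp/client-files/sample.txt

# Create a gzipped archive containing one top-level folder, client-files/
tar -czf /tmp/client-files.tar.gz -C /tmp client-files
```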