As part of a current project implementing BranchCache for a customer, the question came up of how efficient it is at pushing media files baked into zip files with small changes between versions. How well can it figure out what has actually changed? Well, since we covered Johan’s 107GB images yesterday, I thought I would throw together a few zip files and give it a go, taking a break from some very painful iPXE Anywhere development. (PXE can be a bitch, in case anyone missed that.)
The idea is to demonstrate that BranchCache’s ability to make use of Microsoft Data Deduplication will greatly reduce both the size on disk of the cached content and the amount of data transferred over the WAN from the remote source. Further, we hope to demonstrate that de-duplication can recognise chunk content as being similar even after compression.
The Setup
There are three zip files to be transferred - iPXEDataReporting_1.zip, iPXEDataReporting_2.zip and iPXEDataReporting_3.zip. Files 1 and 2 are fairly similar in size and content. File 3 is much larger and contains multiple copies of the original content, some of it compressed a second time.
File 1 – a zip file with a single .mp4 movie in it, compressed down from about 183MB to 91MB.
File 2 – a zip file with the same .mp4 as above plus a 15KB .rtf file added to it, resulting in a file about 4KB larger than File 1.
File 3 – a zip file with lots of copies of the same video, plus another zip file nested inside it containing yet another copy of the same .mp4, resulting in a 550MB file.
We host the files on an IIS server and will transfer them using BITS (BranchCache support is in BITS on ALL Windows versions, FFS!). Please keep in mind that the BranchCache hashes are generated at first download unless triggered in some other way.
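For reference, this is roughly the client-side setup I am assuming: BranchCache running in distributed cache mode and the BITS client left at its defaults. The netsh commands below are the standard way to set the mode and check the service (run from an elevated prompt):

# Put the BranchCache client into distributed cache mode
netsh branchcache set service mode=distributed

# Check that the service is running and see the current cache configuration
netsh branchcache show status all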
The Test
First off, flush the BranchCache cache to make sure there is no existing data that could match (not very likely), and so that we can show the size of the BranchCache cache once all three files have been transferred.
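From an elevated prompt the flush is a one-liner; re-running the status command from earlier afterwards confirms the data cache is back to (more or less) zero:

# Wipe the local BranchCache data cache so we start from a clean slate
netsh branchcache flush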
Then kick off the first BITS job with bitsadmin.exe (I know I should have used PowerShell, but I am an old-school kind of guy and I know how many painstaking lines of C++ went into bitsadmin.exe!).
So let’s kick off File 1 and put BranchCache to the test!
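The job itself looks something like this; the server name and local path are placeholders, and the PowerShell purists can use Start-BitsTransfer instead:

# Download File 1 as a normal-priority BITS job (server name and paths are placeholders)
bitsadmin /transfer "iPXE_File1" /download /priority normal http://contentserver/media/iPXEDataReporting_1.zip C:\Temp\iPXEDataReporting_1.zip

# Rough PowerShell equivalent:
# Start-BitsTransfer -Source http://contentserver/media/iPXEDataReporting_1.zip -Destination C:\Temp\iPXEDataReporting_1.zip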
The transfer starts and takes a few minutes, as I have a BITS bandwidth policy applied to my machine.
As expected, the entire download came from the server and nothing came via the peer cache. The best way to check is to look at the byte counts reported in the event 60 entries in the BITS-Client event log, way simpler than looking at PerfStats.
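If you would rather not click through Event Viewer, something like this pulls the recent event 60 entries out of the BITS-Client operational log (the exact wording of the byte counts in the message varies a little between OS versions):

# List the most recent BITS job-complete events (ID 60) with their reported byte counts
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-Bits-Client/Operational'; Id = 60 } -MaxEvents 5 |
    Select-Object TimeCreated, Message | Format-List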
Ok, let’s kick off File 2. It finished in a few seconds, so I knew instantly that it had worked very well; you can see the difference in download speed.
Looking at the data figures confirms that things went well: more than 99% was taken from the cache!
On to test number 3. How will the big zip file fare? I kicked it off and the speed varied greatly, from about 100MB/s down to about 200KB/s, so it was hard to tell from the speed alone.
The Result
What do you think happened? Have a look at the numbers down below:
File 1 – zero bytes from the cache, as expected since the content was new.
File 2 – 102KB from the server and the rest from the cache; the expected result, as noted above.
File 3 – here we pulled down a mere 4MB from the server, and the rest came from the cache. The result is just too confusing to express as a percentage, so I’ll stick to the raw numbers.
Ok, so we transferred more or less the entire file from the cache; nothing even close to a full copy of the original movie went over the wire. Keep in mind, the file was 550MB. And in our cache we only used up 93MB of disk space.
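That 93MB figure is easy to check for yourself. On Windows 8 / Server 2012 and later the BranchCache PowerShell module reports the data cache usage, and the netsh status command used earlier shows the same information:

# Show the local BranchCache data cache, including its current disk usage
Get-BCDataCache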