Comments on: Deduplication in System Center Data Protection Manager
https://www.thomasmaurer.ch/2015/01/deduplication-in-system-center-data-protection-manager/

By: William (Tue, 08 Sep 2015 16:37:03 +0000)
How about long-term storage? How does this fit into this design? We have a direct-attached fiber ATL (automated tape library).

By: Enrico S. (Sun, 11 Jan 2015 11:52:36 +0000)
@craig jones

It depends on your workload, but within typical Windows environments you should easily get a dedup rate of more than 40%. I see typical file servers getting 40 to 60%, and backups of Windows environments climb to 70 or 80%.

I don't have experience with Firestreamer, but I do with QuadStor VTL, which is Linux-based and open source.
The dedup rate is pretty much the same, but you have to play around a bit.

Recommendations for VTLs are usually:

don't use compression in the VTL itself or in your preferred backup solution.

But there are other considerations.
QuadStor VTL, for example, has a built-in dedup feature as well.
We use DPM 2012/R2 and virtualized backups, so… we don't use compression or dedup from any product on top.
We only use the Server 2012 R2 dedup feature.
The built-in dedup uses a high amount of RAM, so we would have to set up all machines with a high amount of RAM.

If you stream your VTL data to another location, you could get better results.

I hope you get the picture.
Maybe I should write an article about the different scenarios we tested…?! ;)
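For reference, a minimal sketch of enabling the in-box Server 2012 R2 dedup feature on a backup volume and checking the savings rate could look like the following; the D: drive letter is only a placeholder assumption, not something from the comment:

  Install-WindowsFeature -Name FS-Data-Deduplication
  # The HyperV usage type is tuned for open VHDX files, i.e. the virtualized DPM backup scenario
  Enable-DedupVolume -Volume "D:" -UsageType HyperV
  # Report the achieved savings after optimization has run
  Get-DedupVolume -Volume "D:" | Format-List Volume, SavedSpace, SavingsRate

The SavingsRate value reported here is what the 40 to 80% figures above refer to.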

By: Enrico S. (Sun, 11 Jan 2015 11:22:11 +0000)
By the way:

Whether you can leave the DPM machines running on the same storage depends…

VDI environments usually avoid keeping the pagefile.sys on the same storage the VM runs on.
So usually the pagefile.sys is separated from the VM itself and won't be deduped.

But… if you have a high-IO storage system, you could leave it there.
We do that, but like I said… flash-based systems with a high maximum IO.
If you have HDD-based storage, try to separate the pagefile.sys, as the recommendations from VDI environments suggest.
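If separate storage for the page files is not an option, one possible workaround (a sketch only; the D:\PageFiles path is a hypothetical example, not from the comment) is to exclude the folder holding them from optimization:

  # Keep page-file data out of the dedup scope on the backup volume
  Set-DedupVolume -Volume "D:" -ExcludeFolder "D:\PageFiles"
  # Confirm the exclusion took effect
  Get-DedupVolume -Volume "D:" | Select-Object Volume, ExcludeFolder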

By: Enrico S. (Sun, 11 Jan 2015 11:17:05 +0000)
Our testing environments are quite varied, and I can say that it is possible to go beyond everything Microsoft states as supported.
To avoid trouble you wouldn't do anything besides the supported way in production environments, but we have heaps of smaller servers with unsupported configurations (mostly less important third-level systems or testing environments) and they are running like a charm. :)

Since we have heaps of IOs thanks to flash-based storage and plenty of RAM, we see a dramatic impact on dedup and performance boosts within file or clustered services. It's amazing… ;)
Virtualized backups have been running since we introduced 2012 R2, and there hasn't been a single problem since then. :)

You can increase performance a bit by configuring a higher amount of RAM for the dedup jobs; see:
https://virtual-ops.de/?p=481
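The linked article describes raising the memory budget of the optimization job. A hedged sketch of that idea using the standard dedup cmdlets (the 50 percent figure and the D: volume are illustrative assumptions, not recommendations from the comment):

  # Run a manual optimization job and allow it to use up to 50% of system RAM
  Start-DedupJob -Volume "D:" -Type Optimization -Memory 50
  # Watch job progress
  Get-DedupJob
  # Check optimized-file counts and saved space afterwards
  Get-DedupStatus -Volume "D:"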

By: Miha Pecnik (Sun, 11 Jan 2015 09:21:29 +0000)
Thank you Enrico. I actually discussed this with a Senior Program Manager for System Center DPM and he confirmed it, basically saying that no other workload is allowed on the Hyper-V servers, but that they're looking into perhaps expanding this in the future.

By: craig jones (Sun, 11 Jan 2015 08:12:31 +0000)
One potential use case for me would be to use a Cluster-in-a-Box (CiB) product to house the backup storage and accommodate the SOFS and compute requirements.

Do we know how much reduction to expect yet, and whether dedup would be effective on VTL applications like Firestreamer?

By: Enrico S. (Sun, 11 Jan 2015 02:30:48 +0000)
The 1 TB is a recommended maximum size.
We are using SOFS as well. It is always recommended to put the dedup volumes on the SOFS, to separate the compute and dedup work and avoid trouble on the Hyper-V system itself.
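As an illustration of that layout, a DPM storage disk sized to the 1 TB recommendation could be created on the SOFS share roughly like this (the share and VM names are made-up placeholders):

  # Dynamically expanding 1 TB backup disk on the dedup-enabled SOFS share
  New-VHD -Path "\\SOFS\DPMStorage\DPMDisk01.vhdx" -SizeBytes 1TB -Dynamic
  # Attach it to the virtualized DPM server
  Add-VMHardDiskDrive -VMName "DPM01" -ControllerType SCSI -Path "\\SOFS\DPMStorage\DPMDisk01.vhdx"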

By: Miha Pecnik (Wed, 07 Jan 2015 12:22:53 +0000)
A highly anticipated feature; I'm very glad it's finally available. A couple of questions:

– Do you perhaps have any information as to why 1 TB VHDX files are required (or is it just a recommendation)?
– Is storing the DPM server's VHDX files on a Scale-Out File Server cluster really a requirement? It appears to be more of a recommendation: “Q: It looks as though DPM storage VHDX files must be deployed on remote SMB file shares only. What will happen if I store the backup VHDX files on dedup-enabled volumes on the same system where the DPM virtual machine is running?

A: As discussed above, DPM, Hyper-V and dedup are storage and compute intensive operations. Combining all three of them in a single system can lead to I/O and process intensive operations that could starve Hyper-V and its VMs. If you decide to experiment configuring DPM in a VM with the backup storage volumes on the same machine, you should monitor performance carefully to ensure that there is enough I/O bandwidth and compute capacity to maintain all three operations on the same machine.”
