Server : Apache/2.4.58 (Win64) OpenSSL/3.1.3 PHP/8.2.12
System : Windows NT SERVER-PC 10.0 build 26200 (Windows 11) AMD64
User : ServerPC ( 0)
PHP Version : 8.2.12
Disable Function : NONE
Directory :  C:/Windows/System32/en-US/


Current File : C:/Windows/System32/en-US/ddputils.dll.mui
[Binary content: MZ/DOS stub and PE headers of ddputils.dll.mui omitted. The recoverable resource strings follow.]
Field labels:
Operation
Context
Error-specific details
Failure
Error
Volume name
Shadow copy volume
Configuration file
The domain controller is unavailable.
Server
Domain
File name
Directory
Chunk store
Chunk ID
Stream map
Chunk store container
File path
File ID
Chunk size
Chunk offset
Chunk flags
Recorded time
Error message
Source context
Inner error context
Error timestamp
File offset
Failure reason
Retry count
Request ID
Stream map count
Chunk count
Data size

Operation messages:
Starting File Server Deduplication Service.
Stopping the Data Deduplication service.
Checking the File Server Deduplication global configuration store.
Initializing the data deduplication mini-filter.
Sending backup components list to VSS system.
Preparing for backup.
Performing pre-restore operations.
Performing post-restore operations.
Processing File Server Deduplication event.
Creating a chunk store.
Initializing chunk store.
Uninitializing chunk store.
Creating a chunk store session.
Committing a chunk store session.
Aborting a chunk store session.
Initiating creation of a chunk store stream.
Inserting a new chunk to a chunk store stream.
Inserting an existing chunk to a chunk store stream.
Committing creation of a chunk store stream.
Aborting creation of a chunk store stream.
Committing changes to a chunk store container.
Changes made to a chunk store container have been flushed to disk.
Making a new chunk store container ready to use.
Rolling back the last committed changes to a chunk store container.
Marking a chunk store container as read-only.
Enumerating all containers in a chunk store.
Preparing a chunk store container for chunk insertion.
Initializing a new chunk store container.
Opening an existing chunk store container.
Inserting a new chunk to a chunk store container.
Repairing a chunk store stamp file.
Creating a chunk store stamp file.
Opening a chunk store stream.
Reading stream map entries from a chunk store stream.
Reading a chunk store chunk.
Closing a chunk store stream.
Reading a chunk store container.
Opening a chunk store container log file.
Reading a chunk store container log file.
Writing entries to a chunk store container log file.
Enumerating chunk store container log files.
Deleting chunk store container log files.
Reading a chunk store container bitmap file.
Writing a chunk store container bitmap file.
Deleting a chunk store container bitmap file.
Starting chunk store garbage collection.
Indexing active chunk references.
Processing deleted chunk store streams.
Identifying unreferenced chunks.
Enumerating the chunk store.
Initializing the chunk store enumerator.
Initializing the stream map parser.
Iterating the stream map.
Initializing chunk store compaction.
Compacting chunk store containers.
Initializing stream map compaction reconciliation.
Reconciling stream maps due to data compaction.
Initializing chunk store reconciliation.
Reconciling duplicate chunks in the chunk store.
Initializing the deduplication garbage collection job.
Running the deduplication garbage collection job.
Canceling the deduplication garbage collection job.
Waiting for the deduplication garbage collection job to complete.
Initializing the deduplication job.
Running the deduplication job.
Canceling the deduplication job.
Waiting for the deduplication to complete.
Initializing the deduplication scrubbing job.
Running the deduplication scrubbing job.
Canceling the deduplication scrubbing job.
Waiting for the deduplication scrubbing job to complete.
Opening a corruption log file.
Reading a corruption log file.
Writing an entry to a corruption log file.
Enumerating corruption log files.
Creating a chunk store chunk sequence.
Adding a chunk to a chunk store sequence.
Completing creation of a chunk store sequence.
Reading a chunk store sequence.
Continuing a chunk store sequence.
Aborting a chunk store sequence.
Initializing the deduplication analysis job.
Running the deduplication analysis job.
Canceling the deduplication analysis job.
Waiting for the deduplication analysis job to complete.
Repair chunk store container header.
Repair chunk store container redirection table.
Repair chunk store chunk.
Clone chunk store container.
Scrubbing chunk store.
Detecting chunk store corruptions.
Loading the deduplication corruption logs.
Cleaning up the deduplication corruption logs.
Determining the set of user files affected by chunk store corruptions.
Reporting corruptions.
Estimating memory requirement for the deduplication scrubbing job.
Deep garbage collection initialization has started.
Starting deep garbage collection on stream map containers.
Starting deep garbage collection on data containers.
Initialize bitmaps on containers.
Scanning the reparse point index to determine which stream map is being referenced.
Saving deletion bitmap.
Scan the stream map containers to mark referenced chunks.
Convert bitmap to chunk delete log.
Compact data containers.
Compact stream map containers.
Change a chunk store container generation.
Start change logging.
Stop change logging.
Add a merged target chunk store container.
Processing tentatively deleted chunks.
Check version of chunk store.
Initializing the corruption table.
Writing out the corruption table.
Deleting the corruption table file.
Repairing corruptions.
Updating corruption table with new logs.
Destroying chunk store.
Marking chunk store as deleted.
Inserting corruption entry into table.
Checking chunk store consistency.
Updating a chunk store file list.
Recovering a chunk store file list from redundancy.
Adding an entry to a chunk store file list.
Replacing an entry in a chunk store file list.
Deleting an entry in a chunk store file list.
Reading a chunk store file list.
Reading a chunk store container directory file.
Writing a chunk store container directory file.
Deleting a chunk store container directory file.
Setting file system allocation for chunk store container file.
Initializing the deduplication unoptimization job.
Running the deduplication unoptimization job.
Restoring dedup file.
Reading dedup information.
Building container list.
Building read plan.
Executing read plan.
Running deep scrubbing.
Scanning reparse point index during deep scrub.
Logging reparse point during deep scrub.
Scanning stream map containers during deep scrub.
Scrubbing a stream map container.
Logging a stream map's entries during deep scrub.
Reading a container's redirection table during deep scrub.
Scanning data containers during deep scrub.
Scrubbing a data container.
Scrubbing a data chunk.
Verifying SM entry to DC hash link.
Logging a record during deep scrub.
Writing a batch of log records during deep scrub.
Finalizing a deep scrub temporary log.
Deep scrubbing log manager log record.
Finalizing deep scrub log manager.
Initializing deep scrub chunk index table.
Inserting a chunk into deep scrub chunk index table.
Looking up a chunk from deep scrub chunk index table.
Rebuilding a chunk index table during deep scrub.
Resetting the deep scrubbing logger cache.
Resetting the deep scrubbing log manager.
Scanning hotspot containers during deep scrub.
Scrubbing a hotspot container.
Scrubbing the hotspot table.
Cleaning up the deduplication deep scrub corruption logs.
Computing deduplication file metadata.
Scanning recall bitmap during deep scrub.
Loading a heat map for a user file.
Saving a heat map for a user file.
Inserting a hot chunk to a chunk stream.
Deleting a heat map for a user file.
Creating shadow copy set.
Initializing scan for optimization.
Scanning the NTFS USN journal.
Initializing the USN scanner.
Starting a new data chunk store session.
Committing a data chunk store session.
Initializing the deduplication data port job.
Running the deduplication data port job.
Canceling the deduplication data port job.
Waiting for the deduplication data port job to complete.
Lookup chunks request.
Insert chunks request.
Commit stream maps request.
Get streams request.
Get chunks request.
Initializing workload manager.
Canceling a job.
Enqueue a job.
Initialize job manifest.
Launch a job host process.
Validate a job host process.
Initializing a job.
Terminate a job host process.
Uninitializing workload manager.
Handshaking with a job.
Job completion callback.
Running a job.
Checking ownership of Csv volume.
Adding Csv volume for monitoring.

Miscellaneous values:
TRUE
FALSE
<Unknown enum value %ld - configuration error?>
<Unknown>
Unknown error

Service and task strings:
Data Deduplication Service
The Data Deduplication service enables the deduplication and compression of data on selected volumes in order to optimize the disk space used. If this service is stopped, optimization will no longer occur, but access to already-optimized data will continue to function.
Dedup
The Data Deduplication filter driver enables read/write I/O to deduplicated files.
The chunk store on volume %s. Select this if you are using optimized backup.
Data deduplication configuration on volume %s
Data Deduplication Volume Shadow Copy Service
The Data Deduplication VSS writer guides backup applications in backing up volumes with deduplication.
Data deduplication state on volume %s
Data deduplication optimization
Data deduplication garbage collection
Data deduplication scrubbing
Data deduplication unoptimization
Queued
Initializing
Running
Completed
Pending Cancel
Canceled
Failed
A data deduplication scrubbing job should be run on this volume.
An unsupported path was detected and will be skipped.
Data deduplication data port
This task runs the data deduplication optimization job on all enabled volumes.
This task runs the data deduplication garbage collection job on all enabled volumes.
This task runs the data deduplication scrubbing job on all enabled volumes.
This task runs the data deduplication unoptimization job on all enabled volumes.
This task runs the data deduplication data port job on all enabled volumes.
Reconciliation of chunk store is due.

There are no actions associated with this job.
Data deduplication cannot run this job on this Csv volume on this node.
Data deduplication cannot run this cmdlet on this Csv volume on this node.
Reporting
Filter
Kernel mode stream store
Kernel mode chunk store
Kernel mode chunk container
Kernel mode file cache

Info

Start

Stop

Error

Warning

Information

Data Deduplication Optimization Task
Data Deduplication Garbage Collection Task
Data Deduplication Scrubbing Task
Data Deduplication Unoptimization Task
Open stream store stream
Prepare for paging IO
Read stream map
Read chunks
Compute checksum
Get container entry
Get maximum generation for container
Open chunk container
Initialize chunk container redirection table
Validate chunk container redirection table
Get chunk container valid data length
Get offset from chunk container redirection table
Read chunk container block
Clear chunk container block
Copy chunk
Initialize file cache
Map file cache data
Unpin file cache data
Copy file cache data
Read underlying file cache data
Get chunk container file size
Pin stream map
Pin chunk container
Pin chunk
Allocate pool buffer
Unpin chunk container
Unpin chunk
Dedup read processing
Get first stream map entry
Read chunk metadata
Read chunk data
Reference TlCache data
Read chunk data from stream store
Assemble chunk data
Decompress chunk data
Copy chunk data into user buffer
Insert chunk data into TlCache
Read data from dedup reparse point file
Prepare stream map
Patch clean ranges
Writing data to dedup file
Queue write request on dedup file
Do copy-on-write work on dedup file
Do full recall on dedup file
Do partial recall on dedup file
Do dummy paging read on dedup file
Read clean data for recalling file
Write clean data to dedup file normally
Write clean data to dedup file paged
Recall dedup file using paging IO
Flush dedup file after recall
Update bitmap after recall on dedup file
Delete dedup reparse point
Open dedup file
Locking user buffer for read
Get system address for MDL
Read clean dedup file
Get range state
Get chunk body
Release chunk
Release decompress chunk context
Prepare decompress chunk context
Copy data to compressed buffer
Release data from TlCache
Queue async read request

The requested object was not found.
One (or more) of the arguments given to the task scheduler is not valid.
The specified object already exists.
The specified path was not found.
The specified user is invalid.
The specified path is invalid.
The specified name is invalid.
The specified property is out of range.
A required filter driver is either not installed, not loaded, or not ready for service.
There is insufficient disk space to perform the requested operation.
Deduplication could not be enabled on the specified volume. This might be because the volume uses an unsupported file system, is larger than the maximum supported volume size, is read-only, is formatted with an unsupported cluster size, or is not a fixed drive. Deduplication is supported on fixed, write-enabled ReFS, NTFS, CSVFS_ReFS, or CSVFS_NTFS volumes smaller than or equal to 64 TB, with cluster sizes less than or equal to 64 KB.
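The constraints in the preceding message can be summarized in a small sketch. This is an illustrative Python check, not part of the actual service; the helper name and its rules are assumptions drawn only from the limits stated in the message above.

```python
# Illustrative only: encodes the volume constraints stated in the message above.
SUPPORTED_FS = {"NTFS", "ReFS", "CSVFS_NTFS", "CSVFS_ReFS"}
MAX_VOLUME_BYTES = 64 * 2**40   # 64 TB
MAX_CLUSTER_BYTES = 64 * 2**10  # 64 KB

def dedup_volume_supported(fs: str, volume_bytes: int, cluster_bytes: int,
                           read_only: bool, fixed_drive: bool) -> bool:
    """Return True if a volume meets the documented deduplication limits."""
    return (fs in SUPPORTED_FS
            and volume_bytes <= MAX_VOLUME_BYTES
            and cluster_bytes <= MAX_CLUSTER_BYTES
            and not read_only
            and fixed_drive)
```

For example, a fixed, writable 1 TB NTFS volume with 4 KB clusters passes, while a read-only volume or one larger than 64 TB does not.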

Data deduplication encountered an unexpected error. Check the Data Deduplication Operational event log for more information.
The specified scan log cursor has expired.
The file system might be corrupted. Please run the CHKDSK utility.
A volume shadow copy could not be created or was unexpectedly deleted.
Data deduplication encountered a corrupted XML configuration file.
The Data Deduplication service could not access the global configuration because the Cluster service is not running.
The Data Deduplication service could not access the global configuration because it has not been installed yet.
Data deduplication failed to access the volume. It may be offline.
The module encountered an invalid parameter or a valid parameter with an invalid value, or an expected module parameter was not found. Check the operational event log for more information.
An attempt was made to perform an initialization operation when initialization has already been completed.
An attempt was made to perform an uninitialization operation when that operation has already been completed.
The Data Deduplication service detected an internal folder that is not secure. To secure the folder, reinstall deduplication on the volume.
Data chunking has already been initiated.
An attempt was made to perform an operation from an invalid state.
An attempt was made to perform an operation before initialization.
Call ::PushBuffer to continue chunking or ::Drain to enumerate any partial chunks.
The Data Deduplication service detected multiple chunk store folders; however, only one chunk store folder is permitted. To fix this issue, reinstall deduplication on the volume.
The data is invalid.
The process is in an unknown state.
The process is not running.
There was an error while opening the file.
The job process could not start because the job was not found.
The client process ID does not match the ID of the host process that was started.
The specified volume is not enabled for deduplication.
A zero-character chunk ID is not valid.
The index is filled to capacity.
Session already exists.
The selected compression format is not supported.
The compressed buffer is larger than the uncompressed buffer.
The buffer is not large enough.
Index scratch log error in: Seek, Read, Write, or Create.
The job type is invalid.
Persistence layer enumeration error.
The operation was cancelled.
This job will not run at the scheduled time because it requires more memory than is currently available.
The job was terminated while in a cancel or pending state.
The job was terminated while in a handshake pending state.
The job was terminated due to a service shutdown.
The job was abandoned before starting.
The job process exited unexpectedly.
The Data Deduplication service detected that the container cannot be compacted or updated because it has reached the maximum generation version.

The corruption log has reached its maximum size.
The data deduplication scrubbing job failed to process the corruption logs.
Data deduplication failed to create new chunk store container files. Allocate more space to the volume.
An error occurred while opening the file because the file was in use.
An error was discovered while deduplicating the file. The file is now skipped.
File Server Deduplication encountered corruption while enumerating chunks in a chunk store.
The scan log is not valid.
The data is invalid due to a checksum (CRC) mismatch error.
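As a rough illustration of the kind of checksum (CRC) validation the message above refers to, here is a minimal Python sketch using `zlib.crc32`. The record layout and function names are assumptions for illustration only; the actual chunk store format is not documented in these strings.

```python
import zlib

def make_record(payload: bytes) -> bytes:
    """Append a CRC32 of the payload (illustrative record layout)."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def verify_record(record: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the stored value."""
    payload, stored = record[:-4], int.from_bytes(record[-4:], "little")
    return zlib.crc32(payload) == stored
```

Flipping a single byte of the payload makes `verify_record` return False, which is the condition a store would report as a CRC mismatch.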

Data deduplication encountered a file corruption error.
The job completed with some errors. Check the event logs for more details.
Data deduplication is not supported on the version of the chunk store found on this volume.
Data deduplication encountered an unknown version of chunk store on this volume.
The job was assigned less memory than the minimum it needs to run.
The data deduplication job schedule cannot be modified.
The valid data length of the chunk store container is misaligned.
File access is denied.
Data deduplication job stopped due to too many corrupted files.
Data deduplication job stopped due to an internal error in the BCrypt SHA-512 provider.
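The message above names the BCrypt SHA-512 provider; a user-mode analogue of hashing a chunk to derive a content-based identifier can be sketched in Python with `hashlib`. This is purely illustrative — the service's actual chunk-hashing scheme is not specified in these strings, and `chunk_id` is a hypothetical helper.

```python
import hashlib

def chunk_id(chunk: bytes) -> str:
    """Illustrative: derive a content-based chunk identifier via SHA-512."""
    return hashlib.sha512(chunk).hexdigest()
```

Identical chunks hash to the same identifier, which is what lets a deduplicating store keep a single copy of repeated data.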

Data deduplication job stopped for store reconciliation.
File skipped for deduplication due to its size.
File skipped due to deduplication retry limit.
The pipeline buffer cache is full.
Another data deduplication job is already running on this volume.
Data deduplication cannot run this job on this Csv volume on this node. Try running the job on the Csv volume resource owner node.
Data deduplication failed to initialize cluster state on this node.
Optimization of the range was aborted by the dedup filter driver.
The operation could not be performed because of a concurrent IO operation.
Data deduplication encountered an unexpected error. Verify deduplication is enabled on all nodes if in a cluster configuration. Check the Data Deduplication Operational event log for more information.
Data access for deduplicated CSV volumes can only be disabled when in maintenance mode. Check the Data Deduplication Operational event log for more information.
Data Deduplication encountered an IO device error that may indicate a hardware fault in the storage subsystem.
Data deduplication cannot run this cmdlet on this Csv volume on this node. Try running the cmdlet on the Csv volume resource owner node.
Deduplication job not supported during rolling cluster upgrade.
Deduplication setting not supported during rolling cluster upgrade.
Data port job is not ready to accept requests.
Data port request not accepted because the request count or size limit was exceeded.
Data port request completed with some errors. Check event logs for more details.
Data port request failed. Check event logs for more details.
Data port error accessing the hash index. Check event logs for more details.
Data port error accessing the stream store. Check event logs for more details.
Data port file stub error. Check event logs for more details.
Data port encountered a deduplication filter error. Check event logs for more details.
Data port cannot commit stream map due to a missing chunk. Check event logs for more details.
Data port cannot commit stream map due to invalid stream map metadata. Check event logs for more details.
Data port cannot commit stream map due to an invalid stream map entry. Check event logs for more details.
Data port cannot retrieve the job interface for the volume. Check event logs for more details.
The specified path is not supported.
Data port cannot decompress chunk. Check event logs for more details.
Data port cannot calculate chunk hash. Check event logs for more details.
Data port cannot read chunk stream. Check event logs for more details.
The target file is not a deduplicated file. Check event logs for more details.
The target file is partially recalled. Check event logs for more details.
Near-inline deduplication can only be enabled on ReFS tiered volumes.

Data Deduplication
Application
Data Deduplication Change Events
Volume "%1" appears to be disconnected and is ignored by the service. You may want to rescan disks. Error: %2.%n%3
The COM Server with CLSID %1 and name "%2" cannot be started on machine "%3". Most likely the CPU is under heavy load. Error: %4.%n%5
The COM Server with CLSID %1 and name "%2" cannot be started on machine "%3". Error: %4.%n%5
The COM Server with CLSID %1 and name "%2" cannot be started on machine "%3" during Safe Mode. The Data Deduplication service cannot start while in safe mode. Error: %4.%n%5
A critical component required by Data Deduplication is not registered. This might happen if an error occurred during Windows setup, or if the computer does not have the Windows Server 2012 or later version of the Deduplication service installed. The error returned from CoCreateInstance on the class with CLSID %1 and name "%2" on machine "%3" is %4.%n%5
Data Deduplication service is shutting down due to idle timeout.%n%1
Data Deduplication service is shutting down due to a shutdown event from the Service Control Manager.%n%1
Data Deduplication job of type "%1" on volume "%2" has completed with return code: %3%n%4
Data Deduplication error: Unexpected error calling routine %1. hr = %2.%n%3
Data Deduplication error: Unexpected error.%n%1
Data Deduplication warning: %1%nError: %2.%n%3
Data Deduplication error: Unexpected COM error %1: %2. Error code: %3.%n%4
Data Deduplication was unable to access the following file or volume: "%1". This file or volume might be locked by another application right now, or you might need to give Local System access to it.%n%2
Data Deduplication encountered an unexpected error during a volume scan of volumes mounted at "%1" ("%2"). To find more information about the root cause of this error, consult the Application/System event log for other Deduplication service, VSS, or VOLSNAP errors related to these volumes. You might also want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command, like this: VSSADMIN CREATE SHADOW /For=C:%n%3
Data Deduplication was unable to create or access the shadow copy for volumes mounted at "%1" ("%2"). Possible causes include an improper Shadow Copy configuration, insufficient disk space, or extreme memory, I/O, or CPU load on the system. To find more information about the root cause of this error, consult the Application/System event log for other Deduplication service, VSS, or VOLSNAP errors related to these volumes. You might also want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command, like this: VSSADMIN CREATE SHADOW /For=C:%n%3
Data Deduplication was unable to access volumes mounted at "%1" ("%2"). Make sure that dismount or format operations do not happen while running deduplication.%n%3
Data Deduplication was unable to access a file or volume. Details:%n%n%1%nThe volume may be inaccessible for I/O operations or marked read-only. In the case of a cluster volume, this may be a transient failure during failover.%n%2
Data Deduplication was unable to scan volume "%1" ("%2").%n%3
Data Deduplication detected a corruption on file "%1" at offset ("%2"). If this condition persists, please restore the data from a previous backup. Corruption details: Structure=%3, Corruption type = %4, Additional data = %5%n%6
Data Deduplication encountered a failure while reconciling the chunk store on volume "%1". The error code was %2. Reconciliation is disabled for the current optimization job.%n%3
Data Deduplication encountered corrupted chunk container %1 while performing full garbage collection. The corrupted chunk container is skipped.%n%2
Data Deduplication could not initialize the change log under %1. The error code was %2.%n%3
Data Deduplication service could not mark chunk container %1 as reconciled. The error code was %2.%n%3
A Data Deduplication configuration file is corrupted. The system or volume may need to be restored from backup.%n%1
Data Deduplication was unable to save one of the configuration stores on volume "%1" due to a disk-full error. If the disk is full, please clean it up (extend the volume or delete some files). If the disk is not full but there is a hard quota on the volume root, please delete, disable, or increase this quota.%n%2
Data Deduplication could not access the global configuration because the cluster service is not running. Please start the cluster service and retry the operation.%n%1
Shadow copy "%1" was deleted during storage report generation. Volume "%2" might be configured with an inadequate shadow copy storage area. Data Deduplication could not process this volume.%n%3
Shadow copy creation failed for volume "%1" after retrying for %2 minutes because other shadow copies were being created. Reschedule the Data Deduplication job for a less busy time.%n%3
Volume "%1" is not supported for shadow copy. It is possible that the volume was removed from the system. The Data Deduplication service could not process this volume.%n%2
The volume "%1" has been deleted or removed from the system.%n%2
Shadow copy creation failed for volume "%1" with error %2. The volume might be configured with an inadequate shadow copy storage area. The File Server Deduplication service could not process this volume.%n%3
The file system on volume "%1" is potentially corrupted. Please run the CHKDSK utility to verify and fix the file system.%n%2
Data Deduplication detected an insecure internal folder. To secure the folder, reinstall deduplication on the volume.%n%1
Data Deduplication could not find a chunk store on the volume.%n%1
Data Deduplication detected multiple chunk store folders. To recover, reinstall deduplication on the volume.%n%1
Data Deduplication detected conflicting chunk store folders: "%1" and "%2".%n%3
The data is invalid.%n%1
Data Deduplication scheduler failed to initialize with error "%1".%n%2
Data Deduplication failed to validate job type "%1" on volume "%2" with error "%3".%n%4
Data Deduplication failed to start job type "%1" on volume "%2" with error "%3".%n%4
Data Deduplication detected that job type "%1" on volume "%2" uses too much memory. %3 MB is assigned. %4 MB is used.%n%5
Data Deduplication detected that the memory usage of job type "%1" on volume "%2" has dropped to a desirable level.%n%3
Data Deduplication cancelled job type "%1" on volume "%2". It used more memory than the amount assigned to it.%n%3

Data Deduplication cancelled job type "%1" on volume "%2". Memory is running low on the machine or in the job.%n%3
Data Deduplication job type "%1" on volume "%2" failed to report completion to the service with error: %3.%n%4
Data Deduplication detected a container that cannot be compacted or updated because it has reached the maximum generation.%n%1
Data Deduplication corruption log "%1" is corrupted.%n%2
Data Deduplication corruption log "%1" has reached the maximum allowed size "%2". Please run a scrubbing job to process the corruption log. No more corruptions will be reported until the log is processed.%n%3
Data Deduplication corruption log "%1" has reached the maximum allowed size "%2". No more corruptions will be reported until the log is processed.%n%3
Data Deduplication scheduler failed to uninitialize with error "%1".%n%2
Data Deduplication detected that a new container could not be created in a chunk store because it ran out of available container Ids.%n%1
Data Deduplication full garbage collection phase 1 (cleaning file-related metadata) on volume "%1" failed with error: %2. The job will continue with phase 2 execution (data chunk cleanup).%n%3
Data Deduplication full garbage collection could not achieve maximum space reclamation because delete logs for data container %1 could not be cleaned up.%n%2
Some files could not be deduplicated because of FSRM quota violations on volume %1. The skipped files are likely compressed or sparse files in folders that are at or close to their quota limit. Please consider increasing the quota limit for folders that are at or close to their quota limit.%n%2
Data Deduplication failed to dedup file %1 "%2" due to fatal error %3%n%4
Data Deduplication encountered corruption while accessing a file in the chunk store.%n%1
Data Deduplication encountered corruption while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1
Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1
Data Deduplication is unable to access file %1 because the file is in use.%n%2
Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store.%n%1
Data Deduplication cannot run the job on volume %1 because the dedup store version compatibility check failed with error %2.%n%3
Data Deduplication has disabled the volume %1 because it has discovered too many corruptions. Please run deep scrubbing on the volume.%n%2
Data Deduplication has detected a corrupt corruption metadata file in the store at %1. Please run deep scrubbing on the volume.%n%2
Volume "%1" cannot be enabled for Data Deduplication. Data Deduplication does not support volumes larger than 64 TB. Error: %2.%n%3
Data Deduplication cannot be enabled on SIS volume "%1". Error: %2.%n%3
The file system is configured for case-sensitive file/folder names. Data Deduplication does not support case-sensitive file-system mode.%n%1
Data Deduplication changed the scrubbing job to read-only due to insufficient disk space.%n%1
Data Deduplication has disabled the volume %1 because there are missing or corrupt containers. Please run deep scrubbing on the volume.%n%2
Data Deduplication encountered a disk-full error.%n%1
Data Deduplication job cannot run on volume "%1" due to insufficient disk space.%n%2
Data Deduplication job cannot run on offline volume "%1".%n%2
Data Deduplication recovered a corrupt or missing file.%n%1
Data Deduplication encountered a corrupted metadata file. To correct the problem, schedule or manually run a Garbage Collection job on the affected volume with the -Full option.%n%1
Data Deduplication encountered chunk %1 with a corrupted header while updating a container. The corrupted chunk is replicated to the new container %2.%n%3
Data Deduplication encountered chunk %1 with a transient header corruption while updating a container. The corrupted chunk is NOT replicated to the new container %2.%n%3
Data Deduplication failed to read the chunk container redirection table from file %1 with error %2.%n%3
Data Deduplication failed to initialize the reparse point index table for deep scrubbing from file %1 with error %2.%n%3
Data Deduplication failed to deep scrub container file %1 on volume %2 with error %3.%n%4
Data Deduplication failed to load the stream map log for deep scrubbing from file %1 with error %2.%n%3
Data Deduplication found a duplicate local chunk id %1 in container file %2.%n%3
Data Deduplication job type "%1" on volume "%2" was cancelled manually.%n%3
Scheduled Data Deduplication job type "%1" on volume "%2" was cancelled.%n%3
The Data Deduplication chunk store statistics file on volume "%1" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2
The Data Deduplication volume statistics file on volume "%1" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2
Data Deduplication failed to append to deep scrubbing log file %1 with error %2.%n%3
Data Deduplication encountered a failure during deep scrubbing on store %1 with error %2.%n%3
Data Deduplication cancelled job type "%1" on volume "%2". The job violated the Csv dedup job placement policy.%n%3
Data Deduplication cancelled job type "%1" on volume "%2". The Csv job monitor has been uninitialized.%n%3
Data Deduplication encountered an IO device error while accessing a file on the volume. This is likely a hardware fault in the storage subsystem.%n%1
Data Deduplication encountered an unexpected error. If this is a cluster, verify that Data Deduplication is enabled on all nodes of the cluster.%n%1
An attempt was made to disable data access for data deduplicated CSV volume "%1" without maintenance mode. Data access can only be disabled for a CSV volume when in maintenance mode. Place the volume into maintenance mode and retry.%n%2

�Data Deduplication service could not unoptimize file "%5%6%7". Error %8, "%9".

�Data Deduplication service failed to unoptimize too many files %3. Some files are not reported.

�Data Deduplication service has finished unoptimization on volume %3 with no errors.

�Data Deduplication service has finished unoptimization on volume %3 with %4 errors.

D%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10

%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nPriority: %7%nFull: %8%nVolume free space (MB): %9

%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6%nFull: %7%nRead-only: %8

%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6

%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nIn-policy file count: %12%nJob processed space (MB): %13%nJob elapsed time (seconds): %18%nJob throughput (MB/second): %19%nChurn processing throughput (MB/second): %20

%1 job has completed.%n%nFull: %2%nVolume: %5 (%4)%nError code: %6%nError message: %7%nFreed up space (MB): %8%nVolume free space (MB): %9%nJob elapsed time (seconds): %10%nJob throughput (MB/second): %11

%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6

%1 job has completed.%n%nFull: %2%nRead-only: %3%nVolume: %6 (%5)%nError code: %7%nError message: %8%nTotal corruption count: %9%nFixable corruption count: %10%n%nWhen corruptions are found, check the Scrubbing event channel for more details.

%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nUnoptimized file count: %7%nJob processed space (MB): %8%nJob elapsed time (seconds): %9%nJob throughput (MB/second): %10

%1 job has been queued.%n%nVolume: %4 (%3)%nSystem memory percent: %5%nPriority: %6%nSchedule mode: %7

Restore of deduplicated file "%1" failed with the following error: %2, "%3".

Priority %1 job has started.%n%nVolume: %4 (%3)%nFile ID: %11%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10

%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable threads: %6%nPriority: %7

%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nChunk lookup count: %12%nInserted chunk count: %13%nInserted chunks logical data (MB): %14%nInserted chunks physical data (MB): %15%nCommitted stream count: %16%nCommitted stream entry count: %17%nCommitted stream logical data (MB): %18%nRetrieved chunks physical data (MB): %19%nRetrieved stream logical data (MB): %20%nDataPort time (seconds): %21%nJob elapsed time (seconds): %22%nIngress throughput (MB/second): %23%nEgress throughput (MB/second): %24

Data Deduplication detected a non-clustered volume specified for the chunk index cache volume in a clustered deployment. This configuration is not recommended because it may result in job failures after failover.%n%nVolume: %3 (%2)

DataPort status update.%n%nVolume: %2%nSavings rate (percent): %3%nSaved space (MB): %4%nVolume used space (MB): %5%nVolume free space (MB): %6%nOptimized file count: %7%nChunk lookup count: %8%nInserted chunk count: %9%nInserted chunks logical data (MB): %10%nInserted chunks physical data (MB): %11%nCommitted stream count: %12%nCommitted stream entry count: %13%nCommitted stream logical data (MB): %14%nRetrieved chunks physical data (MB): %15%nRetrieved stream logical data (MB): %16%nDataPort time (seconds): %17%nJob elapsed time (seconds): %18%nIngress throughput (MB/second): %19%nEgress throughput (MB/second): %20

Data Deduplication detected that the working set of job type "%1" on volume "%2" is low. The ratio to commit size is %3.%n%4

Data Deduplication detected that the working set of job type "%1" on volume "%2" has recovered to a desirable level.%n%3

Data Deduplication detected that the page fault rate of job type "%1" on volume "%2" is high. The rate is %3 page faults per second.%n%4

Data Deduplication detected that the page fault rate of job type "%1" on volume "%2" has lowered to a desirable level. The rate is %3 page faults per second.%n%4

Data Deduplication failed to dedup file "%1" with file ID %2 due to non-fatal error %3%n%4.%n%nNote: You can retrieve the file name by running the command FSUTIL FILE QUERYFILENAMEBYID on the file in question.

Data Deduplication has aborted a group commit session.%n%nFile count: %1%nError: %2%n%3

Failed to open the dedup setting registry key

Data Deduplication failed to dedup file "%1" with file ID %2 due to an oplock break%n%3

Data Deduplication failed to load the hotspot table from file %1 due to error %2.%n%3

Data Deduplication failed to initialize oplock.%n%nFile ID: %1%nFile name: "%2"%nError: %3%n%4

Data Deduplication, while running a job on volume %1, detected an invalid physical sector size %2. Using default value %3.%n%4

Data Deduplication detected an unsupported chunk store container.%n%1

Data Deduplication could not create a window to receive the task scheduler stop message due to error %1. Task(s) may not stop after the duration limit.%n%2

Data Deduplication could not create a thread to poll for the task scheduler stop message due to error %1. Task(s) may not stop after the duration limit.%n%2

An attempt was made to perform an initialization operation when initialization has already been completed.%n%1

Data Deduplication created emergency file %1.%n%3

Data Deduplication failed to create emergency file %1 with error %2.%n%3

Data Deduplication deleted emergency file %1.%n%3

Data Deduplication failed to delete emergency file %1 with error %2.%n%3

Data Deduplication detected a chunk store container with misaligned valid data length.%n%1

Data Deduplication Garbage Collection encountered a delete log entry with an invalid stream map signature for stream map Id %1.%n%2

Data Deduplication failed to initialize oplock as the file appears to be missing.%n%nFile ID: %1%nFile name: "%2"%nError: %3%n%4

Data Deduplication skipped too many file-level errors. No more than %1 file-level errors are logged per job.%n%2

Data Deduplication diagnostic warning.%n%n%1%n%2

Data Deduplication diagnostic information.%n%n%1%n%2

Data Deduplication found file %1 with a stream map id %2 in container file %3 marked for deletion.%n%4

Failed to enqueue job of type "%1" on volume "%2".%n%3

Error terminating job host process for job type "%1" on volume "%2" (process id: %3).%n%4

Data Deduplication encountered corrupted chunk %1 while updating a container. Corrupted data that cannot be repaired will be copied as-is to the new container %2.%n%3

Data Deduplication job type "%1" on volume "%2" failed to exit gracefully.%n%3

Data Deduplication job host for job type "%1" on volume "%2" exited unexpectedly.%n%3

Data Deduplication has failed to load the corruption metadata file on the store at %1 due to error %2. Please run deep scrubbing on the volume.%n%3

Data Deduplication full garbage collection phase 1 on volume "%1" encountered an error %2 while processing file %3. Phase 1 will be aborted because garbage collection of file-related metadata cannot safely continue after file errors.%n%4

Data Deduplication has failed to process corruption metadata file %1 due to error %2. Please run deep scrubbing on the volume.%n%3

Data Deduplication has failed to load a corrupted metadata file %1 due to error %2. Deleting the file and continuing.%n%3

Data Deduplication has failed to set the NTFS allocation size for container file %1 due to error %2.%n%3

Data Deduplication is configured to use BCrypt provider '%1' for hash algorithm %2.%n%3

Data Deduplication could not use BCrypt provider '%1' for hash algorithm %2 due to an error in operation %3. Reverting to the Microsoft primitive CNG provider.%n%4

Data Deduplication failed to include file "%1" in file metadata analysis calculations.%n%2

Data Deduplication failed to include stream map %1 in file metadata analysis calculations.%n%2

Data Deduplication encountered an error for file "%1" while scanning files and folders.%n%2

Data Deduplication encountered an error while attempting to resume processing. Please consult the event log parameters for more details about the current file being processed.%n%1

Data Deduplication encountered an error %1 while scanning the USN journal on volume %2 for updating hot range tracking.%n%3

Data Deduplication could not truncate the stream of an optimized file. No action is required. Error: %1%n%n%2

%1 job memory requirements.%n%nVolume: %4 (%3)%nMinimum memory: %5 MB%nMaximum memory: %6 MB%nMinimum disk: %7 MB%nMaximum cores: %8

%1 reconciliation has started.%n%nVolume: %4 (%3)

%1 reconciliation has completed.%n%nGuidance: This event is expected when Reconciliation has completed; there is no recommended or required action. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory.%n%nVolume: %4 (%3)%nReconciled containers: %5%nUnreconciled containers: %6%nCatchup references: %7%nCatchup containers: %8%nReconciled references: %9%nReconciled containers: %10%nCross-reconciled references: %11%nCross-reconciled containers: %12%nError code: %13%nError message: %14

%1 job on volume %4 (%3) was configured with insufficient memory.%n%nSystem memory percentage: %5%nAvailable memory: %8 MB%nMinimum required memory: %6 MB

Optimization memory details for %1 job on volume %3 (%2).

An open file was skipped during optimization. No action is required.%n%nFileId: %2%nSkip Reason: %1

An operation succeeded after one or more retries. Operation: %1; FileId: %3; Number of retries: %2

Data Deduplication aborted the optimization pipeline.%nVolumePath: %1%nErrorCode: %2%nErrorMessage: %3%nDetails: %4

Data Deduplication aborted a file.%nFileId: %1%nFilePath: %2%nFileSize: %3%nFlags: %4%nTotalRanges: %5%nSkippedRanges: %6%nAbortedRanges: %7%nCommittedRanges: %8%nErrorCode: %9%nErrorMessage: %10%nDetails: %11

Data Deduplication aborted a file range.%nFileId: %1%nFilePath: %2%nRangeOffset: %3%nRangeLength: %4%nErrorCode: %5%nErrorMessage: %6%nDetails: %7

Data Deduplication aborted a session.%nMaxSize: %1%nCurrentSize: %2%nRemainingRanges: %3%nErrorCode: %4%nErrorMessage: %5%nDetails: %6

USN journal created.%n%nVolume: %2 (%1)%nMaximum size %3 MB%nAllocation size %4 MB

DataPort memory details for %1 job on volume %3 (%2).

Data Deduplication detected a file with an ID that is not supported. Files with identifiers that cannot be packed into 64 bits will be skipped. FileId: %1 FileName: %2

Reconciliation should be run to ensure optimal savings.%n%nGuidance: This event is expected when Reconciliation is turned off for the DataPort job. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory. When Reconciliation would require 50% or more of the memory on the system, it is recommended that you (temporarily) cease running a DataPort job against this volume, and run an Optimization job. If Reconciliation is not run through an Optimization job before Reconciliation would require more than 100% of system memory, Reconciliation will not be able to be run again (unless more memory is added). This would result in permanently decreased space efficiency on this volume.%n%nVolume: %2 (%1)%nMemory percentage required: %3

Data Deduplication optimization job will not run the reconciliation step due to inadequate memory.%n%nGuidance: Deduplication savings will be suboptimal until the optimization job is provided more memory, or more memory is added to the system.%n%nVolume: %2 (%1)%nMemory percentage required: %3

Data Deduplication service detected corruption in "%5%6%7". The corruption cannot be repaired.

Data Deduplication service detected corruption (%7) in "%6". See the event details for more information.

Data Deduplication service detected a corrupted item (%11 - %13, %8, %9, %10, %12) in the Deduplication Chunk Store on volume %4. See the event details for more information.

Data Deduplication service has finished scrubbing on volume %3. It did not find any corruption since the last scrubbing.

Data Deduplication service found %4 corruption(s) on volume %3. All corruptions are fixed.

Data Deduplication service found %4 corruption(s) on volume %3. %5 corruption(s) are fixed. %6 user file(s) are corrupted. %7 user file(s) are fixed. For the corrupted file list, see the Microsoft/Windows/Deduplication/Scrubbing events.

Data Deduplication service found too many corruptions on volume %3. Some corruptions are not reported.

Data Deduplication service has finished scrubbing on volume %3. See the event details for more information.

Data Deduplication service encountered an error while processing file "%5%6%7". The error was %8.

Data Deduplication service encountered too many errors while processing files on volume %3. The threshold was %4. Some user file corruptions may not be reported.

Data Deduplication service encountered an error while detecting corruptions in the chunk store on volume %3. The error was %4. The job is aborted.

Data Deduplication service encountered an error while loading corruption logs on volume %3. The error was %4. The job continues. Some corruptions may not be detected.

Data Deduplication service encountered an error while cleaning up corruption logs on volume %3. The error was %4. Some corruptions may be reported again next time.

Data Deduplication service encountered an error while loading the hotspot mapping from the chunk store on volume %3. The error was %4. Some corruptions may not be repaired.

Data Deduplication service encountered an error while determining corrupted user files on volume %3. The error was %4. Some user file corruptions may not be reported.

Data Deduplication service found %4 corruption(s) on volume %3. %6 user file(s) are corrupted. %7 user file(s) are fixable. Please run a scrubbing job in read-write mode to attempt to fix the reported corruptions.

Data Deduplication service fixed corruption in "%5%6%7".

Data Deduplication service detected fixable corruption in "%5%6%7". Please run a scrubbing job in read-write mode to fix this corruption.

Data Deduplication service encountered an error while repairing corruptions on volume %3. The error was %4. The repair is unsuccessful.

Data Deduplication service detected a corrupted item (%6, %7, %8, %9) in the Deduplication Chunk Store on volume %4. See the event details for more information.

Container (%8,%9) with user data is missing from the chunk store. A missing container may result from an incomplete restore, an incomplete migration, or file-system corruption. The volume is disabled from further optimization. It is recommended to restore the volume before enabling it for further optimization.

Data Deduplication service encountered an error while scanning dedup user files on volume %3. The error was %4. Some user file corruptions may not be reported.

Data Deduplication service encountered an error while processing file "%5%6%7". The error was %8.

Data Deduplication service encountered too many errors while processing files on volume %3. The threshold was %4. Some user file corruptions may not be reported.

Data Deduplication service detected potential data loss (%9) in "%6" due to sharing reparse data with file "%8". See the event details for more information.

Container (%8,%9) with user data is corrupt in the chunk store. It is recommended to restore the volume before enabling it for further optimization.

Open stream store stream (StartingChunkId %1, FileId %2)

Open stream store stream completed %1

Prepare for paging IO (Stream %1, FileId %2)

Prepare for paging IO completed %1

Read stream map completed %1

Read chunks (Stream %1, FileId %2, IoType %3, FirstRequestChunkId %4, NextRequest %5)

Read chunks completed %1

Compute checksum (ItemType %1, DataSize %2)

Compute checksum completed %1

Get container entry (ContainerId %1, Generation %2)

Get container entry completed %1

Get maximum generation for container (ContainerId %1, Generation %2)

Get maximum generation for container completed %1

Open chunk container (ContainerId %1, Generation %2, RootPath %4)

Open chunk container completed %1

Initialize chunk container redirection table (ContainerId %1, Generation %2)

Initialize chunk container redirection table completed %1

Validate chunk container redirection table (ContainerId %1, Generation %2)

Validate chunk container redirection table completed %1

Get chunk container valid data length (ContainerId %1, Generation %2)

Get chunk container valid data length completed %1

Get offset from chunk container redirection table (ContainerId %1, Generation %2)

Get offset from chunk container redirection table completed %1

Read chunk container block (ContainerId %1, Generation %2, Buffer %3, Offset %4, Length %5, IoType %6, Synchronous %7)

Read chunk container block completed %1

Clear chunk container block (Buffer %1, Size %2, BufferType %3)

Clear chunk container block completed %1

Copy chunk (Buffer %1, Size %2, BufferType %3, BufferOffset %4, OutputCapacity %5)

Copy chunk completed %1

Initialize file cache (UnderlyingFileObject %1, CacheFileSize %2)

Initialize file cache completed %1

Map file cache data (CacheFileObject %1, Offset %2, Length %3)

Map file cache data completed %1

Unpin file cache data (Bcb %1)

Unpin file cache data completed %1

Copy file cache data (CacheFileObject %1, Offset %2, Length %3)

Copy file cache data completed %1

Read underlying file cache data (CacheFileObject %1, UnderlyingFileObject %2, Offset %3, Length %4)

Read underlying file cache data completed %1

Get chunk container file size (ContainerId %1, Generation %2)

Get chunk container file size completed %1

Pin stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4)

Pin stream map completed %1

Pin chunk container (ContainerId %1, Generation %2)

Pin chunk container completed %1

Pin chunk (ContainerId %1, Generation %2)

Pin chunk completed %1

Allocate pool buffer (ReadLength %1, PagingIo %2)

Allocate pool buffer completed %1

Unpin chunk container (ContainerId %1, Generation %2)

Unpin chunk container completed %1

Unpin chunk (ContainerId %1, Generation %2)

Unpin chunk completed %1

Dedup read processing (FileObject %1, Offset %2, Length %3, IoType %4)

Dedup read processing completed %1

Get first stream map entry (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4)

Get first stream map entry completed %1

Read chunk metadata (Stream %1, CurrentOffset %2, AdjustedFinalOffset %3, FirstChunkByteOffset %4, ChunkRequestsEndOffset %5, TlCache %6)

Read chunk metadata completed %1

Read chunk data (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4)

Read chunk data completed %1

Reference TlCache data (TlCache %1, Stream %2)

Reference TlCache data completed %1

Read chunk data from stream store (Stream %1)

Read chunk data from stream store completed %1

Assemble chunk data

Assemble chunk data completed %1

Decompress chunk data

Decompress chunk data completed %1

Copy chunk data into user buffer (BytesCopied %1)

Copy chunk data into user buffer completed %1

Insert chunk data into TlCache

Insert chunk data into TlCache completed %1

Read data from dedup reparse point file (FileObject %1, Offset %2, Length %3)

Read data from dedup reparse point file completed %1

Prepare stream map (StreamContext %1)

Prepare stream map completed %1

Patch clean ranges (FileObject %1, Offset %2, Length %3)

Patch clean ranges completed %1

Writing data to dedup file (FileObject %1, Offset %2, Length %3, IoType %4)

Writing data to dedup file completed %1

Queue write request on dedup file (FileObject %1, Offset %2, Length %3)

Queue write request on dedup file completed %1

Do copy on write work on dedup file (FileObject %1, Offset %2, Length %3)

Do copy on write work on dedup file completed %1

Do full recall on dedup file (FileObject %1, Offset %2, Length %3)

Do full recall on dedup file completed %1

Do partial recall on dedup file (FileObject %1, Offset %2, Length %3)

Do partial recall on dedup file completed %1

Do dummy paging read on dedup file (FileObject %1, Offset %2, Length %3)

Do dummy paging read on dedup file completed %1

Read clean data for recalling file (FileObject %1, Offset %2, Length %3)

Read clean data for recalling file completed %1

Write clean data to dedup file normally (FileObject %1, Offset %2, Length %3)

Write clean data to dedup file completed %1

Write clean data to dedup file paged (FileObject %1, Offset %2, Length %3)

Write clean data to dedup file paged completed %1

Recall dedup file using paging Io (FileObject %1, Offset %2, Length %3)

Recall dedup file using paging Io completed %1

Flush dedup file after recall (FileObject %1)

Flush dedup file after recall completed %1

Update bitmap after recall on dedup file (FileObject %1, Offset %2, Length %3)

Update bitmap after recall on dedup file completed %1

Delete dedup reparse point (FileObject %1)

Delete dedup reparse point completed %1

Open dedup file (FilePath %1)

Open dedup file completed %1

Locking user buffer for read

Locking user buffer for read completed %1

Get system address for MDL

Get system address for MDL completed %1

Read clean dedup file (FileObject %1, Offset %2, Length %3)

Read clean dedup file completed %1

Get range state (Offset %1, Length %2)

Get range state completed %1

Get chunk body

Get chunk body completed %1

Release chunk

Release chunk completed %1

Release decompress chunk context (BufferSize %1)

Release decompress chunk context completed %1

Prepare decompress chunk context (BufferSize %1)

Prepare decompress chunk context completed %1

Copy data to compressed buffer (BufferSize %1)

Copy data to compressed buffer completed %1

Release data from TL Cache

Release data from TL Cache completed %1

Queue async read request (FileObject %1, Offset %2, Length %3)

Queue async read request completed %1

Read stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4)

Create chunk container (%1 - %2.%3.ccc)

Create chunk container completed %1

Copy chunk container (%1 - %2.%3.ccc)

Copy chunk container completed %1

Delete chunk container (%1 - %2.%3.ccc)

Delete chunk container completed %1

Rename chunk container (%1 - %2.%3.ccc%4)

Rename chunk container completed %1

Flush chunk container (%1 - %2.%3.ccc)

Flush chunk container completed %1

Rollback chunk container (%1 - %2.%3.ccc)

Rollback chunk container completed %1

Mark chunk container (%1 - %2.%3.ccc) read-only

Mark chunk container read-only completed %1

Write chunk container (%1 - %2.%3.ccc) redirection table at offset %4 (Entries: StartIndex %5, Count %6)

Write chunk container redirection table completed %1

Write chunk container header completed %1

Insert data chunk header completed %1

Insert data chunk body completed %1 with ChunkId %2

Write delete log header completed %1

Append delete log entries completed %1

Delete delete log completed %1

Rename delete log completed %1

Write chunk container bitmap completed %1

Delete chunk container bitmap completed %1

Write merge log (%5 - %6.%7.merge.log) header

Write merge log header completed %1

Insert hotspot chunk header completed %1

Insert hotspot chunk body completed %1 with ChunkId %2

Insert stream map chunk header completed %1

Insert stream map chunk body completed %1 with ChunkId %2

Append merge log entries completed %1

Delete merge log (%1 - %2.%3.merge.log)

Delete merge log completed %1

Flush merge log (%1 - %2.%3.merge.log)

Flush merge log completed %1

Update file list entries (Remove: %1, Add: %2)

Update file list entries completed %1

Set dedup reparse point on %2 (FileId %1) (ReparsePoint: SizeBackedByChunkStore %3, StreamMapInfoSize %4, StreamMapInfo %5)

Set dedup reparse point completed %1 (%2)

Set dedup zero data on %2 (FileId %1)

Set dedup zero data completed %1

Flush reparse point files

Flush reparse point files completed %1

Set sparse on file id %1

Set sparse completed %1

FSCTL_SET_ZERO_DATA on file id %1 at offset %2 and BeyondFinalZero %3

FSCTL_SET_ZERO_DATA completed %1

Rename chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4)

Rename chunk container bitmap completed %1

Insert padding chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)

Insert padding chunk header completed %1

Insert padding chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)

Insert padding chunk body completed %1 with ChunkId %2

Insert batch of chunks to chunk container (%1 - %2.%3.ccc) at offset %4 (BatchChunkCount %5, BatchDataSize %6)

Insert batch of chunks completed %1

Write chunk container directory completed %1

Delete chunk container directory completed %1

Rename chunk container directory (%1 - %2) for chunk container (%1 - %3.%4)

Rename chunk container directory completed %1

Write chunk container (%5 - %6.%7.ccc) header at offset %8 (Header: USN %9, VDL %10, #Chunk %11, NextLocalId %12, Flags %13, LastAppendTime %14, BackupRedirectionTableOffset %15, LastReconciliationLocalId %16)

Insert data chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)

Insert data chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)

Write delete log (%5 - %6.%7.delete.log) header

Append delete log (%1 - %2.%3.delete.log) entries at offset %4 (Entries: StartIndex %5, Count %6)

Delete delete log (%1 - %2.%3.delete.log)

Rename delete log (%1 - %2.%3.delete.log)

Write chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) (Bitmap: BitLength %5, StartIndex %6, Count %7)

Delete chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4)

Insert hotspot chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)

Insert hotspot chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)

Insert stream map chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)

Write chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) (Directory: EntryCount %5)

Delete chunk container directory (%1 - %2) for chunk container (%1 - %3.%4)

Append merge log (%1 - %2.%3.merge.log) entries at offset %4 (Entries: StartIndex %5, Count %6)

Insert stream map chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) (Entries: StartIndex %8, Count %9)

Chunk header

Chunk body

Container header

Container redirection table

Hotspot table

Delete log header

Delete log entry

GC bitmap header

GC bitmap entry

Merge log header

Merge log entry

Data

Stream map

Hotspot

Optimization

Garbage Collection

Scrubbing

Unoptimization

Analysis

Low

Normal

High

Cache

Non-cache

Paging

Memory map

Paging memory map

None

Pool

PoolAligned

MDL

Map

Cached

NonCached

Paged

container file

file list file

file list header

file list entry

primary file list file

backup file list file

Scheduled

Manual

recall bitmap header

recall bitmap body

recall bitmap missing

Recall bitmap

Unknown

The pipeline handle was closed

The file was deleted

The file was overwritten

The file was recalled

A transaction was started on the file

The file was encrypted

The file was compressed

Set Zero Data was called on the file

Extended Attributes were set on the file

A section was created on the file

The file was shrunk

A long-running IO operation prevented optimization

An IO operation failed

Notifying Optimization

Setting the Reparse Point

Truncating the file

DataPort

None

LZNT1

Xpress

Xpress Huff

None

Standard

Max

Hybrid

None

Bad checksum

Inconsistent metadata

Invalid header metadata

Missing file

Bad checksum (storage subsystem)

Corruption (storage subsystem)

Corruption (missing metadata)

Possible data loss (duplicate reparse data)

VS_VERSION_INFO (StringFileInfo 040904B0):
CompanyName: Microsoft Corporation
FileDescription: Microsoft Data Deduplication Common Library
FileVersion: 10.0.26100.1 (WinBuild.160101.0800)
InternalName: ddputils.lib
LegalCopyright: © Microsoft Corporation. All rights reserved.
OriginalFilename: ddputils.lib.mui
ProductName: Microsoft® Windows® Operating System
ProductVersion: 10.0.26100.1
