Debates have raged for quite some time around the performance of virtual disk types, and while the difference has diminished drastically over the years, eagerzeroedthick has always out-performed thin. Therefore many users opted not to use thin virtual disks because of it. I won't spend a lot of time going into the difference between eagerzeroedthick (EZT) and thin, but at a high level the difference comes down to two things: allocation and zeroing.
Eagerzeroedthick: when an EZT virtual disk is created, ESXi first allocates the entire virtual disk on the VMFS. So if the virtual disk was created at 50 GB, it consumes 50 GB of space on the VMFS from the start. Furthermore, before it can be used, an EZT virtual disk is fully zeroed out: ESXi either uses WRITE SAME to issue pattern-zero write requests across the entire virtual disk (if the storage supports WRITE SAME) or it literally writes zeroes to the entire disk. WRITE SAME is far more efficient and much faster in general. Either way, the virtual disk cannot be used until that process is done.
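To make that concrete, here is a minimal pyvmomi sketch that builds the device spec for a new EZT disk. The function name and the controller/unit parameters are illustrative, not from the original post; the relevant part is the pair of backing flags.

```python
# Minimal sketch (pyvmomi): the backing-flag pairs that define each disk type.
#   thin              -> thinProvisioned=True,  eagerlyScrub=False
#   zeroedthick       -> thinProvisioned=False, eagerlyScrub=False
#   eagerzeroedthick  -> thinProvisioned=False, eagerlyScrub=True
from pyVmomi import vim

def ezt_disk_spec(capacity_gb, controller_key, unit_number):
    """Build a VirtualDeviceSpec that adds a new eagerzeroedthick disk."""
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = False  # allocate the full capacity up front
    backing.eagerlyScrub = True      # zero it all at creation (WRITE SAME if supported)

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.capacityInKB = capacity_gb * 1024 * 1024
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec
```

The spec would go into a `vim.vm.ConfigSpec(deviceChange=[...])` applied with `ReconfigVM_Task`, and the task will not complete until the zeroing pass described above has finished.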
Thin: thin virtual disks are neither pre-allocated nor pre-zeroed. Instead, when created, the virtual disk consumes only one block on the VMFS; as the guest writes to it, blocks are allocated as needed. This is why thin virtual disks "grow" over time on the VMFS. Furthermore, before a newly allocated block can be written to, it must be zeroed (again using WRITE SAME or traditional zeroing). So when a guest issues a write to a previously unallocated segment of a thin virtual disk, it must wait first for the disk to be grown and then for the new block to be zeroed. These new writes therefore carry a latency penalty. Of course, once a block has been written to, the impact is gone.
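Since the distinction lives entirely in those two backing flags, you can also read them back to audit existing VMs. A short pyvmomi sketch, assuming `vm` is a `vim.VirtualMachine` you have already retrieved:

```python
# Sketch: label a VM's existing virtual disks from their backing flags.
from pyVmomi import vim

def disk_type(disk):
    """Classify a vim.vm.device.VirtualDisk by its provisioning flags."""
    backing = disk.backing  # FlatVer2BackingInfo for regular VMDKs
    if getattr(backing, "thinProvisioned", False):
        return "thin"
    if getattr(backing, "eagerlyScrub", False):
        return "eagerzeroedthick"
    return "zeroedthick (lazy)"

def report_disk_types(vm):
    """Print the provisioning type of each virtual disk on a VM."""
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            print(f"{dev.deviceInfo.label}: {disk_type(dev)}")
```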
So it is fairly understandable why one might choose EZT over thin. VVols do have the concept of thin and thick, but what that actually means is entirely up to the storage vendor. For an array like the Pure Storage FlashArray, thick provisioning does not make much sense, because the array dedupes and removes zeroes; even pre-zeroing is generally useless, since the array simply discards the zeroes once they are written. So on the FlashArray, there is no zeroing of VVols. Thin VVols are thin only in the sense that they don't reserve any space on the array until the guest OS writes to them. This differs from thin VMDKs, where ESXi requires newly allocated blocks to be zeroed before they can be used.
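You can see the effect by comparing a volume's provisioned size to the physical space it consumes. Below is a hedged sketch using the `purestorage` Python REST client; the hostname, token, volume name, and the exact response keys (`size`, `volumes`) are assumptions that vary by REST API version, so verify against your array before relying on them.

```python
# Sketch (assumed purestorage REST 1.x client): provisioned vs. physical space.
# Host, token, volume name, and response keys are placeholders.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="API-TOKEN")
info = array.get_volume("my-vvol-vmdk-volume", space=True)

provisioned_gib = info["size"] / 1024**3   # capacity presented to the host
physical_gib = info["volumes"] / 1024**3   # unique space actually consumed

print(f"provisioned: {provisioned_gib:.1f} GiB, on array: {physical_gib:.1f} GiB")
# Zeroes and duplicate blocks never land on flash, so an eager-zeroed VVol
# consumes no more physical space than a thin one.
```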