Recently, I read a paper titled Rendered Private: Making GLSL Execution Uniform to Prevent WebGL-based Browser Fingerprinting, by Shujiang Wu et al. from Johns Hopkins University, published at USENIX Security '19.

This paper introduces a technique that makes every website see the same WebGL fingerprint. What makes fingerprints differ is floating-point operations: hardware such as CPUs and GPUs introduces small, implementation-specific errors, so the same floating-point computation can produce slightly different results on different computers. When WebGL renders an image for a website, the position and the color of the same pixel can therefore differ across computers, because both are controlled by floating-point operations. The figure below shows the workflow of rendering an image with WebGL; all three stages involve floating-point operations, which makes the results differ across computers.

Graphics Pipeline
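
To see why floating-point results can depend on how a computation is carried out, here is a minimal TypeScript sketch (my own illustration, not code from the paper). It uses Math.fround to round every intermediate value to 32-bit precision, roughly what a GPU does, and shows that the same sum evaluated in two different orders gives two different answers.

```typescript
// Minimal illustration (not the paper's code): with single-precision
// rounding, the result of the "same" arithmetic depends on the order of
// operations. GPUs differ in precision, rounding, and operation order,
// which is why identical GLSL can produce slightly different pixels.
const f = Math.fround;        // round a value to 32-bit float precision

const big = f(1e8);           // exactly representable as a float32

// (big + 1) - big, rounding after every step as a float32 unit would:
const addThenSubtract = f(f(big + 1) - big);   // 0: the +1 is rounded away

// (big - big) + 1, the same operands in a different order:
const subtractThenAdd = f(f(big - big) + 1);   // 1

console.log(addThenSubtract, subtractThenAdd); // prints "0 1"
```

Different GPUs make different choices about precision, rounding, and instruction ordering, so these tiny discrepancies end up encoded in the rendered pixels and can serve as a hardware fingerprint.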

So, to make the computation uniform across computers, they move the rotation out of the vertex shader into JavaScript, and move the work of the shape-assembly and rasterization stages into the fragment shader. After that, the calculation is uniform across computers.
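
As a rough sketch of the first half of that idea (my own code, not the paper's implementation), the rotation can be computed in JavaScript, where IEEE-754 double-precision arithmetic behaves the same on every machine, and the vertex shader only passes the pre-rotated positions through:

```typescript
// Sketch only: rotate vertices on the CPU instead of in the vertex shader,
// so the trigonometry runs in deterministic double precision rather than
// on GPU-specific floating-point hardware.

// A hypothetical 2D triangle, stored as [x0, y0, x1, y1, x2, y2].
const vertices = new Float32Array([0.0, 0.5, -0.5, -0.5, 0.5, -0.5]);

function rotateOnCpu(verts: Float32Array, angle: number): Float32Array {
  const out = new Float32Array(verts.length);
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  for (let i = 0; i < verts.length; i += 2) {
    const x = verts[i], y = verts[i + 1];
    out[i]     = x * c - y * s;   // standard 2D rotation, once per vertex
    out[i + 1] = x * s + y * c;
  }
  return out;                     // upload with gl.bufferData as usual
}

// The vertex shader no longer rotates anything; it just forwards positions:
const vertexShaderSource = `
  attribute vec2 a_position;
  void main() {
    gl_Position = vec4(a_position, 0.0, 1.0);
  }
`;
```

Storing the results in a Float32Array still rounds to single precision, but that conversion follows IEEE 754 and is the same everywhere; what varies across GPUs is how the shader itself evaluates the math.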

However, even though this makes the results uniform, whether they remain accurate is another question. For example, consider a web game that connects to a server and lets players interact with each other on a map. If the client sends back its position and the server judges the interactions, this approach can create conflicts between client and server: the server does no rendering, while the client renders the interface with the modified pipeline, so the position the client computes may drift slightly from what the server expects. It is as if we want to walk in a straight line but actually move along a slanted one. That is to say, we cannot use this method on this kind of website.

On the other hand, floating-point operations can also cause problems in some machine learning settings. Shaofeng Li et al. have a paper named Invisible Backdoor Attacks Against Deep Neural Networks. They create a backdoor attack by adding invisible pixel perturbations to images. The picture below illustrates that people cannot distinguish between the original image and the poisoned one.
(a) Original image; (b) poisoned image with an L2-norm of 2; (c) poisoned image with an L0-norm of 2.
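
As a toy illustration of what adding an invisible perturbation means (my own sketch, not the paper's method), the poisoned image is just the original plus a trigger pattern whose norm is kept small enough to be imperceptible:

```typescript
// Toy sketch (not the paper's code): poison an image by adding a trigger
// perturbation whose L2 norm is capped at a small budget such as 2.

// Images are treated here as flat arrays of pixel values in [0, 255].
function addInvisibleTrigger(
  image: Float64Array,
  trigger: Float64Array,
  maxL2: number,
): Float64Array {
  // Scale the trigger down if its L2 norm exceeds the budget.
  const l2 = Math.sqrt(trigger.reduce((sum, v) => sum + v * v, 0));
  const scale = l2 > maxL2 ? maxL2 / l2 : 1;

  const poisoned = new Float64Array(image.length);
  for (let i = 0; i < image.length; i++) {
    // Clamp to the valid pixel range after adding the tiny perturbation.
    poisoned[i] = Math.min(255, Math.max(0, image[i] + trigger[i] * scale));
  }
  return poisoned;
}
```

With a budget as small as an L2 norm of 2 spread over a whole image, each pixel changes by far less than one intensity level, which is why the two images look identical to a human.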

Based on Shujiang's conclusion, however, this kind of backdoor will render differently on different computers. So if we train the model on backdoored images on computer A and then test or deploy it on computer B, the backdoor pixels of those images on computer B will differ from the ones used in training.

Also, as we know, many attacks rely on adding noise that is too subtle to notice, whether in audio or in images. I am not sure whether the model can still recognize such a trigger successfully; more experiments are needed to prove it.