How is a Gaussian blur filter used in a shader?

In the second pass, another one-dimensional kernel is used to blur in the remaining direction. The resulting effect is the same as convolving with a two-dimensional kernel in a single pass, but it requires fewer calculations. The complete vertical Gaussian filter is part of the demo, which you can download at the end of the post.
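
A minimal sketch of such a vertical pass might look like the following; the image and texelHeight uniforms, the texCoord varying, and the 5-tap kernel weights are illustrative rather than the demo's exact code:

    // Vertical pass of a separable Gaussian blur (illustrative sketch).
    // 'image' and 'texelHeight' are assumed names, not taken from the demo.
    uniform sampler2D image;      // output of the horizontal pass
    uniform float texelHeight;    // 1.0 / texture height
    varying vec2 texCoord;

    void main()
    {
        // 5-tap normalized Gaussian kernel (sigma about 1.0), applied along Y only
        vec4 sum = vec4(0.0);
        sum += 0.0545 * texture2D(image, texCoord + vec2(0.0, -2.0 * texelHeight));
        sum += 0.2442 * texture2D(image, texCoord + vec2(0.0, -1.0 * texelHeight));
        sum += 0.4026 * texture2D(image, texCoord);
        sum += 0.2442 * texture2D(image, texCoord + vec2(0.0,  1.0 * texelHeight));
        sum += 0.0545 * texture2D(image, texCoord + vec2(0.0,  2.0 * texelHeight));
        gl_FragColor = sum;
    }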

Do you need to rasterize a quad in a GLSL shader?

If each pixel is handled by a separate thread, you don't need this loop anymore: you just rasterize a quad and apply a pixel shader that reads the texture at the current rasterized point and outputs the transformed pixel value to the render target (or the screen).
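
Drawing such a full-screen quad only needs a trivial vertex shader, as in this sketch; the position attribute and texCoord varying are assumed names, and the quad's corners are assumed to already be in normalized device coordinates:

    // Pass-through vertex shader for a full-screen quad (sketch).
    attribute vec2 position;   // quad corners at (-1,-1) .. (1,1)
    varying vec2 texCoord;     // passed on to the fragment shader

    void main()
    {
        texCoord = 0.5 * position + 0.5;          // map [-1,1] to [0,1] texture space
        gl_Position = vec4(position, 0.0, 1.0);   // already in clip space, no projection
    }

The fragment shader then runs once per covered pixel, which is what replaces the explicit loop over pixels.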

Can you treat all pixels independently in OpenGL?

Most of the time, you can treat pixels independently. For instance, increasing the contrast of an image usually requires you to loop over all pixels and apply an affine transform to the pixel values.
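
As a sketch, that affine transform expressed as a fragment shader could look like this; the image, contrast and brightness uniform names are assumptions for illustration:

    // Contrast and brightness as an affine transform of pixel values (sketch).
    uniform sampler2D image;     // assumed uniform name
    uniform float contrast;      // e.g. 1.2
    uniform float brightness;    // e.g. 0.0
    varying vec2 texCoord;

    void main()
    {
        vec3 color = texture2D(image, texCoord).rgb;
        // affine transform around mid-grey: a * (x - 0.5) + 0.5 + b
        vec3 adjusted = contrast * (color - 0.5) + 0.5 + brightness;
        gl_FragColor = vec4(clamp(adjusted, 0.0, 1.0), 1.0);
    }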

Why do I use GLSL instead of CUDA?

The first obvious answer is that you gain parallelism. Now, why use GLSL rather than, say, CUDA, which is more flexible? GLSL doesn't require you to have an NVIDIA graphics card, so it's a much more portable solution (you'd still have the option of OpenCL, though).

How does a fragment shader work in GLSL?

Fragment shaders can access the fragment position, and all the interpolated data computed in the rasterization process. The shader performs computations based on these attributes and the pixel's position. The pixel's X,Y position is fixed, i.e. a fragment shader cannot choose to write the attributes of another pixel.
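
For example, the built-in gl_FragCoord exposes that fixed window position; this sketch (with an assumed resolution uniform) simply visualizes it:

    // Reading the fragment's fixed window position (sketch).
    uniform vec2 resolution;   // assumed uniform: viewport size in pixels

    void main()
    {
        // The shader can read its own X,Y position through gl_FragCoord,
        // but it cannot redirect its output to a different pixel.
        vec2 uv = gl_FragCoord.xy / resolution;
        gl_FragColor = vec4(uv, 0.0, 1.0);   // position shown as a color gradient
    }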

Can a fragment shader change the depth of a pixel?

Although the pixel's X,Y position is fixed and a fragment shader cannot write the attributes of another pixel, it can change the pixel's depth (the Z value).
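
For instance, a fragment shader can override the interpolated depth by writing to the built-in gl_FragDepth, as in this sketch (heightMap is an assumed uniform name):

    // Replacing the pixel's depth from the fragment shader (sketch).
    uniform sampler2D heightMap;   // assumed uniform name
    varying vec2 texCoord;

    void main()
    {
        float h = texture2D(heightMap, texCoord).r;
        gl_FragColor = vec4(vec3(h), 1.0);   // show the height as a grey value
        // X and Y are fixed, but the depth written to the depth buffer can be changed:
        gl_FragDepth = h;
    }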

Does the fragment shader have access to the framebuffer?

The fragment shader does not have access to the framebuffer, neither at the current pixel’s position, nor at any other pixel position. The fragment shader, similarly to the vertex shader, only has access to the current pixel and its associated data.

Can a fragment shader be used in rasterization?

The fragments that were found in the rasterization process, and whose attributes were computed in the interpolation phase, are now fed, one by one, to the fragment shader. This shader is not optional, unless transform feedback is being used.