
Practical Assignment 4

Names: Andrea Zablah and Carlos Salazar

Problem 4.1

Vertex Normals (Points 20)

Rather than storing a single geometry normal per triangle, it is often useful to store a corresponding vertex normal at each vertex. The advantage is that, given a hit point on a triangle, the shading normal can be smoothly interpolated between the vertex normals. If neighboring triangles share the same vertex normals, a smooth appearance can be generated over non-smooth, tessellated geometry.
Proceed as follows:

  1. Fork the current repository.
  2. Modify the README.md file in your fork and put your name (or names if you work in a group) above.
  3. Extend CScene::ParseOBJ() to also support vertex normals. Take a look at the included .obj files in the data folder.
  4. Turn off BSP-support by disabling the flag ENABLE_BSP in the types.h file or in the cmake-gui.exe application.
  5. Your ray class is extended with two additional float values called Ray::u and Ray::v.
  6. In bool CPrimTriangle::Intersect(Ray& ray), store the computed barycentric coordinates into Ray::u and Ray::v.
    Note: As long as your other classes (e.g. CPrimSphere) don’t need local surface coordinates, there is no need to compute them yet.
  7. The framework contains a new class CPrimTriangleSmooth, which stores the vertex normals (na, nb and nc) in addition to the original vertex positions.
  8. In Vec3f CPrimTriangleSmooth::GetNormal(const Ray& ray), use the u/v coordinates of the hit point to interpolate between the vertex normals (a sketch follows after this list).
    Note: Interpolating normalized vectors will not return a normalized vector! Make sure to normalize your interpolated normal!
  9. Test your implementation with cylinder16.obj and cone32.obj, using the appropriate camera, which you can choose in Scene.h. Compare the difference between the regular and the smooth triangles.
    If everything is correct, your images should look like this:
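For orientation, steps 6 and 8 amount to only a few lines each. Here is a minimal sketch; the names na, nb, nc follow the description above, while normalize() and the exact signatures are assumptions that may differ in your framework:

```cpp
// Step 6, inside CPrimTriangle::Intersect(Ray& ray): once the triangle
// intersection test has accepted a hit, keep the barycentric coordinates:
//     ray.u = u;   // barycentric weight of vertex b
//     ray.v = v;   // barycentric weight of vertex c

// Step 8: the weight of vertex a is 1 - u - v, so the shading normal is the
// barycentric blend of the three vertex normals. Re-normalize the result,
// since interpolation does not preserve unit length.
Vec3f CPrimTriangleSmooth::GetNormal(const Ray& ray) const
{
    return normalize((1.0f - ray.u - ray.v) * na + ray.u * nb + ray.v * nc);
}
```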

Problem 4.2

Procedural Bump Mapping (Points 20)

In the last exercise you learned that the appearance of a surface can be changed by using a modified surface normal. In this exercise we will implement a technique called bump mapping, which perturbs the surface normal sideways so that the surface gives the impression of being bumped. This often makes it possible to generate the appearance of a highly complex surface with only very few primitives. In order to do this, three parameters have to be known for each surface point:

  1. The original surface normal N.
  2. A local coordinate frame for this surface point. Though any coordinate frame can be used (as long as it is consistent), the usual way is to use the surface derivatives in u and v direction, called dPdu and dPdv.
  3. The amounts delta_u and delta_v of deviation along these tangent vectors. The deviation is usually either read from a texture or computed procedurally. The final normal during shading (also for reflections) is then N' = Normalized(N + delta_u * dPdu + delta_v * dPdv).

In this exercise, you will implement a very basic version of this:

  • As surface derivatives, use dPdu = (1, 0, 0) and dPdv = (0, 0, 1).
  • For the amount of deviation, use a simple procedural approach by computing delta_u = 0.5 * cos(3 * H_x * sin(H_z)) and delta_v = 0.5 * sin(13 * H_z), where H denotes the hit point of the ray with the surface.

For your implementation, proceed as follows:

  • Implement the Shade() method in ShaderPhongBumpMapped.h by first copying the Shade() method from the basic Phong shader and then modifying the normal at the beginning of the function, following the guidelines given above (a sketch follows below). If your shader works correctly, you should get an image like this using the scene description in main.cpp:
    (reference image: bump)
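Concretely, the normal modification at the top of Shade() could look like the following sketch. The hit-point reconstruction via ray.org + ray.t * ray.dir and the ray.hit->GetNormal(ray) accessor are assumptions about the framework's naming, so adjust them to your interfaces:

```cpp
// Hypothetical start of CShaderPhongBumpMapped::Shade(const Ray& ray):
Vec3f h = ray.org + ray.t * ray.dir;   // hit point H
Vec3f n = ray.hit->GetNormal(ray);     // original surface normal N

// Fixed tangent frame and procedural deviation from the assignment:
Vec3f dPdu(1, 0, 0);
Vec3f dPdv(0, 0, 1);
float delta_u = 0.5f * cosf(3.0f * h[0] * sinf(h[2]));   // depends on H_x, H_z
float delta_v = 0.5f * sinf(13.0f * h[2]);               // depends on H_z

// N' = Normalized(N + delta_u * dPdu + delta_v * dPdv)
Vec3f nBumped = normalize(n + delta_u * dPdu + delta_v * dPdv);
// ... continue with the copied Phong code, using nBumped in place of n ...
```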

Tip: The concept of a local coordinate frame will also be necessary for many other shading concepts, such as procedural shading. For the exam, you will probably need such a concept, as the hardcoded version above will not be powerful enough.
As getting the correct surface derivatives can be somewhat complicated, there is also a simpler (though not as powerful) way of getting an orthonormal surface coordinate frame from the surface normal, similar to the way we generate the orthonormal camera coordinate frame from the camera direction. For a detailed discussion of surface derivatives, see Matt Pharr's book Physically Based Rendering.
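If you want to experiment with that simpler construction, one common approach (a sketch, not part of the framework) picks a helper axis that is not parallel to the normal and applies two cross products, just as with the camera frame; cross() and normalize() are assumed to exist on your vector type:

```cpp
// Build an orthonormal frame (t, b, n) around a unit surface normal n.
void buildLocalFrame(const Vec3f& n, Vec3f& t, Vec3f& b)
{
    // Pick the world axis least aligned with n to avoid a degenerate cross product.
    Vec3f helper = (fabsf(n[0]) > 0.9f) ? Vec3f(0, 1, 0) : Vec3f(1, 0, 0);
    t = normalize(helper.cross(n));   // first tangent
    b = n.cross(t);                   // second tangent, already unit length
}
```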

Problem 4.3

Texturing (Points 30)

Until now we have used only one color per object. In practice, however (e.g. in games), objects are very often colorful thanks to textures. Texturing is also a way of making a surface look more geometrically complex. Usually, textures are used by storing texture coordinates at the vertices of each triangle and interpolating those to find the correct texel to use for a surface point.

  1. Turn BSP-support back on (re-enable the ENABLE_BSP flag).
  2. The framework contains a new class CPrimTriangleSmoothTextured (derived from CPrimTriangleSmooth) that additionally has the three fields Vec2f ta, tb, tc, which correspond to the texture coordinates at vertex a, b, and c, respectively. We use Vec2f (not Vec3f) to store the texture coordinates, because they require only two components. Add support for texture coordinates to your parser (CScene::ParseOBJ()).
  3. Implement the method Vec2f CPrimTriangleSmoothTextured::getUV(const Ray& ray) const, which is now a virtual method in your primitive base class. In CPrimTriangleSmoothTextured, implement this function to return the x and y coordinates of the interpolated vertex texture coordinates. (For other primitives, just ignore it; we'll only use texture shaders with triangles for now.)
  4. Implement the CShaderEyelightTextured::Shade() method to use the texture coordinates returned by getUV() and combine the texel color with the calculated eyelight color using the component-wise product (see the sketch after this list).
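As a reference for steps 3 and 4, here is a hedged sketch. The fields ta, tb, tc come from the description above; ray.hit, the texture member m_texture, its getTexel() lookup, and the component-wise mul() are hypothetical names to be replaced by your framework's equivalents:

```cpp
// Step 3: texture coordinates interpolate exactly like vertex normals,
// with barycentric weights (1 - u - v, u, v) for vertices a, b, c.
Vec2f CPrimTriangleSmoothTextured::getUV(const Ray& ray) const
{
    return (1.0f - ray.u - ray.v) * ta + ray.u * tb + ray.v * tc;
}

// Step 4: the texel modulates the eyelight color component-wise.
Vec3f CShaderEyelightTextured::Shade(const Ray& ray) const
{
    Vec2f uv       = ray.hit->getUV(ray);           // interpolated texture coords
    Vec3f texel    = m_texture.getTexel(uv);        // hypothetical texture lookup
    Vec3f eyelight = CShaderEyelight::Shade(ray);   // base eyelight shading
    return texel.mul(eyelight);                     // component-wise product
}
```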

Test your implementation on barney.obj with barney.bmp. If everything is correct, your image should look like this: (reference image: barney)

Problem 4.4

Supersampling (Points 10 + 10 + 10)

A pixel actually corresponds to a square area. Currently you are sampling each pixel only at its center, which leads to aliasing. As you have learned in the lecture, the simplest way to remove aliasing artifacts from your image is supersampling, i.e. shooting more than one ray per pixel. The three most frequently used supersampling strategies are:

Regular Sampling: The pixel is subdivided into n = m x m equally sized regions, each sampled at its center:
((i + 0.5)/m, (j + 0.5)/m) for i, j = 0 .. m-1

Random Sampling: The pixel is sampled by n randomly placed samples, with each coordinate drawn uniformly from [0, 1):
(e_{i,1}, e_{i,2}) for i = 0 .. n-1

Stratified Sampling: Stratified sampling is a combination of regular and random sampling. One sample is placed randomly within each of the n = m x m regions, with a fresh pair of offsets e_{ij,1}, e_{ij,2} in [0, 1) per region:
((i + e_{ij,1})/m, (j + e_{ij,2})/m) for i, j = 0 .. m-1

In this exercise your task is to implement these sampling strategies:

  • In the framework you can find an abstract base class CSampleGenerator with a single virtual method void SampleGenerator::GetSamples(int n, float *u, float *v, float *weight), which works as follows: n is the number of samples to be generated for a pixel. One sample consists of two coordinates (u, v) that specify a position within the pixel. The n generated samples are returned in the u and v arrays, where each (u, v) lies in the domain [0 .. 1) X [0 .. 1). The weights of the individual samples must sum to 1; here, just use uniform weights with weight[i] = 1.0f/n.
  • In your main loop, produce n samples and fire n rays through the pixel at the respective sample positions. The resulting color values must be weighted by weight[i] and summed up, which yields the final pixel result.
  • Implement the getSamples() method in SampleGeneratorRegular.h, SampleGeneratorRandom.h, and SampleGeneratorStratified.h, which are derived from CSampleGenerator (a sketch follows below). Use ground.obj and cb.bmp to render your image with 4 samples and compare it to the following images: (regular) (random) (stratified)
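For reference, here is a compact, standalone sketch of the three strategies, using rand() for simplicity; it fills the u, v, and weight arrays exactly as GetSamples() is described above, and assumes n is a perfect square n = m * m for the regular and stratified cases:

```cpp
#include <cmath>
#include <cstdlib>

// Uniform random number in [0, 1).
static float rand01() { return (float)rand() / ((float)RAND_MAX + 1.0f); }

// Regular: one sample at the center of each of the m x m cells.
void getSamplesRegular(int n, float* u, float* v, float* weight)
{
    int m = (int)sqrtf((float)n);   // assumes n = m * m
    for (int j = 0, s = 0; j < m; j++)
        for (int i = 0; i < m; i++, s++) {
            u[s] = (i + 0.5f) / m;
            v[s] = (j + 0.5f) / m;
            weight[s] = 1.0f / n;
        }
}

// Random: n independent samples, uniform over the pixel.
void getSamplesRandom(int n, float* u, float* v, float* weight)
{
    for (int s = 0; s < n; s++) {
        u[s] = rand01();
        v[s] = rand01();
        weight[s] = 1.0f / n;
    }
}

// Stratified: one random sample inside each of the m x m cells.
void getSamplesStratified(int n, float* u, float* v, float* weight)
{
    int m = (int)sqrtf((float)n);   // assumes n = m * m
    for (int j = 0, s = 0; j < m; j++)
        for (int i = 0; i < m; i++, s++) {
            u[s] = (i + rand01()) / m;
            v[s] = (j + rand01()) / m;
            weight[s] = 1.0f / n;
        }
}
```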
