---
geometry: margin=2cm
output: pdf_document
title: Exam 2
subtitle: CSCI 5607
date: \today
author: |
  | Michael Zhang
  | zhan4854@umn.edu $\cdot$ ID: 5289259
---

\renewcommand{\c}[1]{\textcolor{gray}{#1}}
\renewcommand{\r}[1]{\textcolor{red}{#1}}
\newcommand{\now}[1]{\textcolor{blue}{#1}}
\newcommand{\todo}[0]{\textcolor{red}{\textbf{TODO}}}

## Reflection and Refraction

1. \c{Consider a sphere $S$ made of solid glass ($\eta = 1.5$) that has radius
$r = 3$ and is centered at the location $s = (2, 2, 10)$ in a vacuum ($\eta =
1.0$). If a ray emanating from the point $e = (0, 0, 0)$ intersects $S$ at a
point $p = (1, 4, 8)$:}

Legend for the rest of the problem:

![](1.jpg){width=50%}

a. \c{(2 points) What is the angle of incidence $\theta_i$?}

The incoming ray is in the direction $I = v_0 = p - e = (1, 4, 8)$, and the
normal at that point is:

- $N = p - s = (1, 4, 8) - (2, 2, 10) = (-1, 2, -2)$

The angle can be found by taking the opposite of the incoming ray $-I$ and
using the formula:

- $\cos \theta_i = \frac{-I \cdot N}{\|I\|_2 \|N\|_2}
  = \frac{(-1, -4, -8) \cdot (-1, 2, -2)}{9 \cdot 3} = \frac{1 - 8 + 16}{27} = \frac{1}{3}$.

So the angle is $\boxed{\theta_i = \cos^{-1}\left(\frac{1}{3}\right)}$.
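This dot product is easy to double-check numerically; a quick `numpy` sketch
(the variable names here are mine, not part of the problem):

```python
import numpy as np

p = np.array([1.0, 4.0, 8.0])   # intersection point
e = np.zeros(3)                 # ray origin
s = np.array([2.0, 2.0, 10.0])  # sphere center

I = p - e                       # incident direction v_0
N = p - s                       # outward normal at p
cos_i = np.dot(-I, N) / (np.linalg.norm(I) * np.linalg.norm(N))
theta_i = np.arccos(cos_i)      # cos_i == 1/3
```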

b. \c{(1 point) What is the angle of reflection $\theta_r$?}

The angle of reflection always equals the angle of incidence, so $\theta_r =
\theta_i = \boxed{\cos^{-1}\left(\frac{1}{3}\right)}$.

c. \c{(3 points) What is the direction of the reflected ray?}

The reflected ray can be found by first projecting $e - p = -v_0$ onto the
unit normal $n = \frac{N}{\|N\|_2} = \left(-\frac{1}{3}, \frac{2}{3},
-\frac{2}{3}\right)$:

- $proj = \|v_0\|_2 \cos(\theta_i) \, n = 9 \times \frac{1}{3} \times \left(-\frac{1}{3}, \frac{2}{3}, -\frac{2}{3}\right) = (-1, 2, -2)$

The foot of the perpendicular from $e$ onto the normal line through $p$ is
then:

- $nx = p + proj = (1, 4, 8) + (-1, 2, -2) = (0, 6, 6)$.

Now, reflecting the ray's origin across the normal line means moving by the
same displacement again. Since the ray starts at the origin, that
displacement is still $(0, 6, 6)$; adding it to the point $nx$ gets us $(0,
12, 12)$, which means a point at the origin gets reflected to $(0, 12, 12)$.

Finally, subtract the intersection point to get the final answer: $(0, 12,
12) - (1, 4, 8) = \boxed{(-1, 8, 4)}$.
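The same direction falls out of the standard mirror-reflection formula
$R = I - 2(I \cdot n)\,n$; a quick `numpy` check:

```python
import numpy as np

I = np.array([1.0, 4.0, 8.0])          # incident direction v_0
n = np.array([-1.0, 2.0, -2.0]) / 3.0  # unit normal at p

R = I - 2.0 * np.dot(I, n) * n         # reflect I about the normal
# R == (-1, 8, 4)
```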

d. \c{(3 points) What is the angle of transmission $\theta_t$?}

Using Snell's law, we know that

\begin{align}
\eta_i \sin \theta_i &= \eta_t \sin \theta_t \\
\r{1.0} \times \sin \theta_i &= \r{1.5} \times \sin \theta_t \\
1.0 \times \sin(\r{ \cos^{-1}(\frac{1}{3}) }) &= 1.5 \times \sin \theta_t \\
1.0 \times \r{ \frac{2\sqrt{2}}{3} } &= 1.5 \times \sin \theta_t \\
\r{ \frac{1.0}{1.5} } \times \frac{2\sqrt{2}}{3} &= \sin \theta_t \\
\r{ \frac{4}{9} } \sqrt{2} &= \sin \theta_t \\
\theta_t &= \sin^{-1} \left(\frac{4}{9}\sqrt{2}\right) \approx
\boxed{0.67967 \ldots}
\end{align}
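A quick numerical check of this derivation (plain `math`, nothing
exam-specific):

```python
import math

theta_i = math.acos(1.0 / 3.0)
# Snell's law: eta_i * sin(theta_i) = eta_t * sin(theta_t)
sin_t = (1.0 / 1.5) * math.sin(theta_i)  # == 4*sqrt(2)/9
theta_t = math.asin(sin_t)               # ~ 0.67967 rad
```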

e. \c{(4 points) What is the direction of the transmitted ray?}

We can just add together $s - p$ and $v_5$ in the diagram above; these two
make up an orthogonal basis for the transmitted ray $v_4$. The lengths of
these two are not important on their own, but they are related through the
angle $\theta_t$.

Also note that in the diagram, the sphere is drawn as a 2D circle, but in
reality the number of vectors perpendicular to $s - p$ is infinite. The
$v_5$ we're looking for is parallel with $v_1$, so we can just use
$\frac{v_1}{\|v_1\|_2}$ as our unit vector for $v_5$.

\begin{align}
v_4 &= (s - p) + \tan \theta_t \times \|s - p\|_2 \times \frac{v_1}{\|v_1\|_2} \\
v_4 &= (\r{ (2, 2, 10) - (1, 4, 8) }) + \tan \theta_t \times \|s - p\|_2 \times \frac{v_1}{\|v_1\|_2} \\
v_4 &= \r{ (1, -2, 2) } + \tan \theta_t \times \|\r{(1, -2, 2)}\|_2 \times \frac{v_1}{\|v_1\|_2} \\
v_4 &= (1, -2, 2) + \tan \theta_t \times \r{3} \times \frac{v_1}{\|v_1\|_2} \\
v_4 &= (1, -2, 2) + \tan \theta_t \times 3 \times \frac{\r{(0, 6, 6)}}{\|\r{(0, 6, 6)}\|_2} \\
v_4 &= (1, -2, 2) + \tan \theta_t \times 3 \times \r{\left(0, \frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}\right)} \\
v_4 &= (1, -2, 2) + \tan \theta_t \times \left(0, \r{\frac{3}{2}\sqrt{2}}, \r{\frac{3}{2}\sqrt{2}} \right) \\
v_4 &= (1, -2, 2) + \r{\frac{4}{7}\sqrt{2}} \times \left(0, \frac{3}{2}\sqrt{2}, \frac{3}{2}\sqrt{2} \right) \\
v_4 &= (1, -2, 2) + \r{\left(0, \frac{12}{7}, \frac{12}{7} \right)} \\
v_4 &= \r{\left(1, -\frac{2}{7}, \frac{26}{7} \right)}
\end{align}

This can be normalized into $\boxed{\left(\frac{7}{27}, -\frac{2}{27},
\frac{26}{27}\right)}$.
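As a cross-check, the standard refraction-vector formula
$t = \frac{\eta_i}{\eta_t} I + \left(\frac{\eta_i}{\eta_t}\cos\theta_i -
\cos\theta_t\right) n$ (with $I$ and $n$ unit vectors and $n$ opposing $I$)
gives the same normalized direction:

```python
import numpy as np

def refract(I, n, eta_i, eta_t):
    # I: unit incident direction; n: unit normal with n . I < 0
    r = eta_i / eta_t
    cos_i = -np.dot(I, n)
    cos_t = np.sqrt(1.0 - r * r * (1.0 - cos_i * cos_i))
    return r * I + (r * cos_i - cos_t) * n

I = np.array([1.0, 4.0, 8.0]) / 9.0    # unit v_0
n = np.array([-1.0, 2.0, -2.0]) / 3.0  # unit normal at p
t = refract(I, n, 1.0, 1.5)
# t == (7/27, -2/27, 26/27)
```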

## Geometric Transformations

2. \c{(8 points) Consider the airplane model below, defined in object
coordinates with its center at $(0, 0, 0)$, its wings aligned with the $\pm
x$ axis, its tail pointing upwards in the $+y$ direction and its nose facing
in the $+z$ direction. Derive a sequence of model transformation matrices
that can be applied to the vertices of the airplane to position it in space
at the location $p = (4, 4, 7)$, with a direction of flight $w = (2, 1, -2)$
and the wings aligned with the direction $d = (-2, 2, -1)$.}

The order we want is (1) do all the rotations, and then (2) translate to the
spot we want. The rotation is done in multiple steps:

- First we want to make sure the nose of the plane points in the correct
direction in the $xz$ plane (rotating around the $y$ axis). The desired
resulting direction is $(2, y, -2)$, so for a nose currently facing the $+z$
direction, which is $(0, y, 1)$, we want to rotate about the $y$ axis by
$\theta_1 = -\frac{3}{4}\pi$. The transformation matrix is:

$$
M_1 = \begin{bmatrix}
\cos \theta_1 & 0 & \sin \theta_1 & 0 \\
0 & 1 & 0 & 0 \\
-\sin \theta_1 & 0 & \cos \theta_1 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
$$

- Then we want to rotate the plane vertically, so it's pointing in the right
direction: since the direction of flight was originally $(0, 0, 1)$, we have
to transform it to (a normalized copy of) $(2, 1, -2)$.

- Finally, translate the plane to $p = (4, 4, 7)$:

$$
\begin{bmatrix}
1 & 0 & 0 & x \\
0 & 1 & 0 & y \\
0 & 0 & 1 & z \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 4 \\
0 & 1 & 0 & 4 \\
0 & 0 & 1 & 7 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
$$
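An alternative to chaining Euler rotations is to build the rotation directly
from the target frame: its columns are where the object's $x$ (wings), $y$
(tail), and $z$ (nose) axes should land. A `numpy` sketch (variable names
are mine; this works here because $w$ and $d$ happen to be orthogonal):

```python
import numpy as np

w = np.array([2.0, 1.0, -2.0]) / 3.0   # unit nose / flight direction
d = np.array([-2.0, 2.0, -1.0]) / 3.0  # unit wing direction
u = np.cross(w, d)                     # unit tail/up direction

R = np.eye(4)
R[:3, 0] = d  # object +x (wings) lands on d
R[:3, 1] = u  # object +y (tail) lands on u
R[:3, 2] = w  # object +z (nose) lands on w

T = np.eye(4)
T[:3, 3] = [4.0, 4.0, 7.0]  # translate to p

M = T @ R  # rotate first, then translate
```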

3. \c{Consider the earth model shown below, which is defined in object
coordinates with its center at $(0, 0, 0)$, the vertical axis through the
north pole aligned with the direction $(0, 1, 0)$, and a horizontal plane
through the equator that is spanned by the axes $(1, 0, 0)$ and $(0, 0, 1)$.}

a. \c{(3 points) What model transformation matrix could you use to tilt the
vertical axis of the globe by $23.5^\circ$ away from $(0, 1, 0)$, to achieve
the pose shown in the image on the right?}

You could use a 3D rotation matrix. Since the axis of rotation is the
$z$-axis, the rotation matrix would look like:

$$
M =
\begin{bmatrix}
\cos(23.5^\circ) & \sin(23.5^\circ) & 0 & 0 \\
-\sin(23.5^\circ) & \cos(23.5^\circ) & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
$$

b. \c{(5 points) What series of rotation matrices could you apply to the
globe model to make it spin about its tilted axis of rotation, as suggested
in the image on the right?}

One way would be to rotate it back to its normal orientation, apply the spin
by whatever angle $\theta(t)$ it has spun at time $t$, and then put it back
into its $23.5^\circ$ orientation. The spin itself is a rotation around the
$y$-axis, so (reading the factors right to left) the matrix looks like:

$$
M' =
M
\begin{bmatrix}
\cos(\theta(t)) & 0 & \sin(\theta(t)) & 0 \\
0 & 1 & 0 & 0 \\
-\sin(\theta(t)) & 0 & \cos(\theta(t)) & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
M^{-1}
$$
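A quick numerical sanity check (using only the $3 \times 3$ linear parts)
that the tilted pole really is a fixed point of this sandwiched spin:

```python
import numpy as np

tilt = np.radians(23.5)
# linear part of the tilt matrix M from part (a)
M = np.array([
    [np.cos(tilt),  np.sin(tilt), 0.0],
    [-np.sin(tilt), np.cos(tilt), 0.0],
    [0.0,           0.0,          1.0],
])

theta = 1.234  # arbitrary spin angle theta(t)
R_y = np.array([
    [np.cos(theta),  0.0, np.sin(theta)],
    [0.0,            1.0, 0.0],
    [-np.sin(theta), 0.0, np.cos(theta)],
])

M_spin = M @ R_y @ np.linalg.inv(M)   # un-tilt, spin, re-tilt
axis = M @ np.array([0.0, 1.0, 0.0])  # the tilted pole
# M_spin leaves the tilted axis fixed: M_spin @ axis == axis
```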

c. \c{[5 points extra credit] What series of rotation matrices could you use
to send the tilted, spinning globe model on a circular orbit of radius $r$
around the point $(0, 0, 0)$ within the $xz$ plane, as illustrated below?}

In the image, the globe itself does not rotate, but I'm going to assume it
revolves around the sun at a different angle $\phi(t)$. The solution here
would be to first translate the globe out to the orbit radius (e.g. to $(r,
0, 0)$), and then rotate it about the $y$-axis by $\phi(t)$, applying both
after the tilt-and-spin matrix $M'$ above:

$$
M'' =
\begin{bmatrix}
\cos(\phi(t)) & 0 & \sin(\phi(t)) & 0 \\
0 & 1 & 0 & 0 \\
-\sin(\phi(t)) & 0 & \cos(\phi(t)) & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & r \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
M'
$$

(The middle factor is a translation rather than a rotation, but it is needed
to put the globe at radius $r$ before the orbital rotation.)

## The Camera/Viewing Transformation

4. \c{Consider the viewing transformation matrix $V$ that enables all of the
vertices in a scene to be expressed in terms of a coordinate system in which
the eye is located at $(0, 0, 0)$, the viewing direction ($-n$) is aligned
with the $-z$ axis $(0, 0, -1)$, and the camera's 'up' direction (which
controls the roll of the view) is aligned with the $y$ axis $(0, 1, 0)$.}

a. \c{(4 points) When the eye is located at $e = (2, 3, 5)$, the camera is
pointing in the direction $(1, -1, -1)$, and the camera's 'up' direction is
$(0, 1, 0)$, what are the entries in $V$?}

- Viewing direction is $(1, -1, -1)$.
- $n$ is the normalized opposite of the viewing direction: $n = \left(-\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}\right)$.
- $u = up \times n = \left(\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}}\right)$ (after normalizing).
- $v = n \times u = \left(\frac{1}{\sqrt{6}}, \frac{2}{\sqrt{6}}, -\frac{1}{\sqrt{6}}\right)$
- $d_x = - (eye \cdot u) = - (2 \times \frac{1}{\sqrt{2}} + 5 \times \frac{1}{\sqrt{2}}) = -\frac{7}{\sqrt{2}}$
- $d_y = - (eye \cdot v) = - (2 \times \frac{1}{\sqrt{6}} + 3 \times \frac{2}{\sqrt{6}} - 5 \times \frac{1}{\sqrt{6}}) = -\frac{3}{\sqrt{6}}$
- $d_z = - (eye \cdot n) = - (-2 \times \frac{1}{\sqrt{3}} + 3 \times \frac{1}{\sqrt{3}} + 5 \times \frac{1}{\sqrt{3}}) = -\frac{6}{\sqrt{3}}$

$$
\begin{bmatrix}
\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} & -\frac{7}{\sqrt{2}} \\
\frac{1}{\sqrt{6}} & \frac{2}{\sqrt{6}} & -\frac{1}{\sqrt{6}} & -\frac{3}{\sqrt{6}} \\
-\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & -\frac{6}{\sqrt{3}} \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
$$

Also solved using a Python script:

```py
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def view_matrix(camera_pos, view_dir, up_dir):
    n = unit(-view_dir)
    u = unit(np.cross(up_dir, n))
    v = np.cross(n, u)
    return np.array([
        [u[0], u[1], u[2], -np.dot(camera_pos, u)],
        [v[0], v[1], v[2], -np.dot(camera_pos, v)],
        [n[0], n[1], n[2], -np.dot(camera_pos, n)],
        [0, 0, 0, 1],
    ])

camera_pos = np.array([2, 3, 5])
view_dir = np.array([1, -1, -1])
up_dir = np.array([0, 1, 0])
V = view_matrix(camera_pos, view_dir, up_dir)
print(V)
```

b. \c{(2 points) How will this matrix change if the eye moves forward in the
direction of view? [which elements in V will stay the same? which elements
will change and in what way?]}

If the eye moves forward, only the translation entries, which depend on the
eye _position_, can change; the basis vectors stay the same. In fact, since
the motion is along $n$, which is orthogonal to $u$ and $v$, only $d_z$
actually changes (it grows by the distance moved).

| $n$  | $u$  | $v$  | $d_x$ | $d_y$ | $d_z$     |
| ---- | ---- | ---- | ----- | ----- | --------- |
| same | same | same | same  | same  | different |

The $n$ is the same because the viewing direction does not change.

c. \c{(2 points) How will this matrix change if the viewing direction spins
in the clockwise direction around the camera's 'up' direction? [which
elements in V will stay the same? which elements will change and in what
way?]}

In this case, the eye _position_ stays the same, and the basis spins about
the 'up' vector: $v$ is unchanged, while $n$ and $u$ rotate around it. The
translation entries follow the basis, so $d_y = -(eye \cdot v)$ stays the
same while $d_x$ and $d_z$ change.

| $n$       | $u$       | $v$  | $d_x$     | $d_y$ | $d_z$     |
| --------- | --------- | ---- | --------- | ----- | --------- |
| different | different | same | different | same  | different |
d. \c{(2 points) How will this matrix change if the viewing direction rotates
directly upward, within the plane defined by the viewing and 'up' directions?
[which elements in V will stay the same? which elements will change and in
what way?]}

In this case, the eye _position_ stays the same, and the rotation happens
about the side vector $u$: $u$ is unchanged, while $n$ and $v$ rotate within
the viewing/'up' plane. Correspondingly, $d_x = -(eye \cdot u)$ stays the
same while $d_y$ and $d_z$ change.

| $n$       | $u$  | $v$       | $d_x$ | $d_y$     | $d_z$     |
| --------- | ---- | --------- | ----- | --------- | --------- |
| different | same | different | same  | different | different |

5. \c{Suppose a viewer located at the point $(0, 0, 0)$ is looking in the $-z$
direction, with no roll ['up' = $(0, 1, 0)$], towards a cube of width 2,
centered at the point $(0, 0, -5)$, whose sides are colored: red at the plane
$x = 1$, cyan at the plane $x = -1$, green at the plane $y = 1$, magenta at
the plane $y = -1$, blue at the plane $z = -4$, and yellow at the plane $z =
-6$.}

a. \c{(1 point) What is the color of the cube face that the user sees?}

\boxed{\textrm{Blue}}

b. \c{(3 points) Because the eye is at the origin, looking down the $-z$ axis
with 'up' = $(0,1,0)$, the viewing transformation matrix $V$ in this case is
the identity $I$. What is the model matrix $M$ that you could use to rotate
the cube so that when the image is rendered, it shows the red side of the
cube?}

You would have to do a combination of (1) translate the cube to the origin,
(2) rotate around the origin, and then (3) translate back. This way, the
cube's center stays at $(0, 0, -5)$.

$$
M =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & -5 \\
0 & 0 & 0 & 1 \\
\end{bmatrix} \cdot
\begin{bmatrix}
0 & 0 & -1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix} \cdot
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 5 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
=
\boxed{\begin{bmatrix}
0 & 0 & -1 & -5 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & -5 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}}
$$

To verify this, testing with an example point $(1, 1, -4)$ yields:

$$
\begin{bmatrix}
0 & 0 & -1 & -5 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & -5 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
\cdot
\begin{bmatrix}
1 \\ 1 \\ -4 \\ 1
\end{bmatrix}
=
\begin{bmatrix}
-1 \\ 1 \\ -4 \\ 1
\end{bmatrix}
$$

c. \c{(4 points) Suppose now that you want to leave the model matrix $M$ as
the identity. What is the viewing matrix $V$ that you would need to use to
render an image of the scene from a re-defined camera configuration so that
when the scene is rendered, it shows the red side of the cube? Where is the
eye in this case and in what direction is the camera looking?}

For this, a different eye position will have to be used. Instead of looking
from the origin, you could view the cube from the red side, and then change
the viewing direction so it's still pointing at the cube.

- eye is located at $(5, 0, -5)$
- viewing direction is $(-1, 0, 0)$
- $n = (1, 0, 0)$
- $u = up \times n = (0, 0, -1)$
- $v = n \times u = (0, 1, 0)$
- $d = (-5, 0, -5)$

The final viewing matrix is $\boxed{\begin{bmatrix}
0 & 0 & -1 & -5 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & -5 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}}$. Turns out it's the same matrix! Wow!
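As a sanity check, feeding this camera configuration into the `view_matrix`
helper from problem 4 reproduces the same matrix:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def view_matrix(camera_pos, view_dir, up_dir):
    n = unit(-view_dir)
    u = unit(np.cross(up_dir, n))
    v = np.cross(n, u)
    return np.array([
        [u[0], u[1], u[2], -np.dot(camera_pos, u)],
        [v[0], v[1], v[2], -np.dot(camera_pos, v)],
        [n[0], n[1], n[2], -np.dot(camera_pos, n)],
        [0, 0, 0, 1],
    ])

V = view_matrix(np.array([5.0, 0.0, -5.0]),   # eye on the red side
                np.array([-1.0, 0.0, 0.0]),   # looking back at the cube
                np.array([0.0, 1.0, 0.0]))
# V == [[0, 0, -1, -5], [0, 1, 0, 0], [1, 0, 0, -5], [0, 0, 0, 1]]
```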

## The Projection Transformation

6. \c{Consider a cube of width $2\sqrt{3}$ centered at the point $(0, 0,
-3\sqrt{3})$, whose faces are colored light grey on the top and bottom $(y =
\pm\sqrt{3})$, dark grey on the front and back ($z = -2\sqrt{3}$ and $z =
-4\sqrt{3}$), red on the right $(x = \sqrt{3})$, and green on the left $(x =
-\sqrt{3})$.}

a. \c{Show how you could project the vertices of this cube to the plane $z =
0$ using an orthographic parallel projection:}

i) \c{(2 points) Where will the six vertex locations be after such a
projection, omitting the normalization step?}

- $[\begin{matrix}- \sqrt{3} & - \sqrt{3} & - 4 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}-1 & -1 & 7\end{matrix}]$
- $[\begin{matrix}- \sqrt{3} & \sqrt{3} & - 4 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}-1 & 1 & 7\end{matrix}]$
- $[\begin{matrix}\sqrt{3} & - \sqrt{3} & - 4 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1 & -1 & 7\end{matrix}]$
- $[\begin{matrix}\sqrt{3} & \sqrt{3} & - 4 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1 & 1 & 7\end{matrix}]$
- $[\begin{matrix}- \sqrt{3} & - \sqrt{3} & - 2 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}-1 & -1 & 5\end{matrix}]$
- $[\begin{matrix}- \sqrt{3} & \sqrt{3} & - 2 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}-1 & 1 & 5\end{matrix}]$
- $[\begin{matrix}\sqrt{3} & - \sqrt{3} & - 2 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1 & -1 & 5\end{matrix}]$
- $[\begin{matrix}\sqrt{3} & \sqrt{3} & - 2 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1 & 1 & 5\end{matrix}]$

ii) \c{(1 point) Sketch the result, being as accurate as possible and
labeling the colors of each of the visible faces.}

This is just a square with the dark grey side facing the camera. The other
sides are not visible because the cube's faces are axis-aligned, so an
orthographic projection along the $z$ axis collapses the side faces to
edges.

![](6aii.jpg){width=40%}

iii) \c{(2 points) Show how you could achieve this transformation using one or
more matrix multiplication operations. Specify the matrix entries you would
use, and, if using multiple matrices, the order in which they would be
multiplied.}

Actually, I got the numbers above by using the three transformation matrices
in this Python script:

```py
import numpy as np

def ortho_transform(left, right, bottom, top, near, far):
    # step 1: flip z so depth increases away from the camera
    step_1 = np.array([
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, -1, 0],
        [0, 0, 0, 1],
    ])

    # step 2: translate the center of the view volume to the origin
    step_2 = np.array([
        [1, 0, 0, -((left + right) / 2)],
        [0, 1, 0, -((bottom + top) / 2)],
        [0, 0, 1, -((near + far) / 2)],
        [0, 0, 0, 1],
    ])

    # step 3: scale the view volume into the canonical cube
    step_3 = np.array([
        [(2 / (right - left)), 0, 0, 0],
        [0, (2 / (top - bottom)), 0, 0],
        [0, 0, (2 / (far - near)), 0],
        [0, 0, 0, 1],
    ])

    return step_3 @ step_2 @ step_1
```

b. \c{Show how you could project the vertices of this cube to the plane $z =
0$ using an oblique parallel projection in the direction $d = (1, 0,
\sqrt{3})$:}

i) \c{(3 points) Where will the six vertex locations be after such a
projection, omitting the normalization step?}

- $[\begin{matrix}- \sqrt{3} & - \sqrt{3} & - 4 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}- \frac{4 \sqrt{3}}{3} - 1 & -1 & 7\end{matrix}]$
- $[\begin{matrix}- \sqrt{3} & \sqrt{3} & - 4 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}- \frac{4 \sqrt{3}}{3} - 1 & 1 & 7\end{matrix}]$
- $[\begin{matrix}\sqrt{3} & - \sqrt{3} & - 4 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1 - \frac{4 \sqrt{3}}{3} & -1 & 7\end{matrix}]$
- $[\begin{matrix}\sqrt{3} & \sqrt{3} & - 4 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1 - \frac{4 \sqrt{3}}{3} & 1 & 7\end{matrix}]$
- $[\begin{matrix}- \sqrt{3} & - \sqrt{3} & - 2 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}- \frac{2 \sqrt{3}}{3} - 1 & -1 & 5\end{matrix}]$
- $[\begin{matrix}- \sqrt{3} & \sqrt{3} & - 2 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}- \frac{2 \sqrt{3}}{3} - 1 & 1 & 5\end{matrix}]$
- $[\begin{matrix}\sqrt{3} & - \sqrt{3} & - 2 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1 - \frac{2 \sqrt{3}}{3} & -1 & 5\end{matrix}]$
- $[\begin{matrix}\sqrt{3} & \sqrt{3} & - 2 \sqrt{3}\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1 - \frac{2 \sqrt{3}}{3} & 1 & 5\end{matrix}]$

ii) \c{(2 points) Sketch the result, being as accurate as possible and
labeling the colors of each of the visible faces.}

![](6bii.jpg){width=40%}

Excuse the poor sketch, but the point is that applying the shear first
exposes the green face on the left.

iii) \c{(4 points) Show how you could achieve this transformation using one
or more matrix multiplication operations. Specify the matrix entries you
would use, and, if using multiple matrices, the order in which they would be
multiplied.}

It's the same as the orthographic case, except it uses an extra shear matrix
that is applied before any of the other transformations:

```py
import math

import numpy as np

sqrt3 = math.sqrt(3)

def oblique_transform(left, right, bottom, top, near, far):
    # shear x by z / sqrt(3), following the projection direction d = (1, 0, sqrt(3))
    step_0 = np.array([
        [1, 0, (1 / sqrt3), 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
    ])

    # ortho_transform is defined in part (a) above
    M_ortho = ortho_transform(left, right, bottom, top, near, far)

    return M_ortho @ step_0
```

7. \c{Consider the simple scene shown in the image below, where two cubes, one
of height 1 and one of height 2, are both resting on a horizontal groundplane
($y = -\frac{1}{2}$), with the smaller cube’s front face aligned with $z =
-4$ and the larger cube’s front face aligned with $z = -7$.}

a. \c{(5 points) Let the camera location be (0, 0, 0), looking down the $-z$
axis, with the field of view set at $90^\circ$. Determine the points, in the
image plane, to which each of the cube vertices will be projected and sketch
the result to scale. Please clearly label the coordinates to avoid
ambiguity.}

For this part, I reimplemented the perspective rendering transformation using
Python.

```py
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def perspective_matrix(left, right, bottom, top, near, far):
    return np.array([
        [2.0 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2.0 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -(2.0 * far * near) / (far - near)],
        [0, 0, -1, 0],
    ])

def view_matrix(camera_pos, view_dir, up_dir):
    n = unit(-view_dir)
    u = unit(np.cross(up_dir, n))
    v = np.cross(n, u)
    return np.array([
        [u[0], u[1], u[2], -np.dot(camera_pos, u)],
        [v[0], v[1], v[2], -np.dot(camera_pos, v)],
        [n[0], n[1], n[2], -np.dot(camera_pos, n)],
        [0, 0, 0, 1],
    ])
```

The perspective and view matrices are:

$$
PV =
\begin{bmatrix}
1.0 & 0.0 & 0.0 & 0.0 \\
0.0 & 1.0 & 0.0 & 0.0 \\
0.0 & 0.0 & -1.2222 & -2.2222 \\
0.0 & 0.0 & -1.0 & 0.0 \\
\end{bmatrix}
\begin{bmatrix}
1.0 & 0.0 & 0.0 & -0.0 \\
0.0 & 1.0 & 0.0 & -0.0 \\
0.0 & 0.0 & 1.0 & -0.0 \\
0.0 & 0.0 & 0.0 & 1.0 \\
\end{bmatrix}
$$

Then I ran the transformation using the data given in this particular scene:

```py
import math

def compute_view(near, vfov, hfov):
    left = -math.tan(hfov / 2.0) * near
    right = math.tan(hfov / 2.0) * near
    bottom = -math.tan(vfov / 2.0) * near
    top = math.tan(vfov / 2.0) * near
    return left, right, bottom, top

def solve(camera_pos, angle):
    angle_radians = math.radians(angle)
    near = 1
    far = 10
    view_dir = np.array([0, 0, -1])
    up_dir = np.array([0, 1, 0])
    left, right, bottom, top = compute_view(near, angle_radians, angle_radians)
    P = perspective_matrix(left, right, bottom, top, near, far)
    V = view_matrix(camera_pos, view_dir, up_dir)
    return P @ V

camera_pos = np.array([0, 0, 0])
angle = 90
m = np.around(solve(camera_pos, angle), 4)
```

This performed the transformation on the front face of the small cube:

- $[\begin{matrix}0.5 & 0.5 & -4.0\end{matrix}]$ $\rightarrow$ $[\begin{matrix}0.5 & 0.5 & 2.6666\end{matrix}]$
- $[\begin{matrix}0.5 & -0.5 & -4.0\end{matrix}]$ $\rightarrow$ $[\begin{matrix}0.5 & -0.5 & 2.6666\end{matrix}]$
- $[\begin{matrix}-0.5 & -0.5 & -4.0\end{matrix}]$ $\rightarrow$ $[\begin{matrix}-0.5 & -0.5 & 2.6666\end{matrix}]$
- $[\begin{matrix}-0.5 & 0.5 & -4.0\end{matrix}]$ $\rightarrow$ $[\begin{matrix}-0.5 & 0.5 & 2.6666\end{matrix}]$

and this transformation on the front face of the large cube:

- $[\begin{matrix}1.0 & 1.5 & -7.0\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1.0 & 1.5 & 6.3332\end{matrix}]$
- $[\begin{matrix}1.0 & -0.5 & -7.0\end{matrix}]$ $\rightarrow$ $[\begin{matrix}1.0 & -0.5 & 6.3332\end{matrix}]$
- $[\begin{matrix}-1.0 & -0.5 & -7.0\end{matrix}]$ $\rightarrow$ $[\begin{matrix}-1.0 & -0.5 & 6.3332\end{matrix}]$
- $[\begin{matrix}-1.0 & 1.5 & -7.0\end{matrix}]$ $\rightarrow$ $[\begin{matrix}-1.0 & 1.5 & 6.3332\end{matrix}]$

(These are clip-space coordinates before the perspective divide; the
image-plane points follow from dividing $x$ and $y$ by $w = -z_{eye}$, e.g.
$(0.5/4, 0.5/4) = (0.125, 0.125)$ for the small cube's top-right corner.)

Here's a render using Blender:

![](7a.jpg){width=40%}
b. \c{(4 points) How would the image change if the camera were moved forward by
|
|
|
|
|
2 units, leaving all of the other parameter settings the same? Determine the
|
|
|
|
|
points, in the image plane, to which each of the cube vertices would be
|
|
|
|
|
projected in this case and sketch the result to scale. Please clearly label
|
|
|
|
|
the coordinates to avoid ambiguity.}
|
2023-05-01 06:05:04 +00:00
|
|
|
|
|
2023-05-02 07:15:45 +00:00
|
|
|
|
Here is the updated Blender render:
|
2023-05-01 06:05:04 +00:00
|
|
|
|
|
2023-05-02 07:15:45 +00:00
|
|
|
|
![](7b.jpg){width=40%}
|
2023-05-01 06:05:04 +00:00
|
|
|
|
|
2023-05-02 07:15:45 +00:00
|
|
|
|
As you can see, the cubes now take up more of the frame, and in particular
|
|
|
|
|
the red cube has been warped to take up more camera width than the blue.
|
2023-05-01 06:05:04 +00:00
|
|
|
|
|
2023-05-02 07:15:45 +00:00
|
|
|
|
c. \c{(4 points) How would the image change if, instead of moving the camera,
the field of view were reduced by half, to $45^\circ$, leaving all of the
other parameter settings the same? Determine the points, in the image plane,
to which each of the cube vertices would be projected and sketch the result
to scale. Please clearly label the coordinates to avoid ambiguity.}

Here is the updated Blender render:

![](7c.jpg){width=40%}

Because of the reduced FOV, less of the scene is shown, so the cubes take up
more of the view. However, there is less of the perspective foreshortening
effect, so the front cube isn't warped into appearing wider or larger than
the back cube.

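
The difference between moving the camera (parts b) and narrowing the FOV
(part c) can be checked numerically with a pinhole-projection sketch. The
distances and face size below are hypothetical, chosen only to illustrate the
effect, and are not the actual scene parameters:

```python
import math

def half_size_on_image(s, d, fov_deg):
    # Projected half-size of a face of half-width s at distance d,
    # as a fraction of the image half-width (pinhole camera model).
    return s / (d * math.tan(math.radians(fov_deg) / 2))

# Hypothetical setup: front face 5 units away, back face 7 units away,
# both with half-width 1, initial FOV 90 degrees.
front0 = half_size_on_image(1, 5, 90)
back0  = half_size_on_image(1, 7, 90)

# Dolly: move the camera 2 units forward -> the near face grows faster,
# so the size ratio between front and back increases.
front1 = half_size_on_image(1, 3, 90)
back1  = half_size_on_image(1, 5, 90)

# Zoom: halve the FOV instead -> both faces scale by the same factor,
# so the front/back ratio is unchanged.
front2 = half_size_on_image(1, 5, 45)
back2  = half_size_on_image(1, 7, 45)
```

The dolly changes the relative sizes (more foreshortening), while the zoom
magnifies everything uniformly.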
d. (2 points)

- \c{Briefly describe what you notice.}

The cubes are only warped when their distance from the eye changes.

- \c{When looking at two cube faces that are equal sizes in reality (e.g. front
and back) does one appear smaller than the other when one is more distant
from the camera than the other?}

Yes.

- \c{When looking at two objects that are resting on a common horizontal
groundplane, does the groundplane appear to be tilted in the image, so that
the objects that are farther away appear to be resting on a base that is
higher as their distance from the camera increases?}

Yes.

- \c{What changes do you observe in the relative heights, in the image, of the
smaller and larger cubes as the camera position changes?}

When the camera is closer to the cubes, the front cube takes up more of the
frame overall, and so it takes up more height as well. But once the camera is
far away, the big cube has the bigger relative height, since their heights are
no longer warped much relative to each other.

- \c{Is there a point at which the camera could be so close to the smaller cube
(but not touching it) that the larger cube would be completely obscured in
the camera’s image?}

Yes. You can imagine that if the camera were a microscopically small distance
from the front cube (and the $near$ value were also small enough to
accommodate it!), then the front cube would take up the entire image.

- \c{Based on these insights, what can you say about the idea to create an
illusion of "getting closer" to an object in a photographed scene by
zooming in on the image and cropping it so that the object looks bigger?}

It's not entirely accurate, because zooming and cropping preserve the original
perspective: physically moving closer would change the relative sizes and
foreshortening of near and far objects, while cropping only magnifies
everything uniformly.

8. \c{Consider the perspective projection-normalization matrix $P$ which maps
the contents of the viewing frustum into a cube that extends from -1 to 1 in
$x, y, z$ (called normalized device coordinates).}

\c{Suppose you want to define a square, symmetric viewing frustum with a near
clipping plane located 0.5 units in front of the camera, a far clipping plane
located 20 units from the front of the camera, a $60^\circ$ vertical field of
view, and a $60^\circ$ horizontal field of view.}

a. \c{(2 points) What are the entries in $P$?}

The left / right values are found by using the tangent of the half
field-of-view triangle (the $60^\circ$ field of view is the full angle, so the
frustum half-extent uses $30^\circ$): $\tan(30^\circ) =
\frac{\textrm{right}}{0.5}$, so $\textrm{right} = \tan(30^\circ) \times 0.5 =
\boxed{\frac{\sqrt{3}}{6}}$. The same goes for the vertical, which also yields
$\frac{\sqrt{3}}{6}$.

$$
\begin{bmatrix}
\frac{2\times near}{right - left} & 0 & \frac{right + left}{right - left} & 0 \\
0 & \frac{2\times near}{top - bottom} & \frac{top + bottom}{top - bottom} & 0 \\
0 & 0 & -\frac{far + near}{far - near} & -\frac{2\times far\times near}{far - near} \\
0 & 0 & -1 & 0
\end{bmatrix}
$$

$$
= \begin{bmatrix}
\frac{2\times 0.5}{\frac{\sqrt{3}}{6} - (-\frac{\sqrt{3}}{6})} & 0 & \frac{\frac{\sqrt{3}}{6} + (-\frac{\sqrt{3}}{6})}{\frac{\sqrt{3}}{6} - (-\frac{\sqrt{3}}{6})} & 0 \\
0 & \frac{2\times 0.5}{\frac{\sqrt{3}}{6} - (-\frac{\sqrt{3}}{6})} & \frac{\frac{\sqrt{3}}{6} + (-\frac{\sqrt{3}}{6})}{\frac{\sqrt{3}}{6} - (-\frac{\sqrt{3}}{6})} & 0 \\
0 & 0 & -\frac{20 + 0.5}{20 - 0.5} & -\frac{2\times 20\times 0.5}{20 - 0.5} \\
0 & 0 & -1 & 0
\end{bmatrix}
$$

$$
= \boxed{\begin{bmatrix}
\sqrt{3} & 0 & 0 & 0 \\
0 & \sqrt{3} & 0 & 0 \\
0 & 0 & -\frac{41}{39} & -\frac{40}{39} \\
0 & 0 & -1 & 0
\end{bmatrix}}
$$

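
As a sanity check (not part of the exam solution itself), here is a small
Python sketch that builds the symmetric-frustum matrix from the standard
OpenGL-style formula, under the assumption that the $60^\circ$ value is the
full field of view (so the half-extent at the near plane is
$near \cdot \tan(30^\circ)$):

```python
import math

def frustum_matrix(near, far, fov_deg):
    # Symmetric square frustum: right = -left = top = -bottom.
    # Assumption: fov_deg is the FULL field of view, so the half-extent
    # at the near plane is near * tan(fov/2).
    r = near * math.tan(math.radians(fov_deg) / 2)
    return [
        [near / r, 0.0, 0.0, 0.0],
        [0.0, near / r, 0.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

P = frustum_matrix(0.5, 20.0, 60.0)
```

With these parameters the diagonal entries come out to $\sqrt{3}$ and the
third row to $-41/39$ and $-40/39$.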
b. \c{(3 points) How should the matrix $P$ be re-defined if the viewing window
is re-sized to be twice as tall as it is wide?}

c. \c{(3 points) What are the new horizontal and vertical fields of view
after this change has been made?}

## Clipping
9. \c{Consider the triangle whose vertex positions, after the viewport
transformation, lie in the centers of the pixels: $p_0 = (3, 3), p_1 = (9,
5), p_2 = (11, 11)$.}

Starting at $p_0$, the three edge vectors are:

- $v_0 = p_1 - p_0 = (9 - 3, 5 - 3) = (6, 2)$
- $v_1 = p_2 - p_1 = (11 - 9, 11 - 5) = (2, 6)$
- $v_2 = p_0 - p_2 = (3 - 11, 3 - 11) = (-8, -8)$

The first edge vector $e$ would be $(6, 2)$, and the corresponding
(inward-facing) edge normal is that vector rotated $90^\circ$
counter-clockwise: $(-2, 6)$.

a. \c{(6 points) Define the edge equations and tests that would be applied,
during the rasterization process, to each pixel $(x, y)$ within the bounding
rectangle $3 \le x \le 11, 3 \le y \le 11$ to determine if that pixel is
inside the triangle or not.}

b. \c{(3 points) Consider the three pixels $p_4 = (6, 4), p_5 = (7, 7)$, and
$p_6 = (10, 8)$. Which of these would be considered to lie inside the
triangle, according to the methods taught in class?}

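
The edge tests can be sketched in Python as follows. This is an illustrative
sketch, not the official class solution; note in particular that for a point
lying exactly on an edge (edge value $E = 0$) the in/out decision depends on
the fill convention (e.g. a top-left rule):

```python
# Edge-function test for the triangle p0=(3,3), p1=(9,5), p2=(11,11).
# For each directed edge (a -> b), E(x, y) is the z-component of the
# cross product (b - a) x ((x, y) - a). With counter-clockwise winding,
# a pixel is inside when all three values are >= 0; E == 0 means the
# pixel center lies exactly on that edge.
verts = [(3, 3), (9, 5), (11, 11)]

def edge_values(x, y):
    vals = []
    for i in range(3):
        ax, ay = verts[i]
        bx, by = verts[(i + 1) % 3]
        vals.append((bx - ax) * (y - ay) - (by - ay) * (x - ax))
    return vals

# The three query pixels from part (b): each lies exactly on one edge,
# so each has one edge value equal to zero.
e4 = edge_values(6, 4)    # on edge p0 -> p1
e5 = edge_values(7, 7)    # on edge p2 -> p0
e6 = edge_values(10, 8)   # on edge p1 -> p2
```

All three pixels produce one zero edge value, so whether each counts as
"inside" is decided by the tie-breaking rule used in class.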
10. \c{When a model contains many triangles that form a smoothly curving surface
patch, it can be inefficient to separately represent each triangle in the
patch independently as a set of three vertices because memory is wasted when
the same vertex location has to be specified multiple times. A triangle
strip offers a memory-efficient method for representing connected 'strips'
of triangles. For example, in the diagram below, the six vertices v0 .. v5
define four adjacent triangles: (v0, v1, v2), (v2, v1, v3), (v2, v3, v4),
(v4, v3, v5). [Notice that the vertex order is switched in every other
triangle to maintain a consistent counter-clockwise orientation.] Ordinarily
one would need to pass 12 vertex locations to the GPU to represent this
surface patch (three vertices for each triangle), but when the patch is
encoded as a triangle strip, only the six vertices need to be sent and the
geometry they represent will be interpreted using the correspondence pattern
just described.}

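
The correspondence pattern just described can be sketched as a short Python
helper (a sketch for illustration, not code from the course):

```python
def strip_to_triangles(n):
    # Expand an n-vertex triangle strip into index triples, swapping the
    # first two indices of every other triangle so that all triangles
    # keep a consistent counter-clockwise orientation.
    tris = []
    for i in range(n - 2):
        if i % 2 == 0:
            tris.append((i, i + 1, i + 2))
        else:
            tris.append((i + 1, i, i + 2))
    return tris

# Six vertices v0..v5 encode the four triangles from the problem statement.
print(strip_to_triangles(6))
# -> [(0, 1, 2), (2, 1, 3), (2, 3, 4), (4, 3, 5)]
```

In general an $n$-vertex strip encodes $n - 2$ triangles, which is where the
memory savings come from.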
\c{(5 points) When triangle strips are clipped, however, things
can get complicated. Consider the short triangle strip shown below in the
context of a clipping cube.}

- \c{After the six vertices v0 .. v5 are sent to be clipped, what will the
vertex list be after the clipping process has finished?}

![](10a.jpg){width=40%}

- \c{How can this new result be expressed as a triangle strip? (Try to be as
efficient as possible)}

The only way to represent this as a triangle strip is to change around some of
the existing lines. Otherwise, the order of the vertices prevents the exact
same configuration from working.

See below for a working version (only consider the green lines, ignore the
red lines):

![](10b.jpg){width=40%}

- \c{How many triangles will be encoded in the clipped triangle strip?}

Based on the image above, 8 triangles will be used.

## Ray Tracing vs Scan Conversion
|
|
|
|
|
|
|
|
|
|
11. \c{(8 points) List the essential steps in the scan-conversion (raster
|
|
|
|
|
graphics) rendering pipeline, starting with vertex processing and ending
|
|
|
|
|
with the assignment of a color to a pixel in a displayed image. For each
|
|
|
|
|
step briefly describe, in your own words, what is accomplished and how. You
|
|
|
|
|
do not need to include steps that we did not discuss in class, such as
|
|
|
|
|
tessellation (subdividing an input triangle into multiple subtriangles),
|
|
|
|
|
instancing (creating new geometric primitives from existing input vertices),
|
|
|
|
|
but you should not omit any steps that are essential to the process of
|
|
|
|
|
generating an image of a provided list of triangles.}
|
|
|
|
|
|
2023-05-01 06:05:04 +00:00
|
|
|
|
The most essential steps in the scan-conversion process are:

- First, the input is given as a set of geometric primitives, such as
triangles and quads. Spheres aren't really part of the primitive set; I'm not
sure why historically, but I can imagine supporting them would make
operations like clipping monstrously more complex.

- Then, the vertices of the geometries are passed to vertex shaders, which
are custom user-defined programs that run in parallel on graphics hardware
and are applied to each individual vertex. This includes things like
transforming vertices between coordinate systems (world coordinates vs.
camera coordinates vs. normalized device coordinates) so that the rasterizer
is able to go through and process all of the geometries quickly.

- Clipping is done as an optimization to reduce the amount of geometry
outside the viewport that needs to be rendered. There are different clipping
algorithms available, but they all typically break triangles down into
smaller pieces that are entirely contained within the viewport. Culling is
also done to remove faces that aren't visible and thus don't need to be
rendered.

- The rasterizer then goes through and assigns colors to pixels based on
which geometries cover them. This includes determining which geometries are
in front of others, usually using a z-buffer: the color of the closest
fragment at each pixel is stored and picked. During this process, fragment
shaders also influence the output. Fragment shaders are likewise custom
user-defined programs running on graphics hardware, and can modify things
like the output color or the depth value written to the depth buffer based
on various input factors.

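
The z-buffer step in the last bullet can be sketched as follows (a minimal
illustration with a hypothetical 4×4 framebuffer, not the actual pipeline
implementation):

```python
# Minimal z-buffer sketch: each pixel keeps the color of the nearest
# fragment written so far.
W = H = 4
depth = [[float("inf")] * W for _ in range(H)]
color = [[None] * W for _ in range(H)]

def write_fragment(x, y, z, c):
    # Depth test: accept the fragment only if it is closer than what is
    # already stored at this pixel.
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

write_fragment(1, 1, 5.0, "blue")   # far fragment is stored first
write_fragment(1, 1, 2.0, "red")    # nearer fragment overwrites it
write_fragment(1, 1, 9.0, "green")  # farther fragment is rejected
```

Because the test is purely per-pixel, fragments can be processed in any
order and still produce the correct nearest-surface image.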
12. \c{(6 points) Compare and contrast the process of generating an image of a
scene using ray tracing versus scan conversion. Include a discussion of
outcomes that can be achieved using a ray tracing approach but not using a
scan-conversion approach, or vice versa, and explain the reasons why and why
not.}

With ray tracing, the process of generating pixels is very hierarchical. The
basic ray tracer was very simple, but the moment we added even just shadows,
there were recursive rays that needed to be cast, not to mention the
jittering. None of those could be parallelized with the primary ray, because
in order to even figure out where a secondary ray starts, you need to have
already performed a lot of the calculations. (For my ray tracer
implementation, I already parallelized as much as I could using the
work-stealing library `rayon`.) This recursive structure is also what lets
ray tracing produce physically accurate shadows, reflections, and
refractions, which scan conversion can only approximate with techniques like
shadow maps or environment maps.

But with scan conversion, the majority of the transformations are just matrix
transformations over the geometries, which can be performed completely in
parallel with minimal branching (depth testing being the main exception). The
rasterization process is also massively parallelizable. This makes it much
faster on GPUs, which are built to do a lot of independent operations.
