Normal Mapping and Tangent Space

To use a normal map correctly, we need to do the lighting in tangent space. This article records how the tangent is computed and how the TBN matrix is derived, and fills in some details. Finally, ADS lighting is implemented in tangent space, world space, and view space respectively (implementing all three gives a clearer and deeper understanding of each space and makes it easier to compare their trade-offs).

Normal Maps

A normal map stores normal vectors. If the components of a normal are restricted to $[-1, 1]$, then
$$R = \frac{N_x + 1}{2}, G = \frac{N_y + 1}{2}, B = \frac{N_z + 1}{2}$$
A normal map is authored relative to its own local coordinate system; usually the $x$ and $y$ axes correspond to $u$ and $v$, and the $z$ axis is perpendicular to both.
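
Decoding in the shader is simply the inverse mapping, $N = 2\,(R, G, B) - 1$ (which is exactly what the fragment shaders below do). For example, the "flat" normal $(0, 0, 1)$ encodes to

$$(R, G, B) = \left(\frac{0 + 1}{2}, \frac{0 + 1}{2}, \frac{1 + 1}{2}\right) = (0.5, 0.5, 1.0)$$

which is why normal maps look predominantly blue.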

Tangent Space

Computing the Tangent

As shown in the figure, there is a triangle $\triangle P_0P_1P_2$ in world space, where $P_i$ denotes each vertex's world-space position and the world axes are indicated in the lower-left corner. Let the texture coordinates of the three vertices be $(u_0, v_0), (u_1, v_1), (u_2, v_2)$. We now solve for the tangent $T$ and the bitangent $B$ expressed in world coordinates (both assumed to be unit vectors).
Let $\vec{e_1} = \overrightarrow{P_0P_1}, \vec{e_2} = \overrightarrow{P_0P_2}$. Then $\vec{e_1}, \vec{e_2}$ can be written as linear combinations of $T$ and $B$:

$$\vec{e_1} = (u_1 - u_0)T + (v_1 - v_0)B\\ \vec{e_2} = (u_2 - u_0)T + (v_2 - v_0)B$$

Writing $\Delta U_i = u_i - u_0, \Delta V_i = v_i - v_0$, this becomes

$$\begin{bmatrix}e_{1_x} & e_{1_y} & e_{1_z} \\ e_{2_x} & e_{2_y} & e_{2_z} \end{bmatrix} = \begin{bmatrix}\Delta U_1 & \Delta V_1 \\ \Delta U_2 & \Delta V_2 \end{bmatrix}\begin{bmatrix}T_x & T_y & T_z \\ B_x & B_y & B_z\end{bmatrix}$$

and therefore

$$\begin{aligned} \begin{bmatrix}T_x & T_y & T_z \\ B_x & B_y & B_z\end{bmatrix} &= \begin{bmatrix}\Delta U_1 & \Delta V_1 \\ \Delta U_2 & \Delta V_2 \end{bmatrix} ^ {-1} \begin{bmatrix}e_{1_x} & e_{1_y} & e_{1_z} \\ e_{2_x} & e_{2_y} & e_{2_z} \end{bmatrix}\\ & = \frac{1}{\Delta U_1 \Delta V_2 - \Delta U_2 \Delta V_1}\begin{bmatrix}\Delta V_2 & -\Delta V_1 \\ -\Delta U_2 & \Delta U_1\end{bmatrix}\begin{bmatrix}e_{1_x} & e_{1_y} & e_{1_z} \\ e_{2_x} & e_{2_y} & e_{2_z} \end{bmatrix} \end{aligned}$$

In fact, we only need to compute the tangent; the bitangent can be recovered in the shader from the normal and the tangent.
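
As a concrete check, take the triangle $P_0 = (-1, 1, 0)$, $P_1 = (-1, -1, 0)$, $P_2 = (1, -1, 0)$ with texture coordinates $(0, 1), (0, 0), (1, 0)$ (the first triangle of the quad used later). Then $\vec{e_1} = (0, -2, 0)$, $\vec{e_2} = (2, -2, 0)$, $\Delta U_1 = 0, \Delta V_1 = -1, \Delta U_2 = 1, \Delta V_2 = -1$, and the determinant is $\Delta U_1 \Delta V_2 - \Delta U_2 \Delta V_1 = 1$, so

$$T = \Delta V_2\, \vec{e_1} - \Delta V_1\, \vec{e_2} = -(0, -2, 0) + (2, -2, 0) = (2, 0, 0) \Rightarrow T = (1, 0, 0)$$
$$B = -\Delta U_2\, \vec{e_1} + \Delta U_1\, \vec{e_2} = (0, 2, 0) \Rightarrow B = (0, 1, 0)$$

i.e. the tangent points along increasing $u$ and the bitangent along increasing $v$, as expected.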

The TBN Matrix

Once we have the tangent and bitangent, we construct a matrix that transforms from tangent space to world space. This transformation is really just a rotation of the coordinate axes, and the derivation is very similar to that of the view matrix.
Consider the tangent-space basis vectors $(1, 0, 0), (0, 1, 0), (0, 0, 1)$; after the rotation they should map to $T, B, N$ respectively. Writing the transformed basis vectors as columns gives the transformation matrix

$$M = \begin{bmatrix}T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z\end{bmatrix}$$

To transform from world space back to tangent space we need $M^{-1}$. Since a rotation matrix is orthogonal, we can simply take the transpose:

$$M^{-1} = M ^ T = \begin{bmatrix}T_x & T_y & T_z \\ B_x & B_y & B_z \\ N_x & N_y & N_z \end{bmatrix}$$
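
A minimal glm sketch of these two matrices (assuming T, B, and N are already orthonormal; the helper names are made up for illustration):

#include <glm/glm.hpp>

// Columns are T, B, N: transforms tangent-space vectors into world space.
glm::mat3 makeTBN(const glm::vec3 &T, const glm::vec3 &B, const glm::vec3 &N) {
    return glm::mat3(T, B, N);
}

// The TBN matrix is orthogonal, so world -> tangent is just its transpose.
glm::mat3 makeInverseTBN(const glm::vec3 &T, const glm::vec3 &B, const glm::vec3 &N) {
    return glm::transpose(glm::mat3(T, B, N));
}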

Using Normal Maps

In general, we have three options for implementing the lighting:

  • The straightforward option: transform the normal from tangent space to world space and do the lighting in world space
  • The option that makes the lighting easiest: transform the normal from tangent space to view space and do the lighting in view space; since in view space the camera always sits at the origin looking down $-z$, this makes the lighting convenient to implement
  • The usually more efficient option: since there are normally only a few light sources, transform the lights and the camera from world space into tangent space and do the lighting in tangent space

Here we will use OpenGL to light the brick wall below.

brickwall
brickwall_normal (this normal map has been flipped vertically)

Some Preparation

Shader

We write a Shader class to make things easier to manage.

class Shader {
public:
Shader() = default;
Shader(const std::string &vsPath, const std::string &fsPath) {
init(vsPath, fsPath);
}
void init(const std::string &, const std::string &);
void initWithCode(const std::string &, const std::string &);
static std::string getCodeFromFile(const std::string &);
void use() const;
GLint get(const std::string &) const;
void setInt(const std::string &, GLint) const;
void setFloat(const std::string &, GLfloat) const;
void setMat4(const std::string &, const glm::mat4 &) const;
void setVec3(const std::string &name, const glm::vec3 &value) const;

private:
GLuint id = 0;
};
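
A minimal usage sketch (the shader file names are the ones used later in this article):

Shader shader("t2w.vs", "t2w.fs");  // compile and link the two shader sources
shader.use();                       // glUseProgram
shader.setInt("normalMapping", 1);  // set a uniform by name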

Define vertices, triangles, and quads, and implement the tangent computation.

struct Vertex {
glm::vec3 pos;
glm::vec2 uv;
glm::vec3 normal;
glm::vec3 tangent;
};

struct Triangle {
Vertex a, b, c;

void calcTangents() {
glm::vec3 edge1 = b.pos - a.pos;
glm::vec3 edge2 = c.pos - a.pos;
float deltaU1 = b.uv.s - a.uv.s;
float deltaV1 = b.uv.t - a.uv.t;
float deltaU2 = c.uv.s - a.uv.s;
float deltaV2 = c.uv.t - a.uv.t;
float det = 1.0f / (deltaU1 * deltaV2 - deltaU2 * deltaV1);
glm::vec3 tangent(det * (deltaV2 * edge1.x - deltaV1 * edge2.x),
det * (deltaV2 * edge1.y - deltaV1 * edge2.y),
det * (deltaV2 * edge1.z - deltaV1 * edge2.z));
// no need for bitangent, which is calculated in the shader
tangent = glm::normalize(tangent);
a.tangent = b.tangent = c.tangent = tangent;
}
};

struct Quad {
Triangle first;
Triangle second;

void calcTangents() {
first.calcTangents();
second.calcTangents();
}
};
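
A quick sanity check of calcTangents() against the worked example above (this snippet is only illustrative and not part of the program):

// The flat triangle from the worked example; the expected tangent is (1, 0, 0).
Triangle tri{{{-1.0f,  1.0f, 0.0f}, {0.0f, 1.0f}, {0.0f, 0.0f, 1.0f}, {}},
             {{-1.0f, -1.0f, 0.0f}, {0.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, {}},
             {{ 1.0f, -1.0f, 0.0f}, {1.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, {}}};
tri.calcTangents();
// tri.a.tangent, tri.b.tangent and tri.c.tangent are now all (1, 0, 0)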

Setting Up the Quad

We map the brick wall onto a quad (the exact coordinates are in the code below), define its texture coordinates, set its normal to $(0, 0, 1)$, and bind the position, texture coordinates, normal, and tangent to vertex attributes $0, 1, 2, 3$ respectively.

void setupQuad() {
glm::vec3 pos[4]{{-1.0f, 1.0f, 0.0f},
{-1.0f, -1.0f, 0.0f},
{1.0f, -1.0f, 0.0f},
{1.0f, 1.0f, 0.0f}};
glm::vec2 uv[4]{{0.0f, 1.0f}, {0.0f, 0.0f}, {1.0f, 0.0f}, {1.0f, 1.0f}};
glm::vec3 normal(0.0f, 0.0f, 1.0f);
Quad quad{{{pos[0], uv[0], normal, {}},
{pos[1], uv[1], normal, {}},
{pos[2], uv[2], normal, {}}},
{{pos[0], uv[0], normal, {}},
{pos[2], uv[2], normal, {}},
{pos[3], uv[3], normal, {}}}};
quad.calcTangents();
glBufferData(GL_ARRAY_BUFFER, sizeof(Quad), &quad, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<void *>(offsetof(Vertex, pos)));
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<void *>(offsetof(Vertex, uv)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<void *>(offsetof(Vertex, normal)));
glEnableVertexAttribArray(2);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<void *>(offsetof(Vertex, tangent)));
glEnableVertexAttribArray(3);
}

Passing Data to the Shaders

Pass the light, the camera, and the various transformation matrices to the shader.

shader->use();
shader->setInt("normalMapping", normalMapping);
glm::mat4 model(1.0f);
model = glm::rotate(model, rad, glm::vec3(1.0f, 1.0f, 1.0f));
glm::mat4 view(1.0f);
view = glm::translate(view, -cameraPos);
glm::mat4 proj = glm::perspective(glm::radians(45.0f), ASPECT, 0.1f, 100.0f);
shader->setMat4("model", model);
shader->setMat4("view", view);
shader->setMat4("proj", proj);
shader->setVec3("lightPos", lightPos);
shader->setVec3("viewPos", cameraPos);

Tangent Space $\rightarrow$ World Space

The vertex shader is mostly the usual one: transform the vertex position into world coordinates, then all that remains is to compute the normal matrix and transform the normal and the tangent into world coordinates as well.
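
As a brief reminder of why the normal matrix is the inverse transpose: a normal must stay perpendicular to the surface after the transform. If $t$ is a surface tangent and $n$ a normal, then $n^T t = 0$; with $t' = Mt$ we need a matrix $G$ such that $(Gn)^T(Mt) = n^T G^T M t = 0$, which holds when $G^T M = I$, i.e.

$$G = (M^{-1})^T$$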

t2w.vs
#version 430 core

layout (location = 0) in vec3 pos;
layout (location = 1) in vec2 texCoords;
layout (location = 2) in vec3 normal;
layout (location = 3) in vec3 tangent;

uniform mat4 model, view, proj;

out vec3 varyingPos;
out vec2 varyingTexCoords;
out vec3 varyingNormal;
out vec3 varyingTangent;

void main() {
gl_Position = proj * view * model * vec4(pos, 1.0);
varyingPos = vec3(model * vec4(pos, 1.0));
varyingTexCoords = texCoords;
mat4 normMat = transpose(inverse(model));
// normal and tangent are directions, so use w = 0.0 to ignore any translation
varyingNormal = vec3(normMat * vec4(normal, 0.0));
varyingTangent = vec3(model * vec4(tangent, 0.0));
}

For the fragment shader, we first construct the TBN matrix. Note that the tangent first needs to be re-orthogonalized so that it is guaranteed to be perpendicular to the normal, i.e.
$$T' = T - (T \cdot N)N$$
and then normalized. $B$ can then be obtained as the cross product of $N$ and $T$, i.e. $B = N \times T$ (mind the orientation; a right-handed system is used here). This is very similar to building a camera's three axes: we usually specify the camera position, the point it looks at, and an up vector; that up vector is generally not orthogonal to the view (forward) direction, so we first cross the two to get the right axis, then cross the right axis with the forward direction to get the true up axis.
We then sample the normal map, reconstruct the normal, and left-multiply it by the TBN matrix to transform it into world space.
After that, implement the ADS lighting as usual.

t2w.fs
#version 430 core
layout (binding = 0) uniform sampler2D diffuseMap;
layout (binding = 1) uniform sampler2D normalMap;
uniform int normalMapping;
uniform vec3 lightPos, viewPos;
in vec3 varyingPos;
in vec2 varyingTexCoords;
in vec3 varyingNormal;
in vec3 varyingTangent;
out vec4 fragColor;
vec3 calcNormal();

void main() {
vec3 N = calcNormal();
vec3 V = normalize(viewPos - varyingPos);
vec3 L = normalize(lightPos - varyingPos);
vec3 H = normalize(L + V);
vec3 color = texture(diffuseMap, varyingTexCoords).rgb;
vec3 ambient = 0.1 * color;
vec3 diffuse = max(dot(L, N), 0.0) * color;
vec3 specular = vec3(0.2) * pow(max(dot(N, H), 0.0), 32.0);
fragColor = vec4(ambient + diffuse + specular, 1.0);
}

vec3 calcNormal() {
vec3 normal = normalize(varyingNormal);
if (normalMapping == 1) {
vec3 tangent = normalize(varyingTangent);
tangent = normalize(tangent - dot(tangent, normal) * normal);
vec3 bitangent = cross(normal, tangent);
mat3 tbn = mat3(tangent, bitangent, normal);
normal = texture(normalMap, varyingTexCoords).rgb;
normal = normalize(normal * 2.0 - 1.0);
normal = normalize(tbn * normal);
}
return normal;
}

Tangent Space $\rightarrow$ View Space

The benefit of lighting in view space is that the viewer always sits at the origin looking down $-z$; the implementation is very similar to transforming into world space.
Again the vertex shader only needs to transform the vertex position into view space, then compute the normal matrix and transform the normal and the tangent into view space as well.
One detail: when deriving the normal matrix we only assumed some transformation matrix $M$, so letting $M = \text{view} * \text{model}$ gives the corresponding normal matrix $G = (M^{-1})^T$.
Another detail: remember to transform the light into view space; the view-space light direction can also be computed directly in the vertex shader. In general, for better performance when lighting in view space, the light's view-space position would be computed on the CPU when the light is set up, i.e. multiplied by the $\text{view}$ matrix first and passed in via a uniform; likewise the combined model-view matrix ($MV$) and the normal matrix should be precomputed and passed in via uniforms. For simplicity that is not done here.
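
A minimal sketch of that CPU-side precomputation with glm (the uniform names mvMat, normMat, and lightPosView are hypothetical; the shaders in this article do not actually expect them):

glm::mat4 mvMat = view * model;                                       // model-view matrix
glm::mat4 normMat = glm::transpose(glm::inverse(mvMat));              // its normal matrix
glm::vec3 lightPosView = glm::vec3(view * glm::vec4(lightPos, 1.0f)); // light in view space
shader->setMat4("mvMat", mvMat);
shader->setMat4("normMat", normMat);
shader->setVec3("lightPosView", lightPosView);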

t2v.vs
#version 430 core

layout (location = 0) in vec3 pos;
layout (location = 1) in vec2 texCoords;
layout (location = 2) in vec3 normal;
layout (location = 3) in vec3 tangent;

uniform mat4 model, view, proj;
uniform vec3 lightPos;

out vec3 varyingPos;
out vec2 varyingTexCoords;
out vec3 varyingNormal;
out vec3 varyingTangent;
out vec3 varyingLightDir;

void main() {
mat4 mvMat = view * model;
gl_Position = proj * mvMat * vec4(pos, 1.0);
varyingPos = vec3(mvMat * vec4(pos, 1.0));
varyingTexCoords = texCoords;
mat4 normMat = transpose(inverse(mvMat));
varyingNormal = vec3(normMat * vec4(normal, 0.0));  // w = 0.0: directions ignore translation
varyingTangent = normalize(tangent - dot(tangent, normal) * normal);
varyingTangent = vec3(normMat * vec4(varyingTangent, 0.0));
varyingLightDir = vec3(view * vec4(lightPos, 1.0)) - varyingPos;
}

The fragment shader is then very simple: compute the normal exactly as in the previous section (tangent space $\rightarrow$ world space), then implement the ADS lighting as usual.

t2v.fs
#version 430 core
layout (binding = 0) uniform sampler2D diffuseMap;
layout (binding = 1) uniform sampler2D normalMap;
uniform int normalMapping;
in vec3 varyingPos;
in vec3 varyingLightDir;
in vec2 varyingTexCoords;
in vec3 varyingNormal;
in vec3 varyingTangent;
out vec4 fragColor;
vec3 calcNormal();

void main() {
vec3 N = calcNormal();
vec3 V = normalize(-varyingPos);
vec3 L = normalize(varyingLightDir);
vec3 H = normalize(L + V);
vec3 color = texture(diffuseMap, varyingTexCoords).rgb;
vec3 ambient = 0.1 * color;
vec3 diffuse = max(dot(L, N), 0.0) * color;
vec3 specular = vec3(0.2) * pow(max(dot(N, H), 0.0), 32.0);
fragColor = vec4(ambient + diffuse + specular, 1.0);
}

vec3 calcNormal() {
vec3 normal = normalize(varyingNormal);
if (normalMapping == 1) {
vec3 tangent = normalize(varyingTangent);
tangent = normalize(tangent - dot(tangent, normal) * normal);
vec3 bitangent = cross(normal, tangent);
mat3 tbn = mat3(tangent, bitangent, normal);
normal = texture(normalMap, varyingTexCoords).rgb;
normal = normalize(normal * 2.0 - 1.0);
normal = normalize(tbn * normal);
}
return normal;
}

World Space $\rightarrow$ Tangent Space

This is the most common way to implement normal mapping: because there are usually only a few light sources, computing the TBN matrix and doing the transformations in the vertex shader is cheaper than doing the work per fragment.
In the vertex shader, first compute the vertex's world-space position, then compute the normal matrix and transform the normal and tangent into world space. Next, orthogonalize as before to obtain $T, B, N$, build the inverse of the TBN matrix (its transpose), and use it to transform the vertex from world space into tangent space, along with the light and the camera.

w2t.vs
#version 430 core

layout (location = 0) in vec3 pos;
layout (location = 1) in vec2 texCoords;
layout (location = 2) in vec3 normal;
layout (location = 3) in vec3 tangent;

uniform mat4 model, view, proj;
uniform vec3 lightPos, viewPos;

out vec3 varyingPos;
out vec2 varyingTexCoords;
out vec3 varyingLightPos;
out vec3 varyingViewPos;

void main() {
gl_Position = proj * view * model * vec4(pos, 1.0);
mat4 normMat = transpose(inverse(model));
vec3 N = normalize(vec3(normMat * vec4(normal, 0.0)));  // w = 0.0: directions ignore translation
vec3 T = normalize(vec3(normMat * vec4(tangent, 0.0)));
T = normalize(T - dot(T, N) * N);
vec3 B = cross(N, T);
mat3 tbnInv = transpose(mat3(T, B, N));
vec3 worldPos = vec3(model * vec4(pos, 1.0));
varyingPos = tbnInv * worldPos;
varyingTexCoords = texCoords;
varyingLightPos = tbnInv * lightPos;
varyingViewPos = tbnInv * viewPos;
}

Since the lighting is done in tangent space, the fragment shader is very simple: reconstruct the normal directly from the normal map, then implement the ADS lighting as usual.

w2t.fs
#version 430 core

layout (binding = 0) uniform sampler2D diffuseMap;
layout (binding = 1) uniform sampler2D normalMap;
uniform int normalMapping;

in vec3 varyingPos;
in vec2 varyingTexCoords;
in vec3 varyingLightPos;
in vec3 varyingViewPos;
out vec4 fragColor;
vec3 calcNormal();

void main() {
vec3 N = calcNormal();
vec3 V = normalize(varyingViewPos - varyingPos);
vec3 L = normalize(varyingLightPos - varyingPos);
vec3 H = normalize(L + V);
vec3 color = texture(diffuseMap, varyingTexCoords).rgb;
vec3 ambient = 0.1 * color;
vec3 diffuse = max(dot(L, N), 0.0) * color;
vec3 specular = vec3(0.2) * pow(max(dot(N, H), 0.0), 32.0);
fragColor = vec4(ambient + diffuse + specular, 1.0);
}

vec3 calcNormal() {
vec3 normal = vec3(0.0, 0.0, 1.0);
if (normalMapping == 1) {
normal = texture(normalMap, varyingTexCoords).rgb;
normal = normalize(normal * 2.0 - 1.0);
}
return normal;
}

Full Code

Since this is just a simple test, there is little encapsulation and everything lives in one file; the shaders have all been given above. Press Q, W, E to switch between them (though in theory all three shaders produce the same result), J / K to enable/disable rotation, and N / M to toggle the normal map on/off.

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#define STB_IMAGE_IMPLEMENTATION
#include <stb_image.h>

#include <iostream>
#include <string>
#include <sstream>
#include <fstream>

class Shader {
public:
Shader() = default;
Shader(const std::string &vsPath, const std::string &fsPath) {
init(vsPath, fsPath);
}
void init(const std::string &, const std::string &);
void initWithCode(const std::string &, const std::string &);
static std::string getCodeFromFile(const std::string &);
void use() const;
GLint get(const std::string &) const;
void setInt(const std::string &, GLint) const;
void setFloat(const std::string &, GLfloat) const;
void setMat4(const std::string &, const glm::mat4 &) const;
void setVec3(const std::string &name, const glm::vec3 &value) const;

private:
GLuint id = 0;
};

GLuint loadTexture(const std::string &);
void processInput(GLFWwindow *, Shader *&);
void setupQuad();
void render(float, Shader *);

const int SRC_WIDTH = 800 * 2;
const int SRC_HEIGHT = 600 * 2;
const float ASPECT = static_cast<float>(SRC_WIDTH) / SRC_HEIGHT;

struct Vertex {
glm::vec3 pos;
glm::vec2 uv;
glm::vec3 normal;
glm::vec3 tangent;
};

struct Triangle {
Vertex a, b, c;

void calcTangents() {
glm::vec3 edge1 = b.pos - a.pos;
glm::vec3 edge2 = c.pos - a.pos;
float deltaU1 = b.uv.s - a.uv.s;
float deltaV1 = b.uv.t - a.uv.t;
float deltaU2 = c.uv.s - a.uv.s;
float deltaV2 = c.uv.t - a.uv.t;
float det = 1.0f / (deltaU1 * deltaV2 - deltaU2 * deltaV1);
glm::vec3 tangent(det * (deltaV2 * edge1.x - deltaV1 * edge2.x),
det * (deltaV2 * edge1.y - deltaV1 * edge2.y),
det * (deltaV2 * edge1.z - deltaV1 * edge2.z));
// no need for bitangent, which is calculated in the shader
tangent = glm::normalize(tangent);
a.tangent = b.tangent = c.tangent = tangent;
}
};

struct Quad {
Triangle first;
Triangle second;

void calcTangents() {
first.calcTangents();
second.calcTangents();
}
};

glm::vec3 lightPos(0.5f, 1.0f, 0.3f);
glm::vec3 cameraPos(0.0f, 0.0f, 3.0f);
Shader t2wShader, t2vShader, w2tShader;
int normalMapping = 1;
bool rotating = false;
GLuint diffuseMap, normalMap;

int main() {
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

GLFWwindow *window =
glfwCreateWindow(SRC_WIDTH, SRC_HEIGHT, "normal", nullptr, nullptr);
if (!window) {
std::cerr << "failed to create window" << std::endl;
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);

glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

if (!gladLoadGL()) {
std::cerr << "failed to load glad" << std::endl;
return -1;
}
glViewport(0, 0, SRC_WIDTH, SRC_HEIGHT);
glEnable(GL_DEPTH_TEST);
t2wShader.init("t2w.vs", "t2w.fs");
t2vShader.init("t2v.vs", "t2v.fs");
w2tShader.init("w2t.vs", "w2t.fs");
Shader *shader = &t2wShader;
diffuseMap = loadTexture("brickwall.jpg");
normalMap = loadTexture("brickwall_normal.jpg");

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
setupQuad();

while (!glfwWindowShouldClose(window)) {
processInput(window, shader);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

render(glfwGetTime(), shader);

glfwSwapBuffers(window);
glfwPollEvents();
}
glfwTerminate();
}

void setupQuad() {
glm::vec3 pos[4]{{-1.0f, 1.0f, 0.0f},
{-1.0f, -1.0f, 0.0f},
{1.0f, -1.0f, 0.0f},
{1.0f, 1.0f, 0.0f}};
glm::vec2 uv[4]{{0.0f, 1.0f}, {0.0f, 0.0f}, {1.0f, 0.0f}, {1.0f, 1.0f}};
glm::vec3 normal(0.0f, 0.0f, 1.0f);
Quad quad{{{pos[0], uv[0], normal, {}},
{pos[1], uv[1], normal, {}},
{pos[2], uv[2], normal, {}}},
{{pos[0], uv[0], normal, {}},
{pos[2], uv[2], normal, {}},
{pos[3], uv[3], normal, {}}}};
quad.calcTangents();
glBufferData(GL_ARRAY_BUFFER, sizeof(Quad), &quad, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<void *>(offsetof(Vertex, pos)));
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<void *>(offsetof(Vertex, uv)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<void *>(offsetof(Vertex, normal)));
glEnableVertexAttribArray(2);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<void *>(offsetof(Vertex, tangent)));
glEnableVertexAttribArray(3);
}

void processInput(GLFWwindow *window, Shader *&shader) {
if (glfwGetKey(window, GLFW_KEY_ESCAPE))
glfwSetWindowShouldClose(window, true);
if (glfwGetKey(window, GLFW_KEY_Q)) shader = &t2wShader;
if (glfwGetKey(window, GLFW_KEY_W)) shader = &t2vShader;
if (glfwGetKey(window, GLFW_KEY_E)) shader = &w2tShader;
if (glfwGetKey(window, GLFW_KEY_N)) normalMapping = 1;
if (glfwGetKey(window, GLFW_KEY_M)) normalMapping = 0;
if (glfwGetKey(window, GLFW_KEY_J)) rotating = true;
if (glfwGetKey(window, GLFW_KEY_K)) rotating = false;
}

void render(float currentTime, Shader *shader) {
static float rad = 0.0f, step = 0.001f;
if (rotating) {
if (rad > 0.75f) step = -step;
if (rad < -0.75f) step = -step;
rad += step;
} else {
rad = 0.0f;
}
shader->use();
shader->setInt("normalMapping", normalMapping);
glm::mat4 model(1.0f);
model = glm::rotate(model, rad, glm::vec3(1.0f, 1.0f, 1.0f));
glm::mat4 view(1.0f);
view = glm::translate(view, -cameraPos);
glm::mat4 proj = glm::perspective(glm::radians(45.0f), ASPECT, 0.1f, 100.0f);
shader->setMat4("model", model);
shader->setMat4("view", view);
shader->setMat4("proj", proj);
shader->setVec3("lightPos", lightPos);
shader->setVec3("viewPos", cameraPos);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseMap);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalMap);
glDrawArrays(GL_TRIANGLES, 0, 6);
}

GLuint loadTexture(const std::string &path) {
GLuint id;
glGenTextures(1, &id);
int width, height, nrComponents;
stbi_uc *data = stbi_load(path.c_str(), &width, &height, &nrComponents, 0);
if (data) {
GLenum format = GL_RGB;  // fallback for unexpected channel counts
if (nrComponents == 1)
format = GL_RED;
else if (nrComponents == 3)
format = GL_RGB;
else if (nrComponents == 4)
format = GL_RGBA;
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format,
GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

stbi_image_free(data);
} else {
std::cerr << "failed to load texture " << path << std::endl;
stbi_image_free(data);
}
return id;
}

void Shader::init(const std::string &vs, const std::string &fs) {
initWithCode(getCodeFromFile(vs), getCodeFromFile(fs));
}

void Shader::initWithCode(const std::string &vs, const std::string &fs) {
GLuint vertexShader, fragmentShader;
vertexShader = glCreateShader(GL_VERTEX_SHADER);
const GLchar *vsCode = vs.c_str();
glShaderSource(vertexShader, 1, &vsCode, nullptr);
glCompileShader(vertexShader);
int success;
char infoLog[1024];
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &success);
if (!success) {
glGetShaderInfoLog(vertexShader, sizeof(infoLog), nullptr, infoLog);
std::cerr << infoLog << std::endl;
}

fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
const GLchar *fsCode = fs.c_str();
glShaderSource(fragmentShader, 1, &fsCode, nullptr);
glCompileShader(fragmentShader);
glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &success);
if (!success) {
glGetShaderInfoLog(fragmentShader, sizeof(infoLog), nullptr, infoLog);
std::cerr << infoLog << std::endl;
}
id = glCreateProgram();
glAttachShader(id, vertexShader);
glAttachShader(id, fragmentShader);
glLinkProgram(id);
glGetProgramiv(id, GL_LINK_STATUS, &success);
if (!success) {
glGetProgramInfoLog(id, sizeof(infoLog), nullptr, infoLog);
std::cerr << infoLog << std::endl;
}
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
}

std::string Shader::getCodeFromFile(const std::string &path) {
std::string code;
std::ifstream file;
file.exceptions(std::ifstream::failbit | std::ifstream::badbit);
try {
file.open(path);
std::stringstream stream;
stream << file.rdbuf();
file.close();
code = stream.str();
} catch (std::ifstream::failure &e) {
std::cerr << "File Error" << std::endl << e.what() << std::endl;
}
return code;
}

void Shader::use() const { glUseProgram(id); }
GLint Shader::get(const std::string &name) const {
return glGetUniformLocation(id, name.c_str());
}
void Shader::setInt(const std::string &name, GLint value) const {
glUniform1i(get(name), value);
}

void Shader::setFloat(const std::string &name, GLfloat value) const {
glUniform1f(get(name), value);
}

void Shader::setMat4(const std::string &name, const glm::mat4 &value) const {
glUniformMatrix4fv(get(name), 1, GL_FALSE, glm::value_ptr(value));
}

void Shader::setVec3(const std::string &name, const glm::vec3 &value) const {
glUniform3fv(get(name), 1, glm::value_ptr(value));
}

