<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Tianliang</title>
<link href="/atom.xml" rel="self"/>
<link href="https://www.starlg.cn/"/>
<updated>2022-05-30T14:16:36.000Z</updated>
<id>https://www.starlg.cn/</id>
<author>
<name>Tianliang Zhang</name>
</author>
<generator uri="http://hexo.io/">Hexo</generator>
<entry>
<title>K-Net: Towards Unified Image Segmentation</title>
<link href="https://www.starlg.cn/2022/05/20/K-Net/"/>
<id>https://www.starlg.cn/2022/05/20/K-Net/</id>
<published>2022-05-20T02:32:23.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p>Paper: [NeurIPS 2021] K-Net: Towards Unified Image Segmentation</p><p>Arxiv: https://arxiv.org/abs/2106.14855</p><p>Github: https://github.com/ZwwWayne/K-Net/</p><h2 id="介绍">Introduction</h2><p>Although semantic, instance, and panoptic segmentation share underlying connections, they are tackled with distinct, task-specific frameworks. This work provides a unified, simple, and effective framework for all of them, named K-Net. It segments both instances and semantic categories through a set of learnable kernels, where each kernel is responsible for generating the mask of either a potential instance or a stuff class. To address the difficulty of distinguishing instances, the paper proposes a kernel update strategy that enables each kernel to be dynamic and conditioned on its meaningful group in the input image. K-Net can be trained end-to-end with bipartite matching, and both its training and inference are NMS-free and box-free.</p><img src="/2022/05/20/K-Net/NeurIPS2021_K-Net_Figure1.png" title="Figure 1. Semantic segmentation (a), instance segmentation (b), and panoptic segmentation (c) are unified by one generic framework in this paper"><p>In conventional semantic segmentation, each convolutional kernel corresponds to one semantic class. Our framework extends this concept so that each kernel corresponds to either a potential instance or a semantic class.</p><p>In this paper, we make the first attempt to formulate a unified and effective framework that bridges the seemingly different image segmentation tasks (semantic, instance, and panoptic) through the notion of kernels. Our method is called K-Net ("K" stands for kernels). It starts from a set of randomly initialized convolutional kernels and learns them according to the segmentation targets at hand, namely semantic kernels for semantic categories and instance kernels for instance identities (Figure 1b). A simple combination of semantic kernels and instance kernels naturally performs panoptic segmentation (Figure 1c). In the forward pass, the kernels convolve the image features to obtain the corresponding segmentation predictions.</p><p>The versatility and simplicity of K-Net come from two designs. First, K-Net is formulated so that it dynamically updates the kernels, conditioning them on their activations on the image. This content-aware mechanism is crucial for ensuring that each kernel, especially an instance kernel, responds accurately to a different object in the image. By iteratively applying this adaptive kernel update strategy, K-Net significantly improves the discriminative power of the kernels and boosts the final segmentation performance. Notably, the strategy applies universally to the kernels of all segmentation tasks.</p><p>Second, inspired by recent advances in object detection with DETR, we adopt a bipartite matching strategy to assign a learning target to each kernel. This training approach is advantageous over conventional strategies because it builds a one-to-one mapping between the kernels and the instances in an image, thus resolving the problem of handling a varying number of instances. Moreover, it is purely mask-driven and does not involve boxes, so K-Net is naturally NMS-free and box-free, which is appealing for real-time applications.</p><h2 id="方法">Method</h2><h3 id="k-net">K-Net</h3><p>Despite the different definitions of "meaningful groups", all segmentation tasks essentially assign each pixel to one of a set of predefined meaningful groups. Since the number of groups in an image is typically assumed to be finite, we can set the maximum number of groups for a segmentation task to N. For example, there are N predefined semantic classes for semantic segmentation, or at most N objects in an image for instance segmentation. For panoptic segmentation, N is the total number of stuff classes and objects in an image. Therefore, we can use N kernels to partition an image into N groups, where each kernel is responsible for finding the pixels belonging to its corresponding group. Concretely, given an input feature map <span class="math inline">\(F \in R^{B \times C \times H \times W}\)</span> of a batch of B images produced by a deep neural network, we only need N kernels <span class="math inline">\(K \in R^{N \times C}\)</span> to convolve with <span class="math inline">\(F\)</span> to obtain the corresponding segmentation prediction <span class="math inline">\(M \in R^{B \times N \times H \times W}\)</span> as</p><p><span class="math display">\[M = \sigma (K \ast F),\]</span></p><p>where C, H, and W are the number of channels, height, and width of the feature map, respectively. If we want to assign each pixel to only one kernel (as is typical in semantic segmentation), the activation function <span class="math inline">\(\sigma\)</span> can be the softmax function. If we allow one pixel to belong to multiple masks, the sigmoid function can also serve as the activation, with a threshold (e.g., 0.5) applied to the activation maps (as is typical in instance segmentation), which produces N binary masks.</p><p>This formulation has dominated semantic segmentation for years. In semantic segmentation, each kernel is responsible for finding all pixels of a similar class in the image, while in instance segmentation each pixel group corresponds to one object. However, previous methods separate instances through extra steps rather than through the kernels themselves.</p><p>This paper is the first study of whether the kernel concept in semantic segmentation is equally applicable to instance segmentation and, more generally, to panoptic segmentation. To separate instances by kernels, each kernel in K-Net segments at most one object in the image (Figure 1b). In this way, K-Net distinguishes instances and performs segmentation simultaneously, achieving instance segmentation in a single pass without extra steps. For simplicity, we refer to these kernels as semantic kernels and instance kernels for semantic and instance segmentation, respectively. A simple combination of instance kernels and semantic kernels naturally performs panoptic segmentation, assigning each pixel either an instance ID or a class of stuff (Figure 1c).</p><h3 id="group-aware-kernels">Group-Aware Kernels</h3><p>Despite its simplicity, separating instances directly by kernels is non-trivial, because instance kernels must discriminate objects that vary in scale and appearance both within and across images. Without a common, explicit characteristic such as a semantic category, instance kernels need stronger discriminative power than static kernels.</p><img src="/2022/05/20/K-Net/NeurIPS2021_K-Net_Figure2.png" title="Figure 2. Kernel Update Head"><p>To overcome this challenge, we condition each kernel on its corresponding pixel group through a kernel update head, as shown in Figure 2. The kernel update head <span class="math inline">\(f_i\)</span> contains three key steps: group feature assembling, adaptive kernel update, and kernel interaction. First, the group feature <span class="math inline">\(F^K\)</span> of each pixel group is assembled using the mask prediction <span class="math inline">\(M_{i-1}\)</span>. Since it is the context of each group that distinguishes the groups from one another, <span class="math inline">\(F^K\)</span> is used to adaptively update its corresponding kernel <span class="math inline">\(K_{i-1}\)</span>. After that, the kernels interact with one another to comprehensively model the image context. Finally, the obtained group-aware kernels <span class="math inline">\(K_i\)</span> convolve the feature map <span class="math inline">\(F\)</span> to obtain a more accurate mask prediction <span class="math inline">\(M_i\)</span>. As shown in Figure 3, this process can be performed iteratively, because a finer partition generally reduces the noise in the group features and thus produces more discriminative kernels. The process is formulated as</p><p><span class="math display">\[K_i, M_i = f_i(M_{i-1},K_{i-1},F).\]</span></p><img src="/2022/05/20/K-Net/NeurIPS2021_K-Net_Figure3.png" title="Figure 3. K-Net for panoptic segmentation."><p>A set of learned kernels first convolve with the feature map <span class="math inline">\(F\)</span> to predict the masks <span class="math inline">\(M_0\)</span>. The kernel update head then takes the mask prediction <span class="math inline">\(M_0\)</span>, the learned kernels <span class="math inline">\(K_0\)</span>, and the feature map <span class="math inline">\(F\)</span> as inputs and produces class predictions, group-aware (dynamic) kernels, and mask predictions. The resulting mask predictions, dynamic kernels, and the feature map <span class="math inline">\(F\)</span> are sent to the next kernel update head. This process is performed iteratively to progressively refine the kernels and the mask predictions.</p><p>Notably, the kernel update head with iterative refinement is generic, as it does not rely on the characteristics of the kernels. It can therefore enhance not only instance kernels but also semantic kernels.</p><h2 id="实验">Experiments</h2><img src="/2022/05/20/K-Net/NeurIPS2021_K-Net_Table1.png" title="Table 1. Comparison with state-of-the-art panoptic segmentation methods on the COCO dataset"><img src="/2022/05/20/K-Net/NeurIPS2021_K-Net_Table2.png" title="Table 2. Comparison with state-of-the-art instance segmentation methods on the COCO dataset"><img src="/2022/05/20/K-Net/NeurIPS2021_K-Net_Table3.png" title="Table 3. Results on the ADE20K semantic segmentation dataset"><img src="/2022/05/20/K-Net/NeurIPS2021_K-Net_Table4.png" title="Table 4. Ablation studies of K-Net on the instance segmentation task"><p>Table 4a shows that adaptive kernel update and kernel interaction are both necessary for high performance. Table 4b shows that positional information is beneficial, with positional encoding slightly better than coordinate convolution; combining the two brings no further gain, so only positional encoding is used in this framework. Table 4c shows that performance saturates at around the 4th round of kernel update. Finally, in the experiment on the number of instance kernels, increasing N improves performance, but the gain diminishes when N becomes large.</p><h4 id="可视化分析">Visualization Analysis</h4><img src="/2022/05/20/K-Net/NeurIPS2021_K-Net_Figure4.png" title="Figure 4. Visualization analysis of kernels and their masks"><p>Overall distribution of kernels. We carefully analyze the properties of the instance kernels learned in K-Net by examining the average mask activations of the 100 instance kernels over the 5000 images of the val split. All masks are resized to a common <span class="math inline">\(200 \times 200\)</span> resolution for analysis. As shown in Figure 4a, the learned kernels are meaningful: different kernels focus on different regions of the image and on objects of different sizes, while each kernel attends to objects of similar sizes at nearby locations in the image.</p><p>Masks refined by kernel update. We further analyze how the mask predictions of the kernels are improved by the kernel update in Figure 4b. Here we use K-Net for panoptic segmentation to examine both semantic and instance masks. The masks produced by the static kernels are incomplete; for example, the masks of the river and the building are missing. After a kernel update, the segmentation masks fully cover the content, although their boundaries are still unsatisfactory. After more kernel updates, the boundaries are refined, and the classification confidence of the instances also increases.</p><h2 id="总结">Conclusion</h2><p>This paper explores instance kernels that can learn to separate instances during segmentation. Consequently, the extra components that previously assisted instance segmentation can be replaced by instance kernels, including bounding boxes, embedding generation, and hand-crafted post-processing such as NMS, kernel fusion, and pixel grouping. This attempt, for the first time, allows different image segmentation tasks to be handled by a unified framework. The framework, named K-Net, first partitions an image into different groups with learned static kernels, and then iteratively refines these kernels and the image partition with features assembled from the partitioned groups. K-Net achieves new state-of-the-art single-model performance on panoptic and semantic segmentation benchmarks, and surpasses the well-established Cascade Mask R-CNN among recent instance segmentation frameworks with the fastest inference speed. We hope K-Net and our analysis will pave the way for future research on unified image segmentation frameworks.</p>]]></content>
<summary type="html">
K-Net unifies the semantic, instance, and panoptic segmentation frameworks. It segments instances and semantic categories through a set of learnable kernels, where each kernel is responsible for generating the mask of either a potential instance or a stuff class. To address the difficulty of distinguishing instances, the paper proposes a kernel update strategy that enables each kernel to be dynamic and conditioned on its meaningful group in the input image.
</summary>
<category term="Instance Segmentation" scheme="https://www.starlg.cn/tags/Instance-Segmentation/"/>
</entry>
<entry>
<title>SOTR: Segmenting Objects with Transformers [ICCV 2021]</title>
<link href="https://www.starlg.cn/2022/05/19/SOTR/"/>
<id>https://www.starlg.cn/2022/05/19/SOTR/</id>
<published>2022-05-19T02:39:42.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p>Paper: [ICCV 2021] SOTR: Segmenting Objects with Transformers</p><p>Arxiv: https://arxiv.org/abs/2108.06747</p><p>Github: https://github.com/easton-cau/SOTR</p><h2 id="介绍">Introduction</h2><p>Recently, transformer-based models have shown impressive performance on vision tasks, even surpassing convolutional neural networks. In this work, the authors propose a novel, flexible, and effective transformer-based model for high-quality instance segmentation. The proposed model, Segmenting Objects with TRansformers (SOTR), simplifies the segmentation pipeline into two parallel subtasks: (1) predicting per-instance categories with a transformer, and (2) dynamically generating segmentation masks with a multi-level upsampling module. SOTR can effectively extract lower-level feature representations and capture long-range context dependencies through the feature pyramid network (FPN) and the twin transformer, respectively. Meanwhile, compared with the original transformer, the proposed twin transformer is efficient in both time and memory, since only a row attention and a column attention are involved to encode pixels. Moreover, SOTR is easy to combine with various CNN backbones and transformer model variants, yielding significant improvements in segmentation accuracy and training convergence.</p><img src="/2022/05/19/SOTR/ICCV21_SOTR_Figure1.png" title="Figure 1. Selected outputs of SOTR. Both large objects and objects with complex shapes are segmented well."><p>Modern instance segmentation approaches are typically built upon CNNs and follow the detect-then-segment paradigm, which consists of a detector to identify and localize all objects and a mask branch to generate segmentation masks. The success of this line of methods owes to the merits of translation equivariance and location invariance, but it faces the following obstacles: (1) due to the limited receptive field, CNNs relatively lack feature coherence in high-level visual semantic information to associate instances, leading to sub-optimal results on large objects; (2) both the segmentation quality and the inference speed rely heavily on the object detector, resulting in inferior performance in complex scenes.</p><p>To overcome these drawbacks, several bottom-up strategies have been proposed. The main shortcomings of bottom-up methods are unstable clustering on datasets with diverse scenes and poor generalization ability. SOTR effectively learns position-sensitive features and dynamically generates instance segmentation results, requiring no post-processing aggregation and not being restricted by bounding-box locations and scales. We propose SOTR, an innovative model that elegantly combines the advantages of CNNs and transformers.</p><img src="/2022/05/19/SOTR/ICCV21_SOTR_Figure2.png" title="Figure 2. SOTR architecture. SOTR is built on top of a simple FPN backbone with minimal modifications. The model flattens the FPN features P2-P6, supplements them with positional embeddings, and feeds them into the transformer. Two heads are added after the transformer to predict object categories and generate dynamic convolution kernels. The multi-level upsampling module takes the P2-P4 features from the FPN and the P5 feature from the transformer as inputs and generates the final segmentation results through a dynamic convolution operation."><img src="/2022/05/19/SOTR/ICCV21_SOTR_Figure3.png" title="Figure 3. Three different transformer layer designs. (a) The original transformer encoder. To better model long-range dependencies and improve computational efficiency, we introduce different transformer layer designs: (b) the pure twin transformer layer and (c) the hybrid twin transformer layer. Both are based on our designed twin attention, which consists of column attention followed by row attention."><h2 id="方法">Method</h2><h3 id="transformer">Transformer</h3><p><strong>Twin attention.</strong> Self-attention is the key component of transformer models. It inherently captures full-image context between every pair of elements in the input sequence and learns long-range interactions. However, self-attention has quadratic time and memory complexity, incurring high computational cost on high-dimensional sequences such as images and hindering model scalability in different settings.</p><p>To address this issue, this paper proposes the twin attention mechanism, which simplifies the attention matrix with a sparse representation. The strategy mainly restricts the receptive field to a designed block pattern with a fixed stride. It first computes attention within each column while keeping elements in different columns independent, which aggregates context information between elements on the horizontal scale (Figure 3(1)). Then, similar attention is performed within each row to fully exploit feature interactions on the vertical scale (Figure 3(2)). The attentions of the two scales are connected one after the other, so that the final one has a global receptive field covering information along both dimensions.</p><p>Given the i-th FPN feature map <span class="math inline">\(F_i \in \mathbb{R}^{H \times W \times C}\)</span>, SOTR first splits it into <span class="math inline">\({N \ast N}\)</span> patches <span class="math inline">\({P_i \in \mathbb{R}^{N \times N \times C}}\)</span> and then stacks them into fixed blocks along the vertical and horizontal directions. Position embeddings are added to these blocks to retain positional information, which means the column and row position embedding spaces are <span class="math inline">\(1 \ast N \ast C\)</span> and <span class="math inline">\(N \ast 1 \ast C\)</span>. Both attention layers adopt the multi-head attention mechanism. To facilitate multi-layer connection and post-processing, all sub-layers in the twin attention produce <span class="math inline">\(N \times N \times C\)</span> outputs. The twin attention mechanism effectively reduces the memory and computational complexity from the standard <span class="math inline">\(O((H \times W)^2)\)</span> to <span class="math inline">\(O(H \times W^2 + W \times H^2)\)</span>.</p><p><strong>Transformer layer.</strong> In this section, we introduce three different encoder-based transformer layers as our basic building blocks, as shown in Figure 3. The original transformer layer resembles the encoder used in NLP (Figure 3a) and consists of two parts: (1) a multi-head self-attention mechanism after a layer normalization, and (2) a multi-layer perceptron after a layer normalization. In addition, residual connections link the two parts. Finally, a multi-dimensional sequence feature can be obtained as the output of K serially connected transformer layers for subsequent predictions by different functional heads.</p><p>To achieve the best trade-off between computational cost and feature-extraction effectiveness, we follow the original transformer layer design and only replace the multi-head attention with twin attention in the pure twin transformer layer, as shown in Figure 3b. To further boost the performance of the twin transformer, we also design the hybrid twin transformer layer shown in Figure 3c, which connects two <span class="math inline">\(3 \times 3\)</span> convolution layers through a Leaky ReLU layer to each twin attention module. The added convolution operations are assumed to be an effective complement to the attention mechanism, better capturing local information and enhancing the feature representation.</p><p><strong>Functional heads.</strong> The feature maps from the transformer module are fed into different functional heads for subsequent predictions. The class head consists of a single linear layer that outputs an <span class="math inline">\(N \times N \times M\)</span> classification result, where <span class="math inline">\(M\)</span> is the number of classes. Since each patch assigns one category only to the single object whose center falls into the patch, as in YOLO, we employ multi-level prediction and share the heads across different feature levels to further improve the model's performance and efficiency on objects of different scales. The kernel head also consists of a linear layer, in parallel with the class head, and outputs an <span class="math inline">\({N \times N \times D}\)</span> tensor for subsequent mask generation, where the tensor denotes <span class="math inline">\(N \times N\)</span> convolution kernels with D parameters each. During training, focal loss is applied to the classification, while all supervision for these convolution kernels comes from the final mask loss.</p><h3 id="mask">Mask</h3><p>A straightforward way to construct an instance-aware and position-sensitive mask representation is to make predictions on feature maps of every scale, but this increases time and resource costs. Inspired by Panoptic FPN, we design a multi-level upsampling module that merges the features from each FPN level and from the transformer into a unified mask feature. First, the relatively low-resolution feature map P5 with positional information is obtained from the transformer module and combined with P2-P4 from the FPN for fusion. For the feature map at each scale, <span class="math inline">\(3 \times 3\)</span> convolution, group norm, and ReLU operations are performed. Then P3-P5 are bilinearly upsampled 2x, 4x, and 8x, respectively, to the (H/4, W/4) resolution. Finally, after the processed P2-P5 are summed, point-wise convolution and upsampling are performed to create the final unified <span class="math inline">\(H \times W\)</span> feature map.</p><p>For instance mask prediction, SOTR generates a mask for each patch by performing a dynamic convolution operation on the unified feature map above. Given the predicted convolution kernels <span class="math inline">\(k \in \mathbb{R}^{N \times N \times D}\)</span> from the kernel head, each kernel is responsible for generating the instance mask of its corresponding patch. The operation can be expressed as follows:</p><p><span class="math display">\[Z^{H \times W \times N^2} = F^{H \times W \times C} \ast K^{N \times N \times D}\]</span></p><p>where <span class="math inline">\(\ast\)</span> denotes the convolution operation and <span class="math inline">\(Z\)</span> is the final generated mask with dimensions <span class="math inline">\(H \times W \times N^2\)</span>. The value of <span class="math inline">\(D\)</span> depends on the shape of the convolution kernels; that is, D equals <span class="math inline">\(\lambda^2 C\)</span>, where <span class="math inline">\(\lambda\)</span> is the kernel size. The final instance segmentation masks are produced by Matrix NMS [37], and each mask is supervised independently by the dice loss.</p><h2 id="实验">Experiments</h2><img src="/2022/05/19/SOTR/ICCV21_SOTR_Table2.png" title="Table 2. Comparison of different transformers."><p>Transformers for feature encoding. We measure the performance of our model with three different transformers. The results of these variants are shown in Table 2. Our proposed pure and hybrid twin transformers substantially surpass the original transformer on all metrics, which means the twin transformer architecture not only successfully captures long-range dependencies along the vertical and horizontal dimensions but is also more suitable for combining with a CNN backbone to learn image features and representations. Between the pure and hybrid twin transformers, the latter works better. We conjecture the reason is that the <span class="math inline">\(3 \ast 3\)</span> convolutions can extract local information and improve the feature expressiveness, strengthening the twin transformer.</p><img src="/2022/05/19/SOTR/ICCV21_SOTR_Figure4.png" title="Figure 4. Performance of SOTR."><p>We show visualizations of the mask features. In each row, the left is the original image and the right is its corresponding position-sensitive masks.</p><img src="/2022/05/19/SOTR/ICCV21_SOTR_Figure5.png" title="Figure 5. Detailed comparison of instance segmentation results with other methods."><p>We compare our segmentation results with Mask R-CNN, BlendMask, and SOLOv2. The code and trained models are provided by the original authors. All models use ResNet-101-FPN as the backbone and are based on PyTorch and Detectron2. Our masks are of higher quality.</p><img src="/2022/05/19/SOTR/ICCV21_SOTR_Table5.png" title="Table 5. Dynamic vs. static convolution kernels. Learnable kernels significantly improve the results."><p>Dynamic convolution. For mask generation, we have two choices: directly output instance masks in a static-convolution manner, or segment objects with a dynamic convolution operation. The former needs no extra functional head to predict convolution kernels, while the latter includes the kernel head to generate the final masks with the help of the fused features. We compare the two modes in Table 5. As shown, SOTR without the twin transformer achieves 39.7% AP, indicating that the twin transformer brings a 0.5% gain. Moreover, the dynamic convolution strategy improves performance by nearly 1.5% AP. The reasons are: on the one hand, dynamic convolution significantly improves the representation capability thanks to its non-linearity; on the other hand, dynamic convolution contributes to better and faster convergence than static convolution.</p><img src="/2022/05/19/SOTR/ICCV21_SOTR_Table7.png" title="Table 7. Comparison of experimental results.">]]></content>
<summary type="html">
Segmenting Objects with TRansformers (SOTR) simplifies the segmentation pipeline into two parallel subtasks: (1) predicting per-instance categories with a transformer, and (2) dynamically generating segmentation masks with a multi-level upsampling module. SOTR effectively extracts lower-level feature representations and captures long-range context dependencies through the feature pyramid network (FPN) and the twin transformer, respectively. Compared with the original transformer, the proposed twin transformer is efficient in both time and memory, since only a row attention and a column attention are involved to encode pixels.
</summary>
<category term="Transformer" scheme="https://www.starlg.cn/tags/Transformer/"/>
<category term="Instance Segmentation" scheme="https://www.starlg.cn/tags/Instance-Segmentation/"/>
</entry>
<entry>
<title>Efficient DETR</title>
<link href="https://www.starlg.cn/2021/12/27/Efficient-DETR/"/>
<id>https://www.starlg.cn/2021/12/27/Efficient-DETR/</id>
<published>2021-12-27T04:03:47.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p>Paper: https://arxiv.org/abs/2104.01318</p><p>Code: not yet released</p><h2 id="摘要">Abstract</h2><p>The recently proposed end-to-end transformer detectors, such as DETR and Deformable DETR, have a cascade structure of six stacked decoder layers to iteratively update object queries, without which their performance degrades severely. In this paper, the authors investigate the random initialization of object containers, which include object queries and reference points, and find it mainly responsible for the requirement of multiple iterations. Based on this finding, they propose Efficient DETR, a simple and efficient pipeline for end-to-end object detection. By taking advantage of both dense detection and sparse set detection, Efficient DETR leverages dense priors to initialize the object containers and bridges the performance gap between the 1-decoder and 6-decoder structures. Experiments on MS COCO show that the method, with only 3 encoder layers and 1 decoder layer, achieves performance competitive with state-of-the-art object detectors. Efficient DETR is also robust in crowded scenes, outperforming current object detectors on the CrowdHuman dataset by a large margin.</p><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Figure_1.png" title="Figure 1. Comparison of previous end-to-end detectors and Efficient DETR"><h2 id="介绍">Introduction</h2><p>Recently, DETR proposed an end-to-end framework built on an encoder-decoder transformer architecture and bipartite matching, which directly predicts a set of bounding boxes without post-processing (NMS). However, DETR needs 10 to 20 times more training epochs than mainstream detectors to converge and shows relatively low performance on detecting small objects.</p><p>The detection pipeline of DETR can be abstracted as Figure 1(a). We first define an <em>object container</em> as a container of structured information, which can hold different kinds of object features. Object queries and reference points both belong to object containers, since object queries and reference points can represent abstract features and positional information of objects. A set of randomly initialized object containers is fed into a feature refiner to interact with features extracted from the image. Concretely, the 6 decoder layers with cross-attention modules play the role of a cascade feature refiner that iteratively updates the object containers. The refined object containers contribute to DETR's final predictions. In addition, the image features are extracted by a feature extractor, which in DETR consists of a CNN backbone and 6 encoder layers. In summary, the image and the randomly initialized object containers pass through the feature extractor and the cascade feature refiner to produce the final results. In this pipeline, both DETR and Deformable DETR have a 6-encoder and 6-decoder transformer architecture. We hypothesize that this structure is the key to the high detection accuracy of the DETR family.</p><p>In this paper, the authors study the individual components of DETR to understand its mechanism. Through extensive experiments, they find that the decoder layers with <strong>extra auxiliary losses</strong> contribute the most to performance. The transformer decoders iteratively let the object containers interact with the feature maps. The authors find that the random initialization of object containers in DETR, and the resulting requirement of multiple refinements, leads to the slow convergence.</p><p>However, it is difficult to analyze object queries directly, because they are merely a set of abstract features. Deformable DETR proposed reference points for object queries. Reference points are 2-d tensors representing the guessed box centers. By visualizing the reference points of trained models, the authors find that they act just like the anchor points in anchor-based methods. Moreover, the authors report that different initializations of reference points lead to a huge performance gap for the 1-decoder structure. This raises the question: <strong>which initialization is better for the object containers in end-to-end detectors?</strong></p><h3 id="探索-detr">Exploring DETR</h3><h4 id="回归-detr">Revisiting DETR</h4><p><strong>Encoder and decoder.</strong> The DETR family is built on an encoder-decoder transformer architecture. Both the encoder and the decoder are cascades of 6 identical layers. An encoder layer consists of a multi-head self-attention and a feed-forward network (FFN), while a decoder layer has an extra multi-head cross-attention layer. The encoder layers play a role similar to convolutions and extract contextual features from the CNN backbone with multi-head self-attention. In the decoders, a set of 256-d object queries interacts with the encoder features of the whole image and aggregates information via multi-head cross-attention. An auxiliary bipartite matching loss is applied to every decoder layer. Table 1 shows that DETR is much more sensitive to the number of decoder layers, which means the decoder matters more than the encoder for DETR. In particular, a DETR with 3 encoders and 3 decoders is taken as our baseline. Removing 2 layers from the decoder reduces AP by about 9.3, whereas removing 2 layers from the encoder causes only a 1.7 AP drop.</p><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Table_1.png" title="Table 1. Encoder vs. Decoder"><p><strong>Why is the decoder more important than the encoder?</strong> Both are cascade structures, but every identical layer of the decoder carries an extra auxiliary loss. In Table 1, we find that this auxiliary decoding loss is the main reason why DETR is sensitive to the number of decoder layers. Without the auxiliary loss, the encoder and the decoder tend to behave similarly. We point out that the auxiliary decoding loss introduces strong supervision when updating the query features, which makes the decoder more effective. The cascade structure of the decoder refines features with layer-by-layer auxiliary losses; the more iterations, the more effective the auxiliary decoding supervision.</p><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Table_2.png" title="Table 2. Effect of the number of decoder layers in DETR"><p>To further investigate the cascade structure of the decoder, the authors try different numbers of decoder layers. Table 2 shows that performance drops significantly as the number of cascades decreases: there is a huge gap of 10.3 AP between the 6-layer decoder and the 1-layer decoder. Note that only the object queries are updated after each iteration in the decoder. Object queries are closely related to performance, since the final predictions are made from the object queries by the detection head. However, object queries are randomly initialized at the beginning of training. We hypothesize that this random initialization does not provide a good initial state, which may be the reason why DETR needs a cascade structure of 6 iterations to achieve competitive performance.</p><h4 id="object-containers-初始化的影响">Effect of the Initialization of Object Containers</h4><p>Based on the analysis above, the initialization of object queries is worth studying. An object query belongs to the feature information of an object container. It is defined as a learnable positional embedding, a 256-d abstract tensor, and is therefore hard to analyze. However, we observe that each object query in DETR learns to focus on a specific region and box size with several operating modes. We hypothesize that studying the spatial projection of an object query may help with an intuitive understanding.</p><p>Deformable DETR introduces a new component, the reference point, associated with object queries. Reference points are 2-d tensors representing box-center predictions and belong to the positional information of object containers. Moreover, reference points are predicted from the 256-d object queries by a linear projection. They can serve as the projection of object queries into 2-D space and give an intuitive representation of the positional information inside object queries. Both the reference points and the object queries are updated during the decoder iterations and contribute to the final results.</p><p>Considering that reference points intuitively represent the positional information in object queries, we begin by studying them. Before being passed to the decoder layers, the reference points are generated by a linear projection of the randomly initialized object queries, as shown in Figure 3(a). We call this process the initialization of reference points. Figure 2 shows the reference points after model convergence. In the initial stage, the reference points are uniformly distributed over the image, covering the whole image region. This initialization resembles the generation of anchor points in anchor-based detectors. As the iterative stages proceed, the reference points gradually gather at the centers of foreground objects, and in the final stage almost all of the foreground is covered. Intuitively, reference points act as anchor points that locate the foreground and make the attention modules focus on a small set of key sampling points around it.</p><p>Having studied the update of reference points, we move on to exploring their initialization, i.e., how the reference points are generated. For the remainder of the paper, we refer to the initialization of reference points and object queries as the initialization of object containers.</p><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Figure_3.png" title="Figure 3. Three different initialization methods"><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Table_3.png" title="Table 3. Effect of different initializations of reference points"><p><strong>Different initializations of reference points.</strong> In anchor-based detectors, anchor generation has a large impact on model performance. Anchors are generated at every sliding-window position and provide a good initialization for locations where objects may appear. Since reference points act like anchor points, their initialization may likewise affect the performance of Deformable DETR. The authors try different initializations on both the cascade (6-decoder) and non-cascade (1-decoder) structures and compare their performance. As shown in Table 3, different initializations indeed perform differently on the non-cascade structure; in contrast, they achieve similar performance on the cascade structure. Consistent with the conjecture, the grid initialization, which generates reference points at the centers of sliding windows, yields results similar to the learnable initialization. However, the other two initializations, center and border, lead to a huge drop in accuracy without iterations. For better analysis, we visualize the reference points of different initializations at several stages, as shown in Figure 4. As the iterations increase, their reference points tend toward the same distribution and locate the foreground with similar patterns in the final stage. In short, different initializations of reference points lead to a huge performance gap for the model in the non-cascade structure, while the cascade structure bridges the gap through multiple iterations. From another perspective, a better initialization of reference points may improve the performance of the non-cascade structure.</p><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Figure_4.png" title="Figure 4. Different initializations of reference points"><p><strong>Can we bridge the gap between the 1-decoder and 6-decoder structures with a better initialization?</strong></p><p>Based on the findings above, a better initialization of reference points can improve performance, especially for the 1-decoder structure. Considering that reference points resemble anchor points, we hypothesize that the anchor priors used in mainstream detectors can help here. In current two-stage detectors, region proposals are generated by an RPN in a sliding-window manner, providing a set of class-agnostic candidate regions for the foreground.</p><p>The RPN uses dense priors to generate coarse bounding boxes for the foreground. As shown in Figure 3(b), we attach an RPN layer to the dense features from the encoder. The RPN head shares the encoder features and predicts an objectness score and offsets for each anchor. The boxes with the highest scores are selected as region proposals. We then use the centers of these region proposals as the initialization of reference points in the non-cascade structure. Table 3 shows the large performance gain brought by this approach. Figure 5 visualizes the method: the reference points at the initial stage already follow a distribution similar to that of the other methods at their final stage. Region proposals initialize the reference points with a more reasonable distribution, improving the accuracy of the non-cascade Deformable DETR.</p><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Table_4.png" title="Table 4. Initializing reference points and object queries with dense priors."><p>As shown in Table 4, using dense priors as the initialization of reference points gives them a better initial state and brings a significant improvement to the 1-decoder structure. However, a reference point is only the spatial projection of an object query, which contains additional abstract information. So how can we use dense priors to initialize the 256-d object features as well?</p><p>Intuitively, for each reference point in the proposal-based initialization, we select its corresponding feature from the feature map, i.e., a 256-d tensor from the encoder, as the initialization of its object query. Our approach is shown in Figure 3(c). In Table 4, this method further improves the 1-decoder structure by 3 AP. Moreover, initializing only the object queries with dense priors and using the original decoder without reference points also improves the baseline significantly.</p><p>These results show that the initial state of the object containers, including the reference points and object queries in Deformable DETR, is highly correlated with the performance of non-cascade structures. The proposal information from the RPN provides a better initialization, making it possible to improve performance with dense priors. Based on this study, we propose Efficient DETR, which is able to close the performance gap between the 1-decoder and 6-decoder structures.</p><h2 id="efficient-detr">Efficient DETR</h2><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Figure_5.png" title="Figure 5. Efficient DETR"><p>Efficient DETR consists of 3 encoder layers and only 1 decoder layer, with no cascade structure in the decoder. The framework is shown in Figure 5. Efficient DETR comprises two parts: dense and sparse. The dense part makes predictions on the dense features from the encoder and selects the top-k proposals from the dense predictions. These 4-d proposals and their corresponding 256-d features serve as the initialization of the reference points and object queries. In the sparse part, the object containers (comprising reference points and object queries), initialized with dense priors, are fed into the 1-layer decoder to interact with the encoder features and update their features. The final predictions come from the updated object containers.</p><h2 id="实验部分">Experiments</h2><img src="/2021/12/27/Efficient-DETR/Efficient-DETR_Table_5.png" title="Table 5. Comparison with other methods on the COCO 2017 val set">]]></content>
<summary type="html">
Efficient DETR is a simple and efficient pipeline for end-to-end object detection. By taking advantage of both dense detection and sparse set detection, it leverages dense priors to initialize the object containers and bridges the performance gap between the 1-decoder and 6-decoder structures.
</summary>
<category term="Object Detection" scheme="https://www.starlg.cn/categories/Object-Detection/"/>
<category term="Object Detection" scheme="https://www.starlg.cn/tags/Object-Detection/"/>
<category term="Transformer" scheme="https://www.starlg.cn/tags/Transformer/"/>
</entry>
<entry>
<title>Deformable DETR</title>
<link href="https://www.starlg.cn/2021/11/24/Deformable-DETR/"/>
<id>https://www.starlg.cn/2021/11/24/Deformable-DETR/</id>
<published>2021-11-24T04:33:31.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p>Paper:https://arxiv.org/abs/2010.04159</p><p>Code:https://github.com/fundamentalvision/Deformable-DETR</p><h2 id="介绍">介绍</h2><p>最近提出的 DETR 消除了目标检测中很多手工设计的组件,然而降低了精度。除此之外,由于受到 Tranformer attention 模块在处理图片特征的限制,导致它收敛很慢,并且限制了特征空间分辨率。</p><p>为了解决上述问题,论文提出了 Deformable DETR,它的 <strong>attention 模块</strong>仅仅关注 <strong>参考点 附近的一组关键采样点</strong>。Deformable DETR 可以获得比 DETR 更好的性能(尤其是在小物体上),并且训练次数减少了 10 倍。</p><img src="/2021/11/24/Deformable-DETR/20211124_Deformable_DETR_Figure1.png" title="Deformable DETR object detector"><p>DETR存在的两个问题:(1)比起现存的目标检测器,它的收敛要求太长时间的训练周期。例如,在COCO数据集上,DETR 需要 500 epochs 才能收敛,而这大约比 Faster RCNN 慢了 10~20 倍。(2)DETR 在检测小目标上存在较低的性能。当前的检测器通常利用多尺度特征,这这些特征上小目标可以从高分辨率特征上被检测。然而,高分辨率的特征图给 DETR 带来的严重的计算代价。 上述问题主要归因于 Transformer 组件在处理图像特征图方面的不足。在初始化的时候,attention modules 将几乎统一的注意力权重投射到特征图中的所有像素。让学习注意力权重专注于稀疏有意义的位置,长时间的训练周期是必要的。 另一方面,Transformer 编码器中的注意力权重计算是像素数的平方计算量。 因此,处理高分辨率特征图具有非常高的计算和内存复杂性。</p><p>在图片领域,deformable 卷积是一种强有力且高效的关注稀疏空间位置的机制。它可以天然的避免上述提到的问题。然而它缺乏元素关系建模机制,这是DETR成功的关键。</p><p>在这篇论文中,作者提出 Deformable DETR,它缓解了 DETR 收敛慢和高计算复杂性的问题。它组合了 deformable 卷积的稀疏空间采样特性和 Transformer 的相关性的建模能力。论文提出的 deformable attention 模块将一小组采样位置作为所有特征图像素中重要的关键元素的预过滤器。</p><p>由于其快速收敛以及计算和内存效率,Deformable DETR 为我们开辟了利用端到端对象检测器变体的可能性。 作者探索了一种简单有效的迭代边界框细化(iterative bounding box refinement)机制来提高检测性能。 论文还尝试了一个 two-stage Deformable DETR,其中 region proposal 也是由 Deformable DETR 的变体生成的,它们被进一步输入 decoder 以进行 iterative bounding box refinement。</p><h2 id="回顾-transformer-和-detr">回顾 Transformer 和 DETR</h2><h3 id="multi-head-attention-in-transformers">Multi-Head Attention in Transformers</h3><p>Transformers 是针对机器翻译任务设计的一种基于注意力机制的网络结构。给一个 query 元素(例如,在一个输出句子中的一个目标单词)和一组 key 元素(例如,在输入句子中的原单词),multi-head attention 模块根据注意力权重自适应地汇聚关键信息,这个注意力权重可以测量 query-key 对 质检的一致性。为了允许让模型从不同表示子空间和不同位置中关注信息,不同 attention heads 的输出是使用学到的权重线性聚合的结果。Multi-head attention 特征可以计算为:</p><p><span class="math display">\[\operatorname{MultiHeadAttn}\left(\boldsymbol{z}_{q}, 
\boldsymbol{x}\right)=\sum_{m=1}^{M} \boldsymbol{W}_{m}\left[\sum_{k \in \Omega_{k}} A_{m q k} \cdot \boldsymbol{W}_{m}^{\prime} \boldsymbol{x}_{k}\right]\]</span></p><p><span class="math inline">\(q \in \Omega_{q}\)</span> 表示 一个 query 元素的索引,其特征表示为 <span class="math inline">\(z_q \in \mathbb{R}^{C}\)</span></p><p><span class="math inline">\(k \in \Omega_{k}\)</span> 表示一个 key 元素的索引,其特征表示为 <span class="math inline">\(x_k \in \mathbb{R}^C\)</span></p><p><span class="math inline">\(C\)</span> 特征的维度</p><p><span class="math inline">\(M\)</span> attention head 的数量,<span class="math inline">\(m\)</span> 是 attention head 的索引</p><p><span class="math inline">\(\mathbf{W}_{m}^{\prime} \in \mathbb{R}^{C_{v}\times C}\)</span> 和 <span class="math inline">\(\mathbf{W}_{m} \in \mathbb{R}^{C_{v}\times C}\)</span> 是可学习的权重,并且 <span class="math inline">\(C_{v} = C/M\)</span></p><p><span class="math inline">\(A_{m q k} \propto \exp \left\{\frac{\mathbf{z}_{q}^{T} \mathbf{U}_{m}^{T} \mathbf{V}_{m} \mathbf{x}_{k}}{\sqrt{C_{v}}}\right\}\)</span> 是 attention 权重,它被归一化,并且 <span class="math inline">\(\sum_{k\in \Omega_{k}} A_{mqk}=1\)</span> 其中 <span class="math inline">\(\mathbf{U}_{m}\)</span> 和 <span class="math inline">\(\mathbf{V}_{m}\)</span> 也是可学习的权重。</p><p>为了消除不同空间位置的歧义,表示特征 <span class="math inline">\(x_q\)</span> 和 <span class="math inline">\(x_k\)</span> 通常是和 positional embedding 的串联/求和。</p><h3 id="detr">DETR</h3><p>对于 DETR 中的 Transformer encoder,query 和 key 元素都是特征图中的像素。输入是 ResNet 特征图(带有编码的 positional embeddings)。让 <span class="math inline">\(H\)</span> 和 <span class="math inline">\(W\)</span> 分别表示特征图的高度和宽度。 self-attention 的计算复杂度为 <span class="math inline">\(O(H^2 W^2 C)\)</span> ,随空间大小呈二次方增长。</p><p>对于 DETR 中的 Transformer dncoder,输入包括来自 encoder 的特征图和 由可学习位置嵌入(例如,N = 100)表示的 N object queries。decoder 中有两种注意力模块,即 cross-attention 和 self-attention 模块。在 cross-attention 模块中,object query 从特征图中提取特征。query 元素属于object queries,key 元素属于encoder 的输出特征图。其中,<span class="math inline">\(N_q = 
N\)</span>,<span class="math inline">\(N_k = H \times W\)</span>,交叉注意力的复杂度为 <span class="math inline">\(O(HWC^2 + NHWC)\)</span>。复杂性随着特征图的空间大小线性增长。在 self-attention 模块中,object queries 相互交互,以捕获它们的关系。 query 和 key 元素都是 object queries。 其中,<span class="math inline">\(N_q = N_k = N\)</span>,self-attention 模块的复杂度为 <span class="math inline">\(O(2NC^2 +N^2 C)\)</span>。 中等数量的对象查询的复杂性是可以接受的。</p><p>这主要是因为处理图像特征的注意力模块很难训练。 例如,在初始化时,cross-attention 模块几乎对整个特征图具有平均注意力。而在训练结束时,attention maps 被学习到非常稀疏,只关注对象的外轮廓(extremities)。 似乎 DETR 需要很长的训练才能学习注意力图的如此显着的变化。</p><h2 id="method">Method</h2><h3 id="ddformable-transformer-for-end-to-end-object-detection">Ddformable Transformer for End-To-End Object Detection</h3><img src="/2021/11/24/Deformable-DETR/20211124_Deformable_DETR_Figure2.png" title="deformable attention module"><h4 id="deformable-attention-module">Deformable Attention Module</h4><p><span class="math display">\[\operatorname{DeformAttn}\left(\boldsymbol{z}_{q}, \boldsymbol{p}_{q}, \boldsymbol{x}\right)=\sum_{m=1}^{M} \boldsymbol{W}_{m}\left[\sum_{k=1}^{K} A_{m q k} \cdot \boldsymbol{W}_{m}^{\prime} \boldsymbol{x}\left(\boldsymbol{p}_{q}+\Delta \boldsymbol{p}_{m q k}\right)\right]\]</span></p><p>这里 <span class="math inline">\(m\)</span> attention head 的索引,<span class="math inline">\(k\)</span> 采样 keys 的索引,<span class="math inline">\(K\)</span> 是总采样的 key 的数量 <span class="math inline">\((K \ll HW)\)</span>。<span class="math inline">\(\Delta p_{mqk}\)</span> 和 <span class="math inline">\(A_{mqk}\)</span> 是采样的偏置和在 <span class="math inline">\(m^{th}\)</span> attention head 上的 <span class="math inline">\(k^{th}\)</span> 采样点的 attention weight。</p><h4 id="multi-scale-deformable-attention-module">Multi-scale Deformable Attention Module</h4><p><span class="math display">\[\operatorname{MSDeformAttn}\left(\boldsymbol{z}_{q}, \hat{\boldsymbol{p}}_{q},\left\{\boldsymbol{x}^{l}\right\}_{l=1}^{L}\right)=\sum_{m=1}^{M} \boldsymbol{W}_{m}\left[\sum_{l=1}^{L} \sum_{k=1}^{K} A_{m l q k} \cdot 
\boldsymbol{W}_{m}^{\prime} \boldsymbol{x}^{l}\left(\phi_{l}\left(\hat{\boldsymbol{p}}_{q}\right)+\Delta \boldsymbol{p}_{m l q k}\right)\right]\]</span></p><h4 id="deformable-transformer-encoder">Deformable Transformer Encoder</h4><p>由于提出的 multi-scale deformable attention 可以再不同多尺度特征层上交换信息,所以没有使用 FPN 结构。</p><p>在 encoder 中 multi-scale deformable attention 模块的应用中,输出是与输入具有相同分辨率的多尺度特征图。key 和 query 元素都是来自多尺度特征图的像素。对于每一个 query 像素,这个参考点(reference point)就是它自己。为了识别每个 query 像素位于哪个特征级别,除了 positional embedding 之外,我们还向特征表示中添加了 a scale-level embedding,表示为 <span class="math inline">\(e_l\)</span>。与固定编码的 positional embedding 不同,scale-level embedding <span class="math inline">\(\{e_l\}^L_l=1\)</span> 是随机初始化并与网络联合训练。</p><h4 id="deformable-transformer-decoder">Deformable Transformer Decoder</h4><p>decoder 中有 cross-attention 和 self-attention 模块。这两种注意力模块的 query 元素都是 object queries。在 cross-attention 模块中,object queries 从特征图中提取特征,其中 key 元素是来自 encoder 的输出特征图。在 self-attention 模块中,object queries 相互交互,其中 key 元素是 object queries。由于我们提出的 deformable attantion 模块是为处理卷积特征图作为 key 元素而设计的,因此我们仅将每个 cross-attention 模块替换为 multi-scale deformable attention 模块,而保持 self-attention 模块不变。对于每个 object query,参考点 <span class="math inline">\(\hat p_q\)</span> 的二维归一化坐标是从其 object query embedding 中通过可学习的线性投影和 <span class="math inline">\(\mathrm{sigmoid}\)</span> 函数预测的。</p><p>因为 multi-scale deformable attention 模块提取参考点(reference point)周围的图像特征,我们让检测头将边界框预测为相对偏移,也就是参考点进一步降低优化难度。 参考点用作框中心的初始猜测。检测头预测相对偏移,也就是参考点。这样,学习到的 decoder attention 将与预测的边界框有很强的相关性,这也加速了训练收敛。</p><p>通过在 DETR 中用 deformable attention 模块替换 Transformer attention 模块,我们建立了一个高效且快速收敛的检测系统,称为 Deformable DETR。</p><h3 id="其他改进-和-变体">其他改进 和 变体</h3><p>Iterative Bounding Box Refinemen</p><p>Two-Stage Deformable DETR</p><h2 id="实验结果">实验结果</h2><img src="/2021/11/24/Deformable-DETR/20211124_Deformable_DETR_Figure3.png" title="Convergence curves of Deformable DETR and DETR-DC5."><p>由上图可以看出,Deformable DETR 明显提升了训练速度。</p><img 
src="/2021/11/24/Deformable-DETR/20211124_Deformable_DETR_Table1.png"><img src="/2021/11/24/Deformable-DETR/20211124_Deformable_DETR_Table3.png"><h2 id="论文中的符号说明">论文中的符号说明</h2><img src="/2021/11/24/Deformable-DETR/20211124_Deformable_DETR_Table4.png" title="Table 4">]]></content>
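下面用一个最小的 NumPy 示意来说明上文单头、单尺度 deformable attention 的采样过程:每个 query 只在参考点附近的 K 个位置做双线性采样,attention weight 仅在这 K 个点上归一化。这只是帮助理解的草图,省略了论文中的多头、多尺度与 \(W_m\)、\(W'_m\) 投影;此处的偏移量和权重用随机数代替(论文中它们由 query 经线性层预测得到)。

```python
import numpy as np

def bilinear_sample(x, p):
    """在特征图 x (H, W, C) 的分数坐标 p=(py, px) 处做双线性插值采样。"""
    H, W, C = x.shape
    y0 = int(np.floor(p[0])); x0 = int(np.floor(p[1]))
    wy = p[0] - y0; wx = p[1] - x0
    y0c, y1c = np.clip([y0, y0 + 1], 0, H - 1)
    x0c, x1c = np.clip([x0, x0 + 1], 0, W - 1)
    return ((1 - wy) * (1 - wx) * x[y0c, x0c] + (1 - wy) * wx * x[y0c, x1c]
            + wy * (1 - wx) * x[y1c, x0c] + wy * wx * x[y1c, x1c])

rng = np.random.default_rng(0)
H, W, C, K = 32, 32, 8, 4                     # K << H*W:每个 query 只采样 4 个 key
x = rng.standard_normal((H, W, C))            # encoder 某一层的特征图(示意数据)
p_q = np.array([15.3, 7.8])                   # 某个 query 的参考点 (y, x)
offsets = rng.uniform(-2.0, 2.0, size=(K, 2)) # 采样偏移量 Δp_qk(论文中由 query 预测)
logits = rng.standard_normal(K)
A = np.exp(logits) / np.exp(logits).sum()     # attention weight,仅在 K 个采样点上 softmax

sampled = np.stack([bilinear_sample(x, p_q + dp) for dp in offsets])  # (K, C)
out = (A[:, None] * sampled).sum(axis=0)      # 聚合后的输出,形状 (C,)
```

可以看到,每个 query 的计算量只与 K 成正比,而与 HW 无关,这正是复杂度从 \(O(N_q N_k)\) 降下来的原因。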
<summary type="html">
Deformable DETR 缓解了 DETR 收敛慢和计算复杂度高的问题。它组合了 deformable 卷积的稀疏空间采样特性和 Transformer 的关系建模能力。论文提出的 deformable attention 模块只关注一小组采样位置,作为从所有特征图像素中筛选出重要关键元素的预过滤器。
</summary>
<category term="Object Detection" scheme="https://www.starlg.cn/categories/Object-Detection/"/>
<category term="Object Detection" scheme="https://www.starlg.cn/tags/Object-Detection/"/>
<category term="Transformer" scheme="https://www.starlg.cn/tags/Transformer/"/>
</entry>
<entry>
<title>复式记账 Beancount 使用</title>
<link href="https://www.starlg.cn/2019/07/13/Beancount-01/"/>
<id>https://www.starlg.cn/2019/07/13/Beancount-01/</id>
<published>2019-07-13T04:00:07.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h1 id="beancount">Beancount</h1><h2 id="beancount-安装">Beancount 安装</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"># 首先安装 beancount</span><br><span class="line">pip install beancount</span><br><span class="line"># 然后安装 fava</span><br><span class="line">pip install fava</span><br></pre></td></tr></table></figure><p>Fava 是复式簿记软件 Beancount 的 Web 界面,侧重于功能和可用性,使用非常友好。</p><p>我们可以先使用 <code>bean-example</code> 生成一个 <code>Beancount</code> 文件,文件的后缀名可以自己定义,一般用<code>.bean</code>或<code>.beancount</code>:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br></pre></td><td class="code"><pre><span class="line">(base) XX@XX:~$ mkdir MyBean</span><br><span class="line">(base) XX@XX:~$ cd MyBean/</span><br><span class="line">(base) XX@XX:~/MyBean$ ls</span><br><span class="line">(base) XX@XX:~/MyBean$ bean-example > example.bean</span><br><span class="line">INFO : Generating Salary Employment Income</span><br><span class="line">INFO : Generating Expenses from Banking 
Accounts</span><br><span class="line">INFO : Generating Regular Expenses via Credit Card</span><br><span class="line">INFO : Generating Credit Card Expenses for Trips</span><br><span class="line">INFO : Generating Credit Card Payment Entries</span><br><span class="line">INFO : Generating Tax Filings and Payments</span><br><span class="line">INFO : Generating Opening of Banking Accounts</span><br><span class="line">INFO : Generating Transfers to Investment Account</span><br><span class="line">INFO : Generating Prices</span><br><span class="line">INFO : Generating Employer Match Contribution</span><br><span class="line">INFO : Generating Retirement Investments</span><br><span class="line">INFO : Generating Taxes Investments</span><br><span class="line">INFO : Generating Expense Accounts</span><br><span class="line">INFO : Generating Equity Accounts</span><br><span class="line">INFO : Generating Balance Checks</span><br><span class="line">INFO : Outputting and Formatting Entries</span><br><span class="line">INFO : Contextualizing to Realistic Names</span><br><span class="line">INFO : Writing contents</span><br><span class="line">INFO : Validating Results</span><br><span class="line">(base) XX@XX:~/MyBean$ ls</span><br><span class="line">example.bean</span><br></pre></td></tr></table></figure><p>运行 Beancount: <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">(base) XX@XX:~/MyBean$ fava example.bean</span><br><span class="line">Running Fava on http://localhost:5000</span><br></pre></td></tr></table></figure></p><p>在浏览器上打开 http://localhost:5000 ,就可以看到运行界面,如下:</p><img src="/2019/07/13/Beancount-01/beancount-fava-interface.png" title="Beancount 运行界面"><h2 id="example.bean-文件分析">example.bean 文件分析</h2><p>复式记账的最基本的特点就是以账户为核心,Beancount的系统整体上就是围绕账户来实现的。之前提到的会计恒等式中有资产、负债和权益三大部分,现在我们再增加两个类别,分别是收入和支出。Beancount系统中预定义了五个分类:</p><ul><li>Assets 
资产</li><li>Liabilities 负债</li><li>Equity 权益(净资产)</li><li>Expenses 支出</li><li>Income 收入</li></ul><h3 id="表头信息">表头信息</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">;; -*- mode: org; mode: beancount; -*-</span><br><span class="line">;; Birth: 1980-05-12</span><br><span class="line">;; Dates: 2017-01-01 - 2019-07-12</span><br><span class="line">;; THIS FILE HAS BEEN AUTO-GENERATED.</span><br><span class="line">* Options</span><br><span class="line"></span><br><span class="line">option "title" "Example Beancount file"</span><br><span class="line">option "operating_currency" "USD"</span><br></pre></td></tr></table></figure><p>Beancount 文件中注释使用<code>;</code>作为标记。</p><p>这里定义了项目的名称:<code>Example Beancount file</code>,和使用的货币种类:美元 <code>USD</code>。我们如果想使用人民币,可以同时添加 <code>CNY</code>,例如:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">option "operating_currency" "CNY"</span><br></pre></td></tr></table></figure><h3 id="assets-资产">Assets 资产</h3><p>顾名思义,<strong>Assets</strong> 就相当于我们存放 <strong>资产的账户</strong>,如果启用一个账户就使用 <code>open</code> 命令。</p><p>第一列是账户启用时间,第二列是命令,第三列是资产(Assets)名,最后一列是使用的货币种类。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td 
class="code"><pre><span class="line">* Assets</span><br><span class="line"></span><br><span class="line">1990-09-04 open Assets:Cash:CNY CNY ; 人民币现金账户</span><br><span class="line">1990-09-04 open Assets:Cash:USD USD ; 美元现金账户</span><br><span class="line"></span><br><span class="line">1990-09-04 open Assets:Bank:China:CCB:CardXXX1 CNY ; 银行账户</span><br><span class="line">1990-09-04 open Assets:Bank:China:CCB:CardXXX8 CNY ; 银行账户</span><br><span class="line"></span><br><span class="line">1990-09-04 open Assets:Account:China:Alipay CNY ; 支付宝账户</span><br><span class="line">1990-09-04 open Assets:Account:China:WeChat CNY ; 微信账户</span><br><span class="line"></span><br><span class="line">1990-09-04 open Assets:Stock:China:GTJA2818 CNY ; 股票账户</span><br></pre></td></tr></table></figure><p>我的命名规则是:资产:账户类型:国别:(银行缩写:银行卡号)/(账户名)</p><h3 id="income-收入">Income 收入</h3><p>这里定义我们的 <strong>收入来源</strong>,同样如果启用一个收入来源就使用 <code>open</code> 命令。</p><p>第一列是启用时间,第二列是命令,第三列是收入来源,最后一列是使用的货币种类。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">* Income</span><br><span class="line"></span><br><span class="line">1990-09-04 open Income:China:XXXCompany:Salary CNY</span><br><span class="line">1990-09-04 open Income:China:PartTimeJob:Salary CNY</span><br><span class="line">1990-09-04 open Income:China:Home:RedPacket CNY</span><br><span class="line">1990-09-04 open Income:China:Fund:Tianhong CNY</span><br></pre></td></tr></table></figure><h3 id="expenses-支出">Expenses 支出</h3><p>这里我们定义 <strong>花费支出</strong>,我根据自己的花销,把花费支出分为 7 大类,分别是:Food,Transport,Life,Fun,Health,Study,Home,其中每个大类又有若干子类。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span 
class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line">* Expenses</span><br><span class="line"></span><br><span class="line">1990-09-04 open Expenses:Food:Groceries ; 杂货店</span><br><span class="line">1990-09-04 open Expenses:Food:Restaurant ; 餐馆</span><br><span class="line">1990-09-04 open Expenses:Food:Canteen ; 食堂</span><br><span class="line">1990-09-04 open Expenses:Food:Cooking ; 烹饪</span><br><span class="line">1990-09-04 open Expenses:Food:Drinks</span><br><span class="line">1990-09-04 open Expenses:Food:Fruits</span><br><span class="line"></span><br><span class="line">1990-09-04 open Expenses:Transport:TransCard</span><br><span class="line">1990-09-04 open Expenses:Transport:Airline</span><br><span class="line">1990-09-04 open Expenses:Transport:Train</span><br><span class="line">1990-09-04 open Expenses:Transport:Taxi</span><br><span class="line"></span><br><span class="line">1990-09-04 open 
Expenses:Life:Clothing</span><br><span class="line">1990-09-04 open Expenses:Life:RedPacket</span><br><span class="line">1990-09-04 open Expenses:Life:Sports</span><br><span class="line">1990-09-04 open Expenses:Life:Shopping</span><br><span class="line">1990-09-04 open Expenses:Life:Commodity ; 商品</span><br><span class="line">1990-09-04 open Expenses:Life:SoftwareAndGame</span><br><span class="line">1990-09-04 open Expenses:Life:Vacation</span><br><span class="line">1990-09-04 open Expenses:Life:Others</span><br><span class="line"></span><br><span class="line">1990-09-04 open Expenses:Fun:Amusement</span><br><span class="line"></span><br><span class="line">1990-09-04 open Expenses:Health:Hospital</span><br><span class="line">1990-09-04 open Expenses:Health:Drug</span><br><span class="line"></span><br><span class="line">1990-09-04 open Expenses:Study:Book</span><br><span class="line">1990-09-04 open Expenses:Study:Tuition</span><br><span class="line">1990-09-04 open Expenses:Study:Others</span><br><span class="line"></span><br><span class="line">1990-09-04 open Expenses:Home:Rent</span><br><span class="line">1990-09-04 open Expenses:Home:Water</span><br><span class="line">1990-09-04 open Expenses:Home:Electricity</span><br><span class="line">1990-09-04 open Expenses:Home:Internet</span><br><span class="line">1990-09-04 open Expenses:Home:Phone</span><br></pre></td></tr></table></figure><p>最后我们记录的花销就会以下图呈现出来:</p><img src="/2019/07/13/Beancount-01/beancount-expenses.png" title="Expenses 截图"><h3 id="liabilities-负债">Liabilities 负债</h3><p>负债这里我开启了一张信用卡。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">* Liabilities</span><br><span class="line"></span><br><span class="line">1990-09-04 open Liabilities:China:CreditCard:CCB:CardXXX8 CNY</span><br></pre></td></tr></table></figure><h3 id="equity-权益净资产">Equity 
权益(净资产)</h3><p>目前我只设置了一个 Equity 账户 Equity:Opening-Balances,用来平衡初始资产、负债账户时的会计恒等式。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">* Equity</span><br><span class="line">1990-09-04 open Equity:Opening-Balances</span><br></pre></td></tr></table></figure><h2 id="什么是复式记账法">什么是复式记账法?</h2><p>复式记账法是以资产与权益平衡关系作为记账基础,对于每一笔经济业务,都要以相等的金额在两个或两个以上相互联系的账户中进行登记,系统地反映资金运动变化结果的一种记账方法。</p><p>复式记账是对每一项经济业务通过两个或两个以上有关账户相互联系起来进行登记的一种专门方法。任何一项经济活动都会引起资金的增减变动或财务收支的变动。</p><p>以上内容来自<a href="https://baike.baidu.com/item/%E5%A4%8D%E5%BC%8F%E8%AE%B0%E8%B4%A6/10359133?fr=aladdin" target="_blank" rel="noopener">百度百科</a>。</p><h2 id="如何记账">如何记账</h2><p>当前账本的交易记录主要分为三种:记录收益,记录支出,结余调整。下面分别展开进行介绍。</p><h3 id="如何记录收益">如何记录收益</h3><p>我们首先记录一下收入情况,我们将公司<code>CompanyA</code>和公司<code>CompanyB</code>的薪水转移到资产<code>Assets:Bank:China:CCB:CardXXX1</code>中,这个资产定义的是我的银行卡。双引号中间的内容是注释性说明。要确保转移数值平衡,即相加为 0 。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">2019-06-21 * "CompanyA" "Salary"</span><br><span class="line"> Assets:Bank:China:CCB:CardXXX1 13000.00 CNY</span><br><span class="line"> Income:China:CompanyA:Salary -13000.00 CNY</span><br><span class="line"></span><br><span class="line">2019-06-18 * "CompanyB" "Salary"</span><br><span class="line"> Assets:Bank:China:CCB:CardXXX1 9000.00 CNY</span><br><span class="line"> Income:China:CompanyB:Salary -9000.00 CNY</span><br></pre></td></tr></table></figure><p>以上内容可以直接写到<code>.bean</code>文件中。</p><h3 id="如何记录消费">如何记录消费</h3><p>记录消费情况和记录收益情况类似,但是要注意资产转移的方向,即数值的正负号。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">2019-04-17 * "储蓄卡" "餐饮(储蓄卡)"</span><br><span class="line"> Assets:Bank:China:CCB:CardXXX1 -35 CNY</span><br><span class="line"> Expenses:Food:Canteen 35 CNY</span><br><span class="line"></span><br><span class="line">2019-04-18 * "储蓄卡" "餐饮(储蓄卡)"</span><br><span class="line"> Assets:Bank:China:CCB:CardXXX1 -5 CNY</span><br><span class="line"> Expenses:Food:Canteen 5 CNY</span><br><span class="line"></span><br><span class="line">2019-04-20 * "储蓄卡" "餐饮 金稻园"</span><br><span class="line"> Assets:Bank:China:CCB:CardXXX1 -283 CNY</span><br><span class="line"> Expenses:Food:Restaurant 283 CNY</span><br><span class="line"></span><br><span class="line">2019-04-20 * "储蓄卡" "水果(储蓄卡)"</span><br><span class="line"> Assets:Bank:China:CCB:CardXXX1 -20.4 CNY</span><br><span class="line"> Expenses:Food:Fruits 20.4 CNY</span><br></pre></td></tr></table></figure><p>以上内容也可以直接写到<code>.bean</code>文件中。</p><h3 id="结余调整">结余调整</h3><p>我们并不能完全记录每一笔收入和支出情况,所以会造成账本资产情况和实际资产情况数值不符。但是对于小数额的差值,我们可以使用结余调整。这样就把差值的资产补回来了。例如:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">2019-01-01 * "结余调整"</span><br><span class="line"> Assets:Bank:China:CCB:CardXXX1 200 CNY</span><br><span class="line"> Equity:Opening-Balances -200 CNY</span><br></pre></td></tr></table></figure><p>上边的意思是,从账户 <code>Equity:Opening-Balances</code> 转给账户 
<code>Assets:Bank:China:CCB:CardXXX1</code>。Beancount的规范是使用 <code>Equity:Opening-Balances</code>。<code>Equity:Opening-Balances</code> 是权益类别下面的账户,可以表示没有记录来源的资产。</p><h2 id="beancount-项目目录结构">Beancount 项目目录结构</h2><p>本人认为按照时间顺序记录账本的方法比较方便,所以我目前使用的目录结构如下: <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">~/Documents/MyBean</span><br><span class="line">├── data</span><br><span class="line">│ ├── 2017.bean</span><br><span class="line">│ ├── 2018.bean</span><br><span class="line">│ └── 2019.bean</span><br><span class="line">├── documents.tmp/</span><br><span class="line">├── Importers</span><br><span class="line">│ ├── __init__.py</span><br><span class="line">│ ├── regexp.py // 原来位于 beancount/experiments/ingest/regexp.py</span><br><span class="line">│ └── alipay.py</span><br><span class="line">├── configs</span><br><span class="line">│ ├── alipay.config</span><br><span class="line">│ └── wechat.config</span><br><span class="line">├── main.bean</span><br><span class="line">└── strip_blank.py</span><br></pre></td></tr></table></figure></p><ul><li>main.bean:主要记录账户信息,包括 Assets,Liabilities,Equity,Expenses,Income 各类账户。此外使用 <code>include</code> 命令包含其他账本文件(<code>.bean</code>);</li><li>data/:按照时间顺序存放收入和交易记录的账本文件(<code>.bean</code>);</li><li>documents.tmp/:用于存放从支付宝和微信下载的交易记录文件(<code>.csv</code>);</li><li>Importers/:用于存放自定义的导入脚本;</li><li>configs/:xxxx.config 文件负责定义如何阅读并提取csv账单文件;</li><li>strip_blank.py:删除 csv 文件中的所有多余空格的脚本;</li></ul><p>当然也有其他的目录结构,如 <a 
href="https://yuchi.me/post/beancount-intro/" target="_blank" rel="noopener">blog</a> 中提到的:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">~/Documents/accounting</span><br><span class="line">├── documents</span><br><span class="line">│ ├── Assets/</span><br><span class="line">│ ├── Expenses/</span><br><span class="line">│ ├── Income/</span><br><span class="line">│ └── Liabilities/</span><br><span class="line">├── documents.tmp/</span><br><span class="line">├── importers</span><br><span class="line">│ ├── __init__.py</span><br><span class="line">├── yc.bean</span><br><span class="line">└── yc.import</span><br></pre></td></tr></table></figure><h2 id="如何在主文件下包含其他-bean-文件">如何在主文件下包含其他 bean 文件</h2><p>在上个章节--Beancount 项目目录结构--中,我们按照时间顺序存放收入和交易记录的账本文件(<code>.bean</code>),例如:2017.bean,2018.bean,2019.bean,那我们如何在主文件中导入这些子文件呢?可以使用 <code>include</code> 命令,如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">* Include</span><br><span class="line"></span><br><span class="line">include "data/2017.bean"</span><br><span class="line">include "data/2018.bean"</span><br><span class="line">include "data/2019.bean"</span><br></pre></td></tr></table></figure><p>如果我们想把工资收入情况做单独的记录,那么可以单独建立一个 <code>Income.bean</code> 文件,然后在使用 <code>include</code> 命令包含进来。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br></pre></td><td class="code"><pre><span class="line">include "Income/Income.bean"</span><br></pre></td></tr></table></figure><h2 id="使用-csv-账单文件生成流水.bean文件">使用 CSV 账单文件生成流水<code>.bean</code>文件</h2><p>我们时间和精力有限,所以并不能手工记录每一次交易情况。为了方便生成交易账单,我们可以下载支付宝、微信、银行等交易记录,并且使用程序将它们转化为账单文件(<code>.bean</code>)。这样节省了很多时间,并且记录准确。</p><h3 id="bean-extract-命令">bean-extract 命令</h3><p><code>bean-extract</code> 命令: 从每个文件中提取交易和日期。这会生成一些 Beancount 输入文本,可将这些文本(<code>.bean</code>)文件合并到您的输入文件中;</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bean-extract blais.config ~/Downloads</span><br></pre></td></tr></table></figure><h3 id="支付宝账单处理过程">支付宝账单处理过程</h3><p>可以参考 <a href="http://lidongchao.com/2018/07/20/has_header_in_csv_Sniffer/" target="_blank" rel="noopener">blog</a>。</p><ol type="1"><li>先把 csv 使用 wps 转换为 xls;</li><li><p>再使用 pandas 将 xls 转换为 utf-8 格式的 csv; <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> pandas <span class="keyword">as</span> pd</span><br><span class="line">data_xls = pd.read_excel(<span class="string">'alipay_record_20190712_2003_1.xls'</span>, index_col=<span class="number">0</span>) </span><br><span class="line">data_xls.to_csv(<span class="string">'alipay_tmp.csv'</span>, encoding=<span class="string">'utf-8'</span>)</span><br></pre></td></tr></table></figure></p></li><li>然后去除首尾的非数据信息;</li><li><p>使用 strip_blank.py 删除文件中的所有多余空格; <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">python strip_blank.py alipay_tmp.csv > alipay.csv</span><br></pre></td></tr></table></figure></p></li><li><p>使用bean-extract提取beancount数据。 <figure class="highlight 
plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bean-extract my_alipay.config alipay.csv > data_alipay.beancount</span><br></pre></td></tr></table></figure></p></li></ol><h2 id="atom-beancount-语法高亮工具">Atom Beancount 语法高亮工具</h2><p>如果你使用 Atom 打开 beancount,可以安装 language-beancount,这个库可以高亮 beancount 的语法。</p><img src="/2019/07/13/Beancount-01/language-beancount.png" title="高亮 beancount 的语法"><h2 id="fava-使用技巧">fava 使用技巧</h2><p>https://beancount.github.io/fava/index.html</p><p>web端使用fava,可以远程访问。</p><p>可以使用如下命令,指定IP和端口号: https://github.com/beancount/fava/blob/master/contrib/deployment.rst <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">fava --host localhost --port 5000 --prefix /fava /path/to/your/main.beancount</span><br></pre></td></tr></table></figure></p><hr><h2 id="beancount-相关资料介绍">Beancount 相关资料介绍</h2><h3 id="官方资料">官方资料:</h3><p><a href="http://furius.ca/beancount/" target="_blank" rel="noopener">Beancount官方网站</a></p><p><a href="http://furius.ca/beancount/doc/index" target="_blank" rel="noopener">Beancount官方文档</a></p><p><a href="https://groups.google.com/forum/#!forum/beancount" target="_blank" rel="noopener">Beancount邮件列表</a></p><p><a href="https://bitbucket.org/blais/beancount/src/default/" target="_blank" rel="noopener">Beancount 官方代码库 bitbucket</a></p><p><a href="https://github.com/beancount/beancount" target="_blank" rel="noopener">Beancount github</a></p><p><a href="https://beancount.github.io/fava/" target="_blank" rel="noopener">Fava</a> 是 Beancount 的 web 界面,非常友好。</p><p><a href="https://github.com/beancount/fava" target="_blank" rel="noopener">Fava github</a></p><h3 id="强烈推荐一下博客">强烈推荐以下博客:</h3><p><a href="https://www.byvoid.com/zhs/blog/beancount-bookkeeping-1" 
target="_blank" rel="noopener">byvoid blog</a> 该博客介绍得非常系统</p><p><a href="http://lidongchao.com/2018/07/20/has_header_in_csv_Sniffer/" target="_blank" rel="noopener">Beancount使用经验</a> 该博客介绍了通过Beancount导入支付宝csv账单的方法</p><p><a href="https://yuchi.me/post/beancount-intro/" target="_blank" rel="noopener">beancount 简易入门指南</a></p><p><a href="https://github.com/lidongchao/BeancountSample" target="_blank" rel="noopener">lidongchao/BeancountSample</a> 这里包含一些代码,可以用于导入 csv 账单到 Beancount 中。</p><h3 id="其他介绍文章">其他介绍文章</h3><p><a href="https://wzyboy.im/post/1063.html" target="_blank" rel="noopener">Beancount —— 命令行复式簿记</a></p><p><a href="http://morefreeze.github.io/2016/10/beancount-thinking.html" target="_blank" rel="noopener">beancount 起步</a></p><p><a href="http://freelancer-x.com/82/%E5%9F%BA%E7%A1%80%E8%AE%A4%E8%AF%86%EF%BD%9C%E5%88%A9%E7%94%A8-beancount-%E6%89%93%E9%80%A0%E4%B8%AA%E4%BA%BA%E7%9A%84%E8%AE%B0%E8%B4%A6%E7%B3%BB%E7%BB%9F%EF%BC%881%EF%BC%89/" target="_blank" rel="noopener">利用 Beancount 打造个人的记账系统</a></p>]]></content>
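正文中多次提到用 strip_blank.py 删除 csv 账单中的多余空格,但没有给出脚本内容。下面是一个按上述用途写的最小示意实现(并非原脚本):假设账单已是简单的逗号分隔文本,且字段内不含逗号或引号转义;逐行去掉每个字段首尾的空白后原样输出。

```python
import sys

def strip_blank(line: str) -> str:
    # 去掉每个逗号分隔字段首尾的空白;最小示意,不处理字段内含逗号的引号转义
    return ",".join(field.strip() for field in line.split(","))

if __name__ == "__main__" and len(sys.argv) > 1:
    # 用法示例:python strip_blank.py alipay_tmp.csv > alipay.csv
    with open(sys.argv[1], encoding="utf-8") as f:
        for raw in f:
            print(strip_blank(raw.rstrip("\n")))
```

如果账单字段里可能出现带逗号的引号字段,应改用标准库 csv 模块逐字段解析后再 strip,而不是直接按逗号切分。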
<summary type="html">
复式记账 Beancount 使用
</summary>
<category term="Tools" scheme="https://www.starlg.cn/categories/Tools/"/>
<category term="Beancount" scheme="https://www.starlg.cn/tags/Beancount/"/>
<category term="Tools" scheme="https://www.starlg.cn/tags/Tools/"/>
</entry>
<entry>
<title>L2 Normalization</title>
<link href="https://www.starlg.cn/2019/07/10/l2normalization/"/>
<id>https://www.starlg.cn/2019/07/10/l2normalization/</id>
<published>2019-07-10T07:38:15.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p>论文:ParseNet: Looking Wider to See Better,<a href="https://arxiv.org/abs/1506.04579" target="_blank" rel="noopener">link</a></p><h2 id="l2-normalization-layers">L2 Normalization layers</h2><p>这篇语义分割的文章提出使用 <span class="math inline">\(L_2\)</span> Normalization layers。所讨论的网络结构如下图所示:</p><img src="/2019/07/10/l2normalization/20190110_L2_Normalization_Figure1.png" title="Figure 1"><p>如图3所示,当我们需要组合两个或者更多的特征向量时,它们通常<strong>有不同的尺度和范数</strong>。简单的级联特征导致较差的性能,因为比较大的特征会主导较小的特征。虽然在训练期间,权重可能会相应调整,但需要非常仔细地调整参数,并且依赖于数据集,因此违背了稳健原则。我们发现,通过首先规范每个单独的特征,并学习以不同尺度进行放缩,这使得训练更加稳定,并且可以提高性能。</p><p><span class="math inline">\(L_2\)</span> 范数层不仅在特征组合的时候使用。如上所述,在某些情况下,后期融合也同样有效,但前提是要有 L2 归一化的帮助。例如,如果我们想使用底层的特征去学习分类器,如图3所示,一些特征可能有很大的范数。在没有合适的权重初始化和参数调整的情况下,这非常困难。一种变通的做法是使用一个附加的卷积层,并且使用多级微调,例如底层使用更小的学习率,但这违反了简单和鲁棒的原则。在这篇论文的工作中,对分类之前的特征的每个通道,作者使用了<span class="math inline">\(L_2\)</span>-norm并且学习了缩放参数,这导致了更加稳定的训练。</p><img src="/2019/07/10/l2normalization/20190110_L2_Normalization_Figure3.png" title="Figure 3: 来自4个不同层的特征的激活,这些激活明显有不同的尺度。每一种颜色对应一个不同层的特征。蓝色和蓝绿色有着相似的尺度,红色和绿色的特征相比小了2个数量级。"><p>对于一个 d 维的输入 <span class="math inline">\(\mathbf{x}=(x_1, ..., x_d)\)</span>,我们使用 <span class="math inline">\(L_2\)</span>-norm 规范它,即 <span class="math inline">\(\hat{x}=\frac{x}{\lVert x \rVert_2}\)</span>,其中 <span class="math inline">\(\lVert x \rVert_2=(\sum_{i=1}^{d} {\lvert x_i \rvert}^2)^{1/2}\)</span> 是 <span class="math inline">\(\mathbf{x}\)</span> 的 <span class="math inline">\(L_2\)</span> 范数。</p><p>请注意,如果我们不相应地缩放它,只简单地规范化层的每个输入会改变层的尺度,将会减慢学习速度。例如,我们尝试将特征归一化,使其 <span class="math inline">\(L_2\)</span> 范数为 1,但这样我们很难训练网络,因为特征变得非常小。但是,如果我们将其归一化为例如 10 或 20,网络就能较好地学习。在 batch normalization 和 PReLU 的推动下,我们为每个通道引入缩放参数 <span class="math inline">\(\gamma_i\)</span>,它缩放了归一化的值 <span class="math inline">\(y_i=\gamma_i \hat{x}_i\)</span>。</p><p>额外参数的数量等于通道总数,可以忽略不计,并且可以通过反向传播来学习。实际上,通过设置 <span class="math inline">\(\gamma_i={\lVert x_i \rVert}^2\)</span>,我们可以恢复 <span class="math inline">\(L_2\)</span> 归一化的特征。这很容易实现,因为规范化和缩放参数学习仅依赖于每个输入特征向量,并且不需要像批量规范化那样聚合来自其他样本的信息。在训练期间,我们使用反向传播和链式法则来计算关于缩放因子 <span class="math inline">\(\gamma\)</span> 和输入数据 <span class="math inline">\(\mathbf{x}\)</span> 的导数。</p><h2 id="pytorch-code">Pytorch Code</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch.nn.functional <span class="keyword">as</span> F</span><br><span class="line">x = F.normalize(x, p=<span class="number">2</span>, dim=<span class="number">1</span>)</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line"><span class="keyword">import</span> torch.nn.functional <span class="keyword">as</span> F</span><br><span class="line"></span><br><span class="line">In [<span class="number">54</span>]: x = 
torch.randn((<span class="number">1</span>, <span class="number">1</span>, <span class="number">10</span>)) </span><br><span class="line"></span><br><span class="line">In [<span class="number">55</span>]: out = F.normalize(x, p=<span class="number">2</span>, dim=<span class="number">2</span>) </span><br><span class="line"></span><br><span class="line">In [<span class="number">56</span>]: out </span><br><span class="line">Out[<span class="number">56</span>]:</span><br><span class="line">tensor([[[ <span class="number">0.2941</span>, <span class="number">-0.3471</span>, <span class="number">-0.0732</span>, <span class="number">0.0674</span>, <span class="number">-0.3557</span>, <span class="number">-0.1949</span>, <span class="number">0.6813</span>,</span><br><span class="line"> <span class="number">-0.1356</span>, <span class="number">-0.0153</span>, <span class="number">-0.3686</span>]]])</span><br><span class="line"></span><br><span class="line">In [<span class="number">57</span>]: x / torch.sqrt((x**<span class="number">2</span>).sum(<span class="number">2</span>)) </span><br><span class="line">Out[<span class="number">57</span>]:</span><br><span class="line">tensor([[[ <span class="number">0.2941</span>, <span class="number">-0.3471</span>, <span class="number">-0.0732</span>, <span class="number">0.0674</span>, <span class="number">-0.3557</span>, <span class="number">-0.1949</span>, <span class="number">0.6813</span>,</span><br><span class="line"> <span class="number">-0.1356</span>, <span class="number">-0.0153</span>, <span class="number">-0.3686</span>]]])</span><br></pre></td></tr></table></figure>]]></content>
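正文中的 <code>F.normalize</code> 只完成了 L2 归一化,没有体现论文中逐通道可学习的缩放参数 γ。下面用 NumPy 写一个 ParseNet 式 L2Norm 层前向传播的最小示意(不是论文的官方实现,只演示"先沿通道归一化、再乘 γ"这一步;训练时 γ 应作为可学习参数由反向传播更新):

```python
import numpy as np

def l2norm_scale(x, gamma, eps=1e-10):
    """沿通道维做 L2 归一化,再逐通道乘以缩放参数 gamma。
    x: (N, C, H, W),gamma: (C,)。eps 防止除零。"""
    norm = np.sqrt((x ** 2).sum(axis=1, keepdims=True)) + eps
    return gamma[None, :, None, None] * (x / norm)

x = np.random.default_rng(0).standard_normal((2, 4, 3, 3))
y = l2norm_scale(x, gamma=np.full(4, 20.0))   # 按论文建议,gamma 初始化为 10 或 20 量级
```

归一化后每个空间位置的通道向量范数都等于 gamma(当 gamma 各通道相同时),这正是文中"将其归一化为例如 10 或 20"的效果。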
<summary type="html">
L2 Normalization Layer
</summary>
<category term="Deep Learning" scheme="https://www.starlg.cn/categories/Deep-Learning/"/>
<category term="Deep Learning" scheme="https://www.starlg.cn/tags/Deep-Learning/"/>
</entry>
<entry>
<title>Pandas Tutorial</title>
<link href="https://www.starlg.cn/2019/06/23/Pandas-Tutorial/"/>
<id>https://www.starlg.cn/2019/06/23/Pandas-Tutorial/</id>
<published>2019-06-23T07:27:41.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h2 id="minutes-to-pandas">10 Minutes to pandas</h2><p><a href="https://pandas.pydata.org/pandas-docs/stable/10min.html" target="_blank" rel="noopener">本文原网址</a></p><p>导入所需要的包。 <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">1</span>]: <span class="keyword">import</span> pandas <span class="keyword">as</span> pd</span><br><span class="line">In [<span class="number">2</span>]: <span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line">In [<span class="number">3</span>]: <span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br></pre></td></tr></table></figure></p><h3 id="目标创建">目标创建</h3><p>通过传递一个列表创建 <code>Series</code>,让pandas创建一个默认的整型索引: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">4</span>]: s = pd.Series([<span class="number">1</span>,<span class="number">3</span>,<span class="number">5</span>,np.nan,<span class="number">6</span>,<span class="number">8</span>])</span><br><span class="line"></span><br><span class="line">In [<span class="number">5</span>]: s</span><br><span class="line">Out[<span class="number">5</span>]:</span><br><span class="line"><span class="number">0</span> <span class="number">1.0</span></span><br><span class="line"><span class="number">1</span> <span class="number">3.0</span></span><br><span 
class="line"><span class="number">2</span> <span class="number">5.0</span></span><br><span class="line"><span class="number">3</span> NaN</span><br><span class="line"><span class="number">4</span> <span class="number">6.0</span></span><br><span class="line"><span class="number">5</span> <span class="number">8.0</span></span><br><span class="line">dtype: float64</span><br></pre></td></tr></table></figure></p><p>通过传递一个<code>Numpy</code>数组创建一个<code>DataFrame</code>数据,用时间和有标签的列作为索引: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">6</span>]: dates = pd.date_range(<span class="string">'20130101'</span>, periods=<span class="number">6</span>)</span><br><span class="line"></span><br><span class="line">In [<span class="number">7</span>]: dates</span><br><span class="line">Out[<span class="number">7</span>]:</span><br><span class="line">DatetimeIndex([<span class="string">'2013-01-01'</span>, <span class="string">'2013-01-02'</span>, <span class="string">'2013-01-03'</span>, <span class="string">'2013-01-04'</span>,</span><br><span class="line"> <span class="string">'2013-01-05'</span>, <span class="string">'2013-01-06'</span>],</span><br><span class="line"> dtype=<span class="string">'datetime64[ns]'</span>, freq=<span class="string">'D'</span>)</span><br><span class="line"></span><br><span 
class="line">In [<span class="number">8</span>]: df = pd.DataFrame(np.random.randn(<span class="number">6</span>,<span class="number">4</span>), index=dates, columns=list(<span class="string">'ABCD'</span>))</span><br><span class="line"></span><br><span class="line">In [<span class="number">9</span>]: df</span><br><span class="line">Out[<span class="number">9</span>]:</span><br><span class="line"> A B C D</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-01</span> <span class="number">0.469112</span> <span class="number">-0.282863</span> <span class="number">-1.509059</span> <span class="number">-1.135632</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.212112</span> <span class="number">-0.173215</span> <span class="number">0.119209</span> <span class="number">-1.044236</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">-0.861849</span> <span class="number">-2.104569</span> <span class="number">-0.494929</span> <span class="number">1.071804</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.721555</span> <span class="number">-0.706771</span> <span class="number">-1.039575</span> <span class="number">0.271860</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-05</span> <span class="number">-0.424972</span> <span class="number">0.567020</span> <span class="number">0.276232</span> <span class="number">-1.087401</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-06</span> <span class="number">-0.673690</span> <span class="number">0.113648</span> <span 
class="number">-1.478427</span> <span class="number">0.524988</span></span><br></pre></td></tr></table></figure></p><p>通过传递一个序列对象的字典创建<code>DataFrame</code>。 <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">10</span>]: df2 = pd.DataFrame({ <span class="string">'A'</span> : <span class="number">1.</span>,</span><br><span class="line"> ....: <span class="string">'B'</span> : pd.Timestamp(<span class="string">'20130102'</span>),</span><br><span class="line"> ....: <span class="string">'C'</span> : pd.Series(<span class="number">1</span>,index=list(range(<span class="number">4</span>)),dtype=<span class="string">'float32'</span>),</span><br><span class="line"> ....: <span class="string">'D'</span> : np.array([<span class="number">3</span>] * <span class="number">4</span>,dtype=<span class="string">'int32'</span>),</span><br><span class="line"> ....: <span class="string">'E'</span> : pd.Categorical([<span class="string">"test"</span>,<span class="string">"train"</span>,<span class="string">"test"</span>,<span class="string">"train"</span>]),</span><br><span class="line"> ....: <span class="string">'F'</span> : <span class="string">'foo'</span> })</span><br><span class="line"> ....:</span><br><span class="line"></span><br><span class="line">In [<span class="number">11</span>]: df2</span><br><span class="line">Out[<span class="number">11</span>]:</span><br><span class="line"> A B C D E F</span><br><span 
class="line"><span class="number">0</span> <span class="number">1.0</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.0</span> <span class="number">3</span> test foo</span><br><span class="line"><span class="number">1</span> <span class="number">1.0</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.0</span> <span class="number">3</span> train foo</span><br><span class="line"><span class="number">2</span> <span class="number">1.0</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.0</span> <span class="number">3</span> test foo</span><br><span class="line"><span class="number">3</span> <span class="number">1.0</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.0</span> <span class="number">3</span> train foo</span><br></pre></td></tr></table></figure></p><p>得到的<code>DataFrame</code>的列有不同的类型: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">12</span>]: df2.dtypes</span><br><span class="line">Out[<span class="number">12</span>]:</span><br><span class="line">A float64</span><br><span class="line">B datetime64[ns]</span><br><span class="line">C float32</span><br><span class="line">D int32</span><br><span class="line">E category</span><br><span class="line">F object</span><br><span class="line">dtype: object</span><br></pre></td></tr></table></figure></p><h3 id="浏览数据">浏览数据</h3><p>可以看<a 
href="https://pandas.pydata.org/pandas-docs/stable/basics.html#basics" target="_blank" rel="noopener">基本章节</a>。</p><p>这里我们查看一下frame的前几行和后几行: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">14</span>]: df.head()</span><br><span class="line">Out[<span class="number">14</span>]:</span><br><span class="line"> A B C D</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-01</span> <span class="number">0.469112</span> <span class="number">-0.282863</span> <span class="number">-1.509059</span> <span class="number">-1.135632</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.212112</span> <span class="number">-0.173215</span> <span class="number">0.119209</span> <span class="number">-1.044236</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">-0.861849</span> <span class="number">-2.104569</span> <span class="number">-0.494929</span> <span class="number">1.071804</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.721555</span> <span class="number">-0.706771</span> <span class="number">-1.039575</span> <span class="number">0.271860</span></span><br><span 
class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-05</span> <span class="number">-0.424972</span> <span class="number">0.567020</span> <span class="number">0.276232</span> <span class="number">-1.087401</span></span><br><span class="line"></span><br><span class="line">In [<span class="number">15</span>]: df.tail(<span class="number">3</span>)</span><br><span class="line">Out[<span class="number">15</span>]:</span><br><span class="line"> A B C D</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.721555</span> <span class="number">-0.706771</span> <span class="number">-1.039575</span> <span class="number">0.271860</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-05</span> <span class="number">-0.424972</span> <span class="number">0.567020</span> <span class="number">0.276232</span> <span class="number">-1.087401</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-06</span> <span class="number">-0.673690</span> <span class="number">0.113648</span> <span class="number">-1.478427</span> <span class="number">0.524988</span></span><br></pre></td></tr></table></figure></p><p>显示索引和列,并且显示隐含的NumPy数据: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span 
class="line">17</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">16</span>]: df.index</span><br><span class="line">Out[<span class="number">16</span>]:</span><br><span class="line">DatetimeIndex([<span class="string">'2013-01-01'</span>, <span class="string">'2013-01-02'</span>, <span class="string">'2013-01-03'</span>, <span class="string">'2013-01-04'</span>,</span><br><span class="line"> <span class="string">'2013-01-05'</span>, <span class="string">'2013-01-06'</span>],</span><br><span class="line"> dtype=<span class="string">'datetime64[ns]'</span>, freq=<span class="string">'D'</span>)</span><br><span class="line"></span><br><span class="line">In [<span class="number">17</span>]: df.columns</span><br><span class="line">Out[<span class="number">17</span>]: Index([<span class="string">'A'</span>, <span class="string">'B'</span>, <span class="string">'C'</span>, <span class="string">'D'</span>], dtype=<span class="string">'object'</span>)</span><br><span class="line"></span><br><span class="line">In [<span class="number">18</span>]: df.values</span><br><span class="line">Out[<span class="number">18</span>]:</span><br><span class="line">array([[ <span class="number">0.4691</span>, <span class="number">-0.2829</span>, <span class="number">-1.5091</span>, <span class="number">-1.1356</span>],</span><br><span class="line"> [ <span class="number">1.2121</span>, <span class="number">-0.1732</span>, <span class="number">0.1192</span>, <span class="number">-1.0442</span>],</span><br><span class="line"> [<span class="number">-0.8618</span>, <span class="number">-2.1046</span>, <span class="number">-0.4949</span>, <span class="number">1.0718</span>],</span><br><span class="line"> [ <span class="number">0.7216</span>, <span class="number">-0.7068</span>, <span class="number">-1.0396</span>, <span class="number">0.2719</span>],</span><br><span class="line"> [<span class="number">-0.425</span> , <span class="number">0.567</span> , <span 
class="number">0.2762</span>, <span class="number">-1.0874</span>],</span><br><span class="line"> [<span class="number">-0.6737</span>, <span class="number">0.1136</span>, <span class="number">-1.4784</span>, <span class="number">0.525</span> ]])</span><br></pre></td></tr></table></figure></p><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html#pandas.DataFrame.describe" target="_blank" rel="noopener">describe()</a>显示一个快速的你的数据的统计信息: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">19</span>]: df.describe()</span><br><span class="line">Out[<span class="number">19</span>]:</span><br><span class="line"> A B C D</span><br><span class="line">count <span class="number">6.000000</span> <span class="number">6.000000</span> <span class="number">6.000000</span> <span class="number">6.000000</span></span><br><span class="line">mean <span class="number">0.073711</span> <span class="number">-0.431125</span> <span class="number">-0.687758</span> <span class="number">-0.233103</span></span><br><span class="line">std <span class="number">0.843157</span> <span class="number">0.922818</span> <span class="number">0.779887</span> <span class="number">0.973118</span></span><br><span class="line">min <span class="number">-0.861849</span> <span class="number">-2.104569</span> <span class="number">-1.509059</span> <span class="number">-1.135632</span></span><br><span class="line"><span class="number">25</span>% <span class="number">-0.611510</span> <span class="number">-0.600794</span> <span 
class="number">-1.368714</span> <span class="number">-1.076610</span></span><br><span class="line"><span class="number">50</span>% <span class="number">0.022070</span> <span class="number">-0.228039</span> <span class="number">-0.767252</span> <span class="number">-0.386188</span></span><br><span class="line"><span class="number">75</span>% <span class="number">0.658444</span> <span class="number">0.041933</span> <span class="number">-0.034326</span> <span class="number">0.461706</span></span><br><span class="line">max <span class="number">1.212112</span> <span class="number">0.567020</span> <span class="number">0.276232</span> <span class="number">1.071804</span></span><br></pre></td></tr></table></figure></p><p>转置你的数据: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">20</span>]: df.T</span><br><span class="line">Out[<span class="number">20</span>]:</span><br><span class="line"> <span class="number">2013</span><span class="number">-01</span><span class="number">-01</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-05</span> <span class="number">2013</span><span class="number">-01</span><span class="number">-06</span></span><br><span class="line">A <span class="number">0.469112</span> <span class="number">1.212112</span> <span class="number">-0.861849</span> <span class="number">0.721555</span> <span class="number">-0.424972</span> <span 
class="number">-0.673690</span></span><br><span class="line">B <span class="number">-0.282863</span> <span class="number">-0.173215</span> <span class="number">-2.104569</span> <span class="number">-0.706771</span> <span class="number">0.567020</span> <span class="number">0.113648</span></span><br><span class="line">C <span class="number">-1.509059</span> <span class="number">0.119209</span> <span class="number">-0.494929</span> <span class="number">-1.039575</span> <span class="number">0.276232</span> <span class="number">-1.478427</span></span><br><span class="line">D <span class="number">-1.135632</span> <span class="number">-1.044236</span> <span class="number">1.071804</span> <span class="number">0.271860</span> <span class="number">-1.087401</span> <span class="number">0.524988</span></span><br></pre></td></tr></table></figure></p><p>通过一个维度进行排序: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">21</span>]: df.sort_index(axis=<span class="number">1</span>, ascending=<span class="keyword">False</span>)</span><br><span class="line">Out[<span class="number">21</span>]:</span><br><span class="line"> D C B A</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-01</span> <span class="number">-1.135632</span> <span class="number">-1.509059</span> <span class="number">-0.282863</span> <span class="number">0.469112</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">-1.044236</span> <span class="number">0.119209</span> <span 
class="number">-0.173215</span> <span class="number">1.212112</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">1.071804</span> <span class="number">-0.494929</span> <span class="number">-2.104569</span> <span class="number">-0.861849</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.271860</span> <span class="number">-1.039575</span> <span class="number">-0.706771</span> <span class="number">0.721555</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-05</span> <span class="number">-1.087401</span> <span class="number">0.276232</span> <span class="number">0.567020</span> <span class="number">-0.424972</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-06</span> <span class="number">0.524988</span> <span class="number">-1.478427</span> <span class="number">0.113648</span> <span class="number">-0.673690</span></span><br></pre></td></tr></table></figure></p><p>通过数值排序: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">22</span>]: df.sort_values(by=<span class="string">'B'</span>)</span><br><span class="line">Out[<span class="number">22</span>]:</span><br><span class="line"> A B C D</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">-0.861849</span> <span 
class="number">-2.104569</span> <span class="number">-0.494929</span> <span class="number">1.071804</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.721555</span> <span class="number">-0.706771</span> <span class="number">-1.039575</span> <span class="number">0.271860</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-01</span> <span class="number">0.469112</span> <span class="number">-0.282863</span> <span class="number">-1.509059</span> <span class="number">-1.135632</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.212112</span> <span class="number">-0.173215</span> <span class="number">0.119209</span> <span class="number">-1.044236</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-06</span> <span class="number">-0.673690</span> <span class="number">0.113648</span> <span class="number">-1.478427</span> <span class="number">0.524988</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-05</span> <span class="number">-0.424972</span> <span class="number">0.567020</span> <span class="number">0.276232</span> <span class="number">-1.087401</span></span><br></pre></td></tr></table></figure></p><h3 id="选择">选择</h3><h3 id="得到数据">得到数据</h3><p>选择一个列,这会产生一个<code>Series</code>, 等同于<code>df.A</code>: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td 
class="code"><pre><span class="line">In [<span class="number">23</span>]: df[<span class="string">'A'</span>]</span><br><span class="line">Out[<span class="number">23</span>]:</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-01</span> <span class="number">0.469112</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.212112</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">-0.861849</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.721555</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-05</span> <span class="number">-0.424972</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-06</span> <span class="number">-0.673690</span></span><br><span class="line">Freq: D, Name: A, dtype: float64</span><br></pre></td></tr></table></figure></p><p>通过<code>[]</code>进行选择,这可以切开行: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">24</span>]: df[<span class="number">0</span>:<span class="number">3</span>]</span><br><span class="line">Out[<span class="number">24</span>]:</span><br><span 
class="line"> A B C D</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-01</span> <span class="number">0.469112</span> <span class="number">-0.282863</span> <span class="number">-1.509059</span> <span class="number">-1.135632</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.212112</span> <span class="number">-0.173215</span> <span class="number">0.119209</span> <span class="number">-1.044236</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">-0.861849</span> <span class="number">-2.104569</span> <span class="number">-0.494929</span> <span class="number">1.071804</span></span><br><span class="line"></span><br><span class="line">In [<span class="number">25</span>]: df[<span class="string">'20130102'</span>:<span class="string">'20130104'</span>]</span><br><span class="line">Out[<span class="number">25</span>]:</span><br><span class="line"> A B C D</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.212112</span> <span class="number">-0.173215</span> <span class="number">0.119209</span> <span class="number">-1.044236</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">-0.861849</span> <span class="number">-2.104569</span> <span class="number">-0.494929</span> <span class="number">1.071804</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.721555</span> <span class="number">-0.706771</span> <span class="number">-1.039575</span> <span 
class="number">0.271860</span></span><br></pre></td></tr></table></figure></p><h3 id="通过标签选择">通过标签选择</h3><p>更多请看<a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-label" target="_blank" rel="noopener">here</a>。</p><p>使用标签获得一个截面: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">26</span>]: df.loc[dates[<span class="number">0</span>]]</span><br><span class="line">Out[<span class="number">26</span>]:</span><br><span class="line">A <span class="number">0.469112</span></span><br><span class="line">B <span class="number">-0.282863</span></span><br><span class="line">C <span class="number">-1.509059</span></span><br><span class="line">D <span class="number">-1.135632</span></span><br><span class="line">Name: <span class="number">2013</span><span class="number">-01</span><span class="number">-01</span> <span class="number">00</span>:<span class="number">00</span>:<span class="number">00</span>, dtype: float64</span><br></pre></td></tr></table></figure></p><p>通过标签选择多个轴线: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">27</span>]: df.loc[:,[<span class="string">'A'</span>,<span class="string">'B'</span>]]</span><br><span class="line">Out[<span class="number">27</span>]:</span><br><span class="line"> A B</span><br><span class="line"><span class="number">2013</span><span 
class="number">-01</span><span class="number">-01</span> <span class="number">0.469112</span> <span class="number">-0.282863</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.212112</span> <span class="number">-0.173215</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">-0.861849</span> <span class="number">-2.104569</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.721555</span> <span class="number">-0.706771</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-05</span> <span class="number">-0.424972</span> <span class="number">0.567020</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-06</span> <span class="number">-0.673690</span> <span class="number">0.113648</span></span><br></pre></td></tr></table></figure></p><p>显示一个标签切片,并且也包括结束点: <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">28</span>]: df.loc[<span class="string">'20130102'</span>:<span class="string">'20130104'</span>,[<span class="string">'A'</span>,<span class="string">'B'</span>]]</span><br><span class="line">Out[<span class="number">28</span>]:</span><br><span class="line"> A B</span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-02</span> <span class="number">1.212112</span> <span 
class="number">-0.173215</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-03</span> <span class="number">-0.861849</span> <span class="number">-2.104569</span></span><br><span class="line"><span class="number">2013</span><span class="number">-01</span><span class="number">-04</span> <span class="number">0.721555</span> <span class="number">-0.706771</span></span><br></pre></td></tr></table></figure></p><h2 id="可视化">Visualization</h2><h3 id="基础绘画-plot">Basic plotting: plot</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">2</span>]: ts = pd.Series(np.random.randn(<span class="number">1000</span>), index=pd.date_range(<span class="string">'1/1/2000'</span>, periods=<span class="number">1000</span>))</span><br><span class="line"></span><br><span class="line">In [<span class="number">3</span>]: ts = ts.cumsum()</span><br><span class="line"></span><br><span class="line">In [<span class="number">4</span>]: ts.plot()</span><br><span class="line">Out[<span class="number">4</span>]: <matplotlib.axes._subplots.AxesSubplot at <span class="number">0x1c2ead5a20</span>></span><br></pre></td></tr></table></figure><figure><img src="./1527073248624.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">15</span>]: plt.figure();</span><br><span class="line"></span><br><span class="line">In [<span class="number">16</span>]: df.iloc[<span class="number">5</span>].plot.bar();
plt.axhline(<span class="number">0</span>, color=<span class="string">'k'</span>)</span><br><span class="line">Out[<span class="number">16</span>]: <matplotlib.lines.Line2D at <span class="number">0x1c318b4f60</span>></span><br></pre></td></tr></table></figure><figure><img src="./1527073276506.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">17</span>]: df2 = pd.DataFrame(np.random.rand(<span class="number">10</span>, <span class="number">4</span>), columns=[<span class="string">'a'</span>, <span class="string">'b'</span>, <span class="string">'c'</span>, <span class="string">'d'</span>])</span><br><span class="line"></span><br><span class="line">In [<span class="number">18</span>]: df2.plot.bar();</span><br></pre></td></tr></table></figure><figure><img src="./1527073289369.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><h3 id="直方图histograms">Histograms</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">21</span>]: df4 = pd.DataFrame({<span class="string">'a'</span>: np.random.randn(<span class="number">1000</span>) + <span class="number">1</span>, <span class="string">'b'</span>: np.random.randn(<span class="number">1000</span>),</span><br><span class="line"> ....: <span class="string">'c'</span>: np.random.randn(<span class="number">1000</span>) - <span class="number">1</span>}, columns=[<span class="string">'a'</span>, <span class="string">'b'</span>, <span
class="string">'c'</span>])</span><br><span class="line"> ....:</span><br><span class="line"></span><br><span class="line">In [<span class="number">22</span>]: plt.figure();</span><br><span class="line"></span><br><span class="line">In [<span class="number">23</span>]: df4.plot.hist(alpha=<span class="number">0.5</span>)</span><br><span class="line">Out[<span class="number">23</span>]: <matplotlib.axes._subplots.AxesSubplot at <span class="number">0x1c2f3fb2e8</span>></span><br></pre></td></tr></table></figure><figure><img src="./1527073328778.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">24</span>]: plt.figure();</span><br><span class="line"></span><br><span class="line">In [<span class="number">25</span>]: df4.plot.hist(stacked=<span class="keyword">True</span>, bins=<span class="number">20</span>)</span><br><span class="line">Out[<span class="number">25</span>]: <matplotlib.axes._subplots.AxesSubplot at <span class="number">0x1233ad2b0</span>></span><br></pre></td></tr></table></figure><figure><img src="./1527073341559.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">28</span>]: plt.figure();</span><br><span class="line"></span><br><span class="line">In [<span class="number">29</span>]: df[<span class="string">'A'</span>].diff().hist()</span><br><span class="line">Out[<span class="number">29</span>]: <matplotlib.axes._subplots.AxesSubplot at <span 
class="number">0x1c333967f0</span>></span><br></pre></td></tr></table></figure><figure><img src="./1527073360969.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">30</span>]: plt.figure()</span><br><span class="line">Out[<span class="number">30</span>]: <Figure size <span class="number">640</span>x480 <span class="keyword">with</span> <span class="number">0</span> Axes></span><br><span class="line"></span><br><span class="line">In [<span class="number">31</span>]: df.diff().hist(color=<span class="string">'k'</span>, alpha=<span class="number">0.5</span>, bins=<span class="number">50</span>)</span><br><span class="line">Out[<span class="number">31</span>]:</span><br><span class="line">array([[<matplotlib.axes._subplots.AxesSubplot object at <span class="number">0x1c2b9669e8</span>>,</span><br><span class="line"> <matplotlib.axes._subplots.AxesSubplot object at <span class="number">0x1c3184a0b8</span>>],</span><br><span class="line"> [<matplotlib.axes._subplots.AxesSubplot object at <span class="number">0x1c2e766668</span>>,</span><br><span class="line"> <matplotlib.axes._subplots.AxesSubplot object at <span class="number">0x1c319e1240</span>>]], dtype=object)</span><br></pre></td></tr></table></figure><figure><img src="./1527073377545.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span 
class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">In [<span class="number">32</span>]: data = pd.Series(np.random.randn(<span class="number">1000</span>))</span><br><span class="line"></span><br><span class="line">In [<span class="number">33</span>]: data.hist(by=np.random.randint(<span class="number">0</span>, <span class="number">4</span>, <span class="number">1000</span>), figsize=(<span class="number">6</span>, <span class="number">4</span>))</span><br><span class="line">Out[<span class="number">33</span>]:</span><br><span class="line">array([[<matplotlib.axes._subplots.AxesSubplot object at <span class="number">0x1c2f245898</span>>,</span><br><span class="line"> <matplotlib.axes._subplots.AxesSubplot object at <span class="number">0x1c2fd204a8</span>>],</span><br><span class="line"> [<matplotlib.axes._subplots.AxesSubplot object at <span class="number">0x1c2f326240</span>>,</span><br><span class="line"> <matplotlib.axes._subplots.AxesSubplot object at <span class="number">0x1c2e751b00</span>>]], dtype=object)</span><br></pre></td></tr></table></figure><figure><img src="./1527073572668.png" alt="Alt text"><figcaption>Alt text</figcaption></figure>]]></content>
<summary type="html">
Pandas Tutorial
</summary>
<category term="Python" scheme="https://www.starlg.cn/categories/Python/"/>
<category term="Python" scheme="https://www.starlg.cn/tags/Python/"/>
<category term="Pandas" scheme="https://www.starlg.cn/tags/Pandas/"/>
</entry>
<entry>
<title>变量——看见社会小趋势</title>
<link href="https://www.starlg.cn/2019/06/20/Book-BianLiang/"/>
<id>https://www.starlg.cn/2019/06/20/Book-BianLiang/</id>
<published>2019-06-20T14:25:21.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h1 id="作者简介">About the Author</h1><p>He Fan is a professor of economics at Peking University HSBC Business School and chief economist at 熵一资本. He previously served as deputy director of the Institute of World Economics and Politics at the Chinese Academy of Social Sciences. In more than 20 years of policy research he has published over 100 academic papers and more than 10 books, including 《变量》 and 《何帆大局观》. He is also a distinguished EMBA lecturer at Xiamen University, where he co-teaches the course "Macroeconomic Theory and Practice" with Professor Lu Lei, and he hosts the 得到App courses 《何帆大局观》, 《何帆的读书俱乐部》 and 《何帆报告》.</p><p>Author bio adapted from Baidu Baike.</p><h1 id="摘抄">Excerpts</h1><h2 id="第一章这样观察一棵树">Chapter 1: Observing a Tree</h2><p>2018 marked a new beginning. People living through 2018 felt the shocks hitting the Chinese economy: the China-US trade war, slowing growth, a falling stock market. They felt anxious and worried. The old signposts had vanished, and a new order had yet to emerge. The changes of the next 30 years will challenge our understanding, but history has always been a "magician," and the future will bring unexpected turns. In this chapter, I describe how to observe history the way one carefully observes a tree, and how to judge the vitality of the great tree of Chinese civilization from the "buds" it puts out each year. I also introduce two important concepts: <strong>slow variables</strong> and <strong>small trends</strong>. To perceive history is to learn to find small trends within slow variables.</p><h2 id="第二章在无人地带寻找无人机">Chapter 2: Looking for Drones in No-Man's-Land</h2><p>In 2018, the debate over technology development paths drew nationwide attention. Should China concentrate all its strength on catching up in "core technologies," or play to its strengths and develop "applied technologies"? I revisit America's experience during the Industrial Revolution and try to identify China's best strategy in the information age. The second variable I found is <strong>technology empowerment</strong>. In the innovation stage, finding application scenarios for a new technology matters more; such scenarios are easier to find in peripheral areas, and the technology must match market demand. We will go to Xinjiang to see drones, and you may well run into a robot in your hotel. The Chinese revolution succeeded by following the "mass line," and the rise of the Chinese economy must follow the "mass line" as well.</p><h2 id="第三章老兵不死">Chapter 3: Old Soldiers Never Die</h2><p>In 2018, which were the emerging industries and which the traditional ones? Which had the upper hand? Over the past few years, the internet armies resembled the nomadic tribes from the Central Asian steppe: strong in men and horses, striking like the wind. Under their assault, the moats of traditional industries seemed useless. Yet in 2018 this seemingly unstoppable force, so adept at "dimension-reduction attacks," stalled before one castle: the automobile industry, a century-old representative of industrialization. The third variable I found in 2018 is: <strong>old soldiers never die</strong>. I take you into the heartland of traditional manufacturing to see how it has withstood the internet's fierce offensive. There you will find that the veterans of traditional industries have quietly put on new uniforms, while the emerging industries are eagerly learning from the traditional ones. The boundary between emerging and traditional industries may not be as clear-cut as you imagine.</p><h2 id="第四章在菜市场遇见城市设计师">Chapter 4: Meeting a City Designer at the Vegetable Market</h2><p>In 2018, what people cared about most was whether housing prices had reached a turning point, but over the long run the turning point of urbanization deserves more attention. Top-down urbanization is no longer sustainable. The fourth variable I observed is: <strong>bottom-up forces coming to the surface</strong>. Urbanization will not stop, and there will be more metropolitan areas, but are these simply enlarged cities, or a new urban species? Not every city can keep expanding; if a city has to "shrink," how can it slim down and become healthier in the process? Future cities will be deeply shaped by the internet, and their spatial layout will differ greatly from the past. The old real-estate golden rule of "location, location, location" may no longer apply. We will see cities undergo a "beauty revolution." Where does that revolution come from? Ultimately, from ordinary people's own energy to create a better life.</p><h2 id="第五章阿那亚和范家小学">Chapter 5: 阿那亚 and 范家小学</h2><p>In 2018 we heard a lot of negative social news: the Mizhi school stabbing, the Hengyang car attack, seat-hogging on high-speed trains... Is society getting worse and worse? In fact, this is a misunderstanding. Although on the surface some people care only about their own interests, the longing for collective life has not died out. Chinese people have come to realize that only by rebuilding collective life can they better discover themselves. The fifth variable I see is: <strong>rebuilding community</strong>. In which places are people "condensing" into new communities? Are these new communities isolated islands, or will they become archipelagos? Raising children also takes a community. I will take you to a remote rural primary school. In 2018, the school with the most advanced educational philosophy I found in China was not an elite school in Beijing or Shanghai, but a village school in the mountains. Don't be surprised: the plot of social development often takes unexpected turns.</p>]]></content>
<summary type="html">
变量——看见社会小趋势
</summary>
<category term="Book" scheme="https://www.starlg.cn/categories/Book/"/>
<category term="Book" scheme="https://www.starlg.cn/tags/Book/"/>
</entry>
<entry>
<title>High-level Semantic Feature Detection A New Perspective for Pedestrian Detection</title>
<link href="https://www.starlg.cn/2019/05/29/CVPR2019-CSP/"/>
<id>https://www.starlg.cn/2019/05/29/CVPR2019-CSP/</id>
<published>2019-05-29T02:16:27.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p><a href="https://arxiv.org/abs/1904.02948v1" target="_blank" rel="noopener">Paper Link</a></p><p><a href="https://github.com/liuwei16/CSP" target="_blank" rel="noopener">Code</a></p><h1 id="简介">Introduction</h1><p>CSP: Center and Scale Prediction</p><figure><img src="./cVPR19_CSP_pipeline.png" alt="CVPR19_CSP_pipeline"><figcaption>CVPR19_CSP_pipeline</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_architecture.png" alt="20190529_CVPR19_CSP_architecture"><figcaption>20190529_CVPR19_CSP_architecture</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_annotations.png" alt="20190529_CVPR19_CSP_annotations"><figcaption>20190529_CVPR19_CSP_annotations</figcaption></figure><h1 id="方法">Method</h1><h1 id="实验">Experiments</h1><figure><img src="./20190529_CVPR19_CSP_points.png" alt="20190529_CVPR19_CSP_points"><figcaption>20190529_CVPR19_CSP_points</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_scale.png" alt="20190529_CVPR19_CSP_scale"><figcaption>20190529_CVPR19_CSP_scale</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_downsampling_factors.png" alt="20190529_CVPR19_CSP_downsampling_factors"><figcaption>20190529_CVPR19_CSP_downsampling_factors</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_multi_scale.png" alt="20190529_CVPR19_CSP_multi_scale"><figcaption>20190529_CVPR19_CSP_multi_scale</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_Caltech_new.png" alt="20190529_CVPR19_CSP_Caltech_new"><figcaption>20190529_CVPR19_CSP_Caltech_new</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_CityPersons.png" alt="20190529_CVPR19_CSP_CityPersons"><figcaption>20190529_CVPR19_CSP_CityPersons</figcaption></figure><h1 id="代码">Code</h1><h2 id="准备ground-truth">Preparing the ground truth</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span
class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">calc_gt_center</span><span class="params">(C, img_data,r=<span class="number">2</span>, down=<span class="number">4</span>,scale=<span class="string">'h'</span>,offset=True)</span>:</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">gaussian</span><span 
class="params">(kernel)</span>:</span></span><br><span class="line">sigma = ((kernel<span class="number">-1</span>) * <span class="number">0.5</span> - <span class="number">1</span>) * <span class="number">0.3</span> + <span class="number">0.8</span></span><br><span class="line">s = <span class="number">2</span>*(sigma**<span class="number">2</span>)</span><br><span class="line">dx = np.exp(-np.square(np.arange(kernel) - int(kernel / <span class="number">2</span>)) / s)</span><br><span class="line"><span class="keyword">return</span> np.reshape(dx,(<span class="number">-1</span>,<span class="number">1</span>))</span><br><span class="line">gts = np.copy(img_data[<span class="string">'bboxes'</span>])</span><br><span class="line">igs = np.copy(img_data[<span class="string">'ignoreareas'</span>])</span><br><span class="line">scale_map = np.zeros((int(C.size_train[<span class="number">0</span>]/down), int(C.size_train[<span class="number">1</span>]/down), <span class="number">2</span>))</span><br><span class="line"><span class="keyword">if</span> scale==<span class="string">'hw'</span>:</span><br><span class="line">scale_map = np.zeros((int(C.size_train[<span class="number">0</span>] / down), int(C.size_train[<span class="number">1</span>] / down), <span class="number">3</span>))</span><br><span class="line"><span class="keyword">if</span> offset:</span><br><span class="line">offset_map = np.zeros((int(C.size_train[<span class="number">0</span>] / down), int(C.size_train[<span class="number">1</span>] / down), <span class="number">3</span>))</span><br><span class="line">seman_map = np.zeros((int(C.size_train[<span class="number">0</span>]/down), int(C.size_train[<span class="number">1</span>]/down), <span class="number">3</span>))</span><br><span class="line">seman_map[:,:,<span class="number">1</span>] = <span class="number">1</span></span><br><span class="line"><span class="keyword">if</span> len(igs) > <span class="number">0</span>:</span><br><span class="line">igs 
= igs/down</span><br><span class="line"><span class="keyword">for</span> ind <span class="keyword">in</span> range(len(igs)):</span><br><span class="line">x1,y1,x2,y2 = int(igs[ind,<span class="number">0</span>]), int(igs[ind,<span class="number">1</span>]), int(np.ceil(igs[ind,<span class="number">2</span>])), int(np.ceil(igs[ind,<span class="number">3</span>]))</span><br><span class="line">seman_map[y1:y2, x1:x2,<span class="number">1</span>] = <span class="number">0</span> <span class="comment"># 被忽视的区域在第1个通道上置0</span></span><br><span class="line"><span class="keyword">if</span> len(gts)><span class="number">0</span>:</span><br><span class="line">gts = gts/down</span><br><span class="line"><span class="keyword">for</span> ind <span class="keyword">in</span> range(len(gts)):</span><br><span class="line"><span class="comment"># x1, y1, x2, y2 = int(round(gts[ind, 0])), int(round(gts[ind, 1])), int(round(gts[ind, 2])), int(round(gts[ind, 3]))</span></span><br><span class="line">x1, y1, x2, y2 = int(np.ceil(gts[ind, <span class="number">0</span>])), int(np.ceil(gts[ind, <span class="number">1</span>])), int(gts[ind, <span class="number">2</span>]), int(gts[ind, <span class="number">3</span>])</span><br><span class="line">c_x, c_y = int((gts[ind, <span class="number">0</span>] + gts[ind, <span class="number">2</span>]) / <span class="number">2</span>), int((gts[ind, <span class="number">1</span>] + gts[ind, <span class="number">3</span>]) / <span class="number">2</span>)</span><br><span class="line">dx = gaussian(x2-x1)</span><br><span class="line">dy = gaussian(y2-y1)</span><br><span class="line">gau_map = np.multiply(dy, np.transpose(dx))</span><br><span class="line">seman_map[y1:y2, x1:x2,<span class="number">0</span>] = np.maximum(seman_map[y1:y2, x1:x2,<span class="number">0</span>], gau_map) <span class="comment"># 在第0个通道上置高斯值</span></span><br><span class="line">seman_map[y1:y2, x1:x2,<span class="number">1</span>] = <span class="number">1</span> <span 
class="comment"># 前景在第1个通道上置1</span></span><br><span class="line">seman_map[c_y, c_x, <span class="number">2</span>] = <span class="number">1</span> <span class="comment"># 在第2个通道上目标中心位置1</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> scale == <span class="string">'h'</span>:</span><br><span class="line">scale_map[c_y-r:c_y+r+<span class="number">1</span>, c_x-r:c_x+r+<span class="number">1</span>, <span class="number">0</span>] = np.log(gts[ind, <span class="number">3</span>] - gts[ind, <span class="number">1</span>])</span><br><span class="line">scale_map[c_y-r:c_y+r+<span class="number">1</span>, c_x-r:c_x+r+<span class="number">1</span>, <span class="number">1</span>] = <span class="number">1</span></span><br><span class="line"><span class="keyword">elif</span> scale==<span class="string">'w'</span>:</span><br><span class="line">scale_map[c_y-r:c_y+r+<span class="number">1</span>, c_x-r:c_x+r+<span class="number">1</span>, <span class="number">0</span>] = np.log(gts[ind, <span class="number">2</span>] - gts[ind, <span class="number">0</span>])</span><br><span class="line">scale_map[c_y-r:c_y+r+<span class="number">1</span>, c_x-r:c_x+r+<span class="number">1</span>, <span class="number">1</span>] = <span class="number">1</span></span><br><span class="line"><span class="keyword">elif</span> scale==<span class="string">'hw'</span>:</span><br><span class="line">scale_map[c_y-r:c_y+r+<span class="number">1</span>, c_x-r:c_x+r+<span class="number">1</span>, <span class="number">0</span>] = np.log(gts[ind, <span class="number">3</span>] - gts[ind, <span class="number">1</span>])</span><br><span class="line">scale_map[c_y-r:c_y+r+<span class="number">1</span>, c_x-r:c_x+r+<span class="number">1</span>, <span class="number">1</span>] = np.log(gts[ind, <span class="number">2</span>] - gts[ind, <span class="number">0</span>])</span><br><span class="line">scale_map[c_y-r:c_y+r+<span class="number">1</span>, 
c_x-r:c_x+r+<span class="number">1</span>, <span class="number">2</span>] = <span class="number">1</span></span><br><span class="line"><span class="keyword">if</span> offset:</span><br><span class="line">offset_map[c_y, c_x, <span class="number">0</span>] = (gts[ind, <span class="number">1</span>] + gts[ind, <span class="number">3</span>]) / <span class="number">2</span> - c_y - <span class="number">0.5</span></span><br><span class="line">offset_map[c_y, c_x, <span class="number">1</span>] = (gts[ind, <span class="number">0</span>] + gts[ind, <span class="number">2</span>]) / <span class="number">2</span> - c_x - <span class="number">0.5</span></span><br><span class="line">offset_map[c_y, c_x, <span class="number">2</span>] = <span class="number">1</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> offset:</span><br><span class="line"><span class="keyword">return</span> seman_map,scale_map,offset_map</span><br><span class="line"><span class="keyword">else</span>:</span><br><span class="line"><span class="keyword">return</span> seman_map, scale_map</span><br></pre></td></tr></table></figure><p>seman_map has three planes: the first is the Gaussian-value mask, the second is the learning weight, and the third marks the locations of the object centers.</p><figure><img src="./20190529_CVPR19_CSP_seman_map0.png" alt="20190529_CVPR19_CSP_seman_map0"><figcaption>20190529_CVPR19_CSP_seman_map0</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_seman_map2.png" alt="20190529_CVPR19_CSP_seman_map2"><figcaption>20190529_CVPR19_CSP_seman_map2</figcaption></figure><figure><img src="./20190529_CVPR19_CSP_scale_map0.png" alt="20190529_CVPR19_CSP_scale_map0"><figcaption>20190529_CVPR19_CSP_scale_map0</figcaption></figure><h2 id="网络结构">Network architecture</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span
class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">nn_p3p4p5</span><span class="params">(img_input=None, offset=True, num_scale=<span class="number">1</span>, 
trainable=False)</span>:</span></span><br><span class="line"> bn_axis = <span class="number">3</span></span><br><span class="line"> x = ZeroPadding2D((<span class="number">3</span>, <span class="number">3</span>))(img_input)</span><br><span class="line"> x = Convolution2D(<span class="number">64</span>, (<span class="number">7</span>, <span class="number">7</span>), strides=(<span class="number">2</span>, <span class="number">2</span>), name=<span class="string">'conv1'</span>, trainable=<span class="keyword">False</span>)(x)</span><br><span class="line"> x = BatchNormalization(axis=bn_axis, name=<span class="string">'bn_conv1'</span>)(x)</span><br><span class="line"> x = Activation(<span class="string">'relu'</span>)(x)</span><br><span class="line"> x = MaxPooling2D((<span class="number">3</span>, <span class="number">3</span>), strides=(<span class="number">2</span>, <span class="number">2</span>), padding=<span class="string">'same'</span>)(x)</span><br><span class="line"> x = conv_block(x, <span class="number">3</span>, [<span class="number">64</span>, <span class="number">64</span>, <span class="number">256</span>], stage=<span class="number">2</span>, block=<span class="string">'a'</span>, strides=(<span class="number">1</span>, <span class="number">1</span>), trainable=<span class="keyword">False</span>)</span><br><span class="line"> x = identity_block(x, <span class="number">3</span>, [<span class="number">64</span>, <span class="number">64</span>, <span class="number">256</span>], stage=<span class="number">2</span>, block=<span class="string">'b'</span>, trainable=<span class="keyword">False</span>)</span><br><span class="line"> stage2 = identity_block(x, <span class="number">3</span>, [<span class="number">64</span>, <span class="number">64</span>, <span class="number">256</span>], stage=<span class="number">2</span>, block=<span class="string">'c'</span>, trainable=<span class="keyword">False</span>)</span><br><span class="line"> <span class="comment"># 
print('stage2: ', stage2._keras_shape[1:])</span></span><br><span class="line"> x = conv_block(stage2, <span class="number">3</span>, [<span class="number">128</span>, <span class="number">128</span>, <span class="number">512</span>], stage=<span class="number">3</span>, block=<span class="string">'a'</span>, trainable=trainable)</span><br><span class="line"> x = identity_block(x, <span class="number">3</span>, [<span class="number">128</span>, <span class="number">128</span>, <span class="number">512</span>], stage=<span class="number">3</span>, block=<span class="string">'b'</span>, trainable=trainable)</span><br><span class="line"> x = identity_block(x, <span class="number">3</span>, [<span class="number">128</span>, <span class="number">128</span>, <span class="number">512</span>], stage=<span class="number">3</span>, block=<span class="string">'c'</span>, trainable=trainable)</span><br><span class="line"> stage3 = identity_block(x, <span class="number">3</span>, [<span class="number">128</span>, <span class="number">128</span>, <span class="number">512</span>], stage=<span class="number">3</span>, block=<span class="string">'d'</span>, trainable=trainable)</span><br><span class="line"> <span class="comment"># print('stage3: ', stage3._keras_shape[1:])</span></span><br><span class="line"> x = conv_block(stage3, <span class="number">3</span>, [<span class="number">256</span>, <span class="number">256</span>, <span class="number">1024</span>], stage=<span class="number">4</span>, block=<span class="string">'a'</span>, trainable=trainable)</span><br><span class="line"> x = identity_block(x, <span class="number">3</span>, [<span class="number">256</span>, <span class="number">256</span>, <span class="number">1024</span>], stage=<span class="number">4</span>, block=<span class="string">'b'</span>, trainable=trainable)</span><br><span class="line"> x = identity_block(x, <span class="number">3</span>, [<span class="number">256</span>, <span class="number">256</span>, 
<span class="number">1024</span>], stage=<span class="number">4</span>, block=<span class="string">'c'</span>, trainable=trainable)</span><br><span class="line"> x = identity_block(x, <span class="number">3</span>, [<span class="number">256</span>, <span class="number">256</span>, <span class="number">1024</span>], stage=<span class="number">4</span>, block=<span class="string">'d'</span>, trainable=trainable)</span><br><span class="line"> x = identity_block(x, <span class="number">3</span>, [<span class="number">256</span>, <span class="number">256</span>, <span class="number">1024</span>], stage=<span class="number">4</span>, block=<span class="string">'e'</span>, trainable=trainable)</span><br><span class="line"> stage4 = identity_block(x, <span class="number">3</span>, [<span class="number">256</span>, <span class="number">256</span>, <span class="number">1024</span>], stage=<span class="number">4</span>, block=<span class="string">'f'</span>, trainable=trainable)</span><br><span class="line"> <span class="comment"># print('stage4: ', stage4._keras_shape[1:])</span></span><br><span class="line"> x = conv_block(stage4, <span class="number">3</span>, [<span class="number">512</span>, <span class="number">512</span>, <span class="number">2048</span>], stage=<span class="number">5</span>, block=<span class="string">'a'</span>, strides=(<span class="number">1</span>, <span class="number">1</span>), dila=(<span class="number">2</span>, <span class="number">2</span>),</span><br><span class="line"> trainable=trainable)</span><br><span class="line"> x = identity_block(x, <span class="number">3</span>, [<span class="number">512</span>, <span class="number">512</span>, <span class="number">2048</span>], stage=<span class="number">5</span>, block=<span class="string">'b'</span>, dila=(<span class="number">2</span>, <span class="number">2</span>), trainable=trainable)</span><br><span class="line"> stage5 = identity_block(x, <span class="number">3</span>, [<span 
class="number">512</span>, <span class="number">512</span>, <span class="number">2048</span>], stage=<span class="number">5</span>, block=<span class="string">'c'</span>, dila=(<span class="number">2</span>, <span class="number">2</span>), trainable=trainable)</span><br><span class="line"> <span class="comment"># print('stage5: ', stage5._keras_shape[1:])</span></span><br><span class="line"></span><br><span class="line"> P3_up = Deconvolution2D(<span class="number">256</span>, kernel_size=<span class="number">4</span>, strides=<span class="number">2</span>, padding=<span class="string">'same'</span>,</span><br><span class="line"> kernel_initializer=<span class="string">'glorot_normal'</span>, name=<span class="string">'P3up'</span>, trainable=trainable)(stage3)</span><br><span class="line"> <span class="comment"># print('P3_up: ', P3_up._keras_shape[1:])</span></span><br><span class="line"> P4_up = Deconvolution2D(<span class="number">256</span>, kernel_size=<span class="number">4</span>, strides=<span class="number">4</span>, padding=<span class="string">'same'</span>,</span><br><span class="line"> kernel_initializer=<span class="string">'glorot_normal'</span>, name=<span class="string">'P4up'</span>, trainable=trainable)(stage4)</span><br><span class="line"> <span class="comment"># print('P4_up: ', P4_up._keras_shape[1:])</span></span><br><span class="line"> P5_up = Deconvolution2D(<span class="number">256</span>, kernel_size=<span class="number">4</span>, strides=<span class="number">4</span>, padding=<span class="string">'same'</span>,</span><br><span class="line"> kernel_initializer=<span class="string">'glorot_normal'</span>, name=<span class="string">'P5up'</span>, trainable=trainable)(stage5)</span><br><span class="line"> <span class="comment"># print('P5_up: ', P5_up._keras_shape[1:])</span></span><br><span class="line"></span><br><span class="line"> P3_up = L2Normalization(gamma_init=<span class="number">10</span>, name=<span 
class="string">'P3norm'</span>)(P3_up)</span><br><span class="line"> P4_up = L2Normalization(gamma_init=<span class="number">10</span>, name=<span class="string">'P4norm'</span>)(P4_up)</span><br><span class="line"> P5_up = L2Normalization(gamma_init=<span class="number">10</span>, name=<span class="string">'P5norm'</span>)(P5_up)</span><br><span class="line"> conc = Concatenate(axis=<span class="number">-1</span>)([P3_up, P4_up, P5_up])</span><br><span class="line"></span><br><span class="line"> feat = Convolution2D(<span class="number">256</span>, (<span class="number">3</span>, <span class="number">3</span>), padding=<span class="string">'same'</span>, kernel_initializer=<span class="string">'glorot_normal'</span>, name=<span class="string">'feat'</span>,</span><br><span class="line"> trainable=trainable)(conc)</span><br><span class="line"> feat = BatchNormalization(axis=bn_axis, name=<span class="string">'bn_feat'</span>)(feat)</span><br><span class="line"> feat = Activation(<span class="string">'relu'</span>)(feat)</span><br><span class="line"></span><br><span class="line"> x_class = Convolution2D(<span class="number">1</span>, (<span class="number">1</span>, <span class="number">1</span>), activation=<span class="string">'sigmoid'</span>,</span><br><span class="line"> kernel_initializer=<span class="string">'glorot_normal'</span>,</span><br><span class="line"> bias_initializer=prior_probability_onecls(probability=<span class="number">0.01</span>),</span><br><span class="line"> name=<span class="string">'center_cls'</span>, trainable=trainable)(feat)</span><br><span class="line"> x_regr = Convolution2D(num_scale, (<span class="number">1</span>, <span class="number">1</span>), activation=<span class="string">'linear'</span>, kernel_initializer=<span class="string">'glorot_normal'</span>,</span><br><span class="line"> name=<span class="string">'height_regr'</span>, trainable=trainable)(feat)</span><br><span class="line"></span><br><span class="line"> <span 
class="keyword">if</span> offset:</span><br><span class="line"> x_offset = Convolution2D(<span class="number">2</span>, (<span class="number">1</span>, <span class="number">1</span>), activation=<span class="string">'linear'</span>, kernel_initializer=<span class="string">'glorot_normal'</span>,</span><br><span class="line"> name=<span class="string">'offset_regr'</span>, trainable=trainable)(feat)</span><br><span class="line"> <span class="keyword">return</span> [x_class, x_regr, x_offset]</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="keyword">return</span> [x_class, x_regr]</span><br></pre></td></tr></table></figure><h2 id="loss">Loss</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">cls_center</span><span class="params">(y_true, y_pred)</span>:</span></span><br><span class="line"></span><br><span class="line">classification_loss = K.binary_crossentropy(y_pred[:, :, :, <span class="number">0</span>], y_true[:, :, :, <span class="number">2</span>])</span><br><span class="line"><span class="comment"># firstly we compute the focal weight</span></span><br><span class="line">positives = y_true[:, :, :, <span class="number">2</span>]</span><br><span class="line">negatives = y_true[:, :, :, <span class="number">1</span>]-y_true[:, :, :, <span 
class="number">2</span>]</span><br><span class="line">foreground_weight = positives * (<span class="number">1.0</span> - y_pred[:, :, :, <span class="number">0</span>]) ** <span class="number">2.0</span></span><br><span class="line">background_weight = negatives * ((<span class="number">1.0</span> - y_true[:, :, :, <span class="number">0</span>])**<span class="number">4.0</span>)*(y_pred[:, :, :, <span class="number">0</span>] ** <span class="number">2.0</span>)</span><br><span class="line"></span><br><span class="line">focal_weight = foreground_weight + background_weight</span><br><span class="line"></span><br><span class="line">assigned_boxes = tf.reduce_sum(y_true[:, :, :, <span class="number">2</span>])</span><br><span class="line">class_loss = <span class="number">0.01</span>*tf.reduce_sum(focal_weight*classification_loss) / tf.maximum(<span class="number">1.0</span>, assigned_boxes)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="keyword">return</span> class_loss</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
CVPR-19 High-level Semantic Feature Detection: A New Perspective for Pedestrian Detection
</summary>
<category term="Pedestrian Detection" scheme="https://www.starlg.cn/categories/Pedestrian-Detection/"/>
<category term="Pedestrian Detection" scheme="https://www.starlg.cn/tags/Pedestrian-Detection/"/>
<category term="Deep Learning" scheme="https://www.starlg.cn/tags/Deep-Learning/"/>
</entry>
<entry>
<title>Deep High-Resolution Representation Learning for Human Pose Estimation</title>
<link href="https://www.starlg.cn/2019/05/23/CVPR19-HRNet/"/>
<id>https://www.starlg.cn/2019/05/23/CVPR19-HRNet/</id>
<published>2019-05-23T06:29:55.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p><a href="https://arxiv.org/abs/1902.09212" target="_blank" rel="noopener">Paper Link</a></p><p><a href="https://github.com/leoxiaobin/deep-high-resolution-net.pytorch" target="_blank" rel="noopener">Code</a></p><h1 id="介绍">Introduction</h1><p>High-Resolution Net (HRNet)</p><p>This paper addresses human pose estimation with a focus on learning high-resolution representations. Most existing methods recover high-resolution representations from the low-resolution output of a high-to-low resolution network. In contrast, the network proposed here maintains high-resolution representations throughout the whole process.</p><p>The authors start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks to form more stages, and connect the multi-resolution subnetworks in parallel. Multi-scale fusion is performed repeatedly, so that each high-to-low resolution representation receives information from the other parallel representations, producing rich high-resolution representations. As a result, the predicted keypoint heatmaps are more accurate and spatially more precise. The method is validated on the COCO keypoint detection dataset and the MPII Human Pose dataset.</p><h1 id="与其他方法之前的区别">Differences from Other Methods</h1><ol type="1"><li><p>The method connects high-to-low resolution subnetworks in parallel rather than in series. It therefore maintains high resolution instead of recovering it from low resolution, so the predicted heatmaps are spatially more accurate.</p></li><li><p>Existing fusion strategies aggregate low-level and high-level representations. This method instead performs repeated multi-scale fusion, boosting the high-resolution representation with the help of low-resolution representations of similar depth and the same level.</p></li></ol><img src="/2019/05/23/CVPR19-HRNet/cVPR19-HRNet.png" title="Figure 1. Architecture of the proposed HRNet"><p>Figure 1 shows the architecture of the proposed HRNet. It consists of parallel high-to-low resolution subnetworks with repeated information exchange across the multi-resolution subnetworks, i.e., multi-scale fusion. The horizontal and vertical directions correspond to the depth of the network and the scale of the feature maps, respectively.</p><img src="/2019/05/23/CVPR19-HRNet/hRNet-framework.png" title="Figure 2. Pose estimation networks relying on the high-to-low and low-to-high framework"><p>Figure 2 shows the network structures of several other methods, all of which rely on a high-to-low and low-to-high framework: (a) Hourglass; (b) Cascaded pyramid networks; (c) SimpleBaseline, where transposed convolutions are used for the low-to-high process; (d) combination with dilated convolutions.</p><h1 id="方法介绍">Method</h1><h2 id="序列多尺度子网">Sequential Multi-Resolution Subnetworks</h2><p>Let <span class="math inline">\(N_{sr}\)</span> denote the subnetwork in the s-th stage, where r is the resolution index; its resolution is <span class="math inline">\(\frac{1}{2^{r-1}}\)</span> of the resolution of the first subnetwork. A high-to-low network with S=4 stages can be written as:</p><p><span class="math display">\[N_{11} \to N_{22} \to N_{33} \to N_{44}\]</span></p><h2 id="并行多尺度子网">Parallel Multi-Resolution Subnetworks</h2><p>We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks to form new stages, and connect the multi-resolution subnetworks in parallel. Consequently, the resolutions of the parallel subnetworks of a later stage consist of the resolutions of the previous stage plus an extra lower one.</p><p>An example network structure containing 4 parallel subnetworks is given below:</p><img src="/2019/05/23/CVPR19-HRNet/hRNet-eq2.png"><h2 id="重复多尺度融合">Repeated Multi-Scale Fusion</h2><img src="/2019/05/23/CVPR19-HRNet/hRNet-exchange-unit.png" title="Figure 3. Exchange Unit"><p>Figure 3 illustrates how the exchange unit aggregates information for the high, medium, and low resolutions. The annotation on the right means: strided 3x3 = strided 3x3 convolution; up samp. 1x1 = nearest-neighbor upsampling followed by a 1x1 convolution.</p><p>We introduce exchange units across the parallel subnetworks so that each subnetwork repeatedly receives information from the other parallel subnetworks. Here is an example of the information-exchange scheme. We divide the third stage into several exchange blocks; each block consists of three parallel convolution units, with an exchange unit across the parallel units, as follows:</p><img src="/2019/05/23/CVPR19-HRNet/hRNet-eq3.png"><p>where <span class="math inline">\(C^b_{sr}\)</span> denotes the convolution unit of the r-th resolution in the b-th block of the s-th stage, and <span class="math inline">\(\varepsilon^b_s\)</span> is the corresponding exchange unit.</p><p>The exchange unit is illustrated in Figure 3.</p><img src="/2019/05/23/CVPR19-HRNet/hRNet-exchange-unit-2.png"><h2 id="热图估计">Heatmap Estimation</h2><p>We simply regress the heatmaps from the high-resolution representation output by the last exchange unit. The loss function, defined as the mean squared error, compares the predicted heatmaps with the groundtruth heatmaps. The groundtruth heatmaps are generated by applying a 2D Gaussian with a standard deviation of 1 pixel, centered on the annotated location of each keypoint.</p><h2 id="网络实例">Network Instantiation</h2><p>Two networks are used in the experiments: a small network, HRNet-W32, and a big network, HRNet-W48, where 32 and 48 denote the width (C) of the high-resolution subnetwork in the last three stages. For HRNet-W32, the widths of the other three parallel subnetworks are 64, 128, and 256; for HRNet-W48, they are 96, 192, and 384.</p><h1 id="实验">Experiments</h1><img src="/2019/05/23/CVPR19-HRNet/hRNet-COCO-validation.png" title="Figure 1."><img src="/2019/05/23/CVPR19-HRNet/hRNet-COCO-test.png" title="Figure 1."><img src="/2019/05/23/CVPR19-HRNet/hRNet-MPII.png" title="Figure 1."><img src="/2019/05/23/CVPR19-HRNet/hRNet-Qualitative-Results.png" title="Figure 1."><img src="/2019/05/23/CVPR19-HRNet/hRNet-1x2x4x.png" title="Figure 1."><img src="/2019/05/23/CVPR19-HRNet/hRNet-SimpleBaseline-performance.png" title="Figure 1."><h1 id="代码">Code</h1><h2 id="exchange-unit">Exchange Unit</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span 
class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">_make_fuse_layers</span><span class="params">(self)</span>:</span></span><br><span class="line"> <span class="keyword">if</span> self.num_branches == <span class="number">1</span>:</span><br><span class="line"> <span class="keyword">return</span> <span class="keyword">None</span></span><br><span class="line"></span><br><span class="line"> num_branches = self.num_branches</span><br><span class="line"> num_inchannels = self.num_inchannels</span><br><span class="line"> fuse_layers = []</span><br><span class="line"> <span class="keyword">for</span> i <span 
class="keyword">in</span> range(num_branches <span class="keyword">if</span> self.multi_scale_output <span class="keyword">else</span> <span class="number">1</span>):</span><br><span class="line"> fuse_layer = []</span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(num_branches):</span><br><span class="line"> <span class="keyword">if</span> j > i:</span><br><span class="line"> fuse_layer.append(</span><br><span class="line"> nn.Sequential(</span><br><span class="line"> nn.Conv2d(</span><br><span class="line"> num_inchannels[j],</span><br><span class="line"> num_inchannels[i],</span><br><span class="line"> <span class="number">1</span>, <span class="number">1</span>, <span class="number">0</span>, bias=<span class="keyword">False</span></span><br><span class="line"> ),</span><br><span class="line"> nn.BatchNorm2d(num_inchannels[i]),</span><br><span class="line"> nn.Upsample(scale_factor=<span class="number">2</span>**(j-i), mode=<span class="string">'nearest'</span>)</span><br><span class="line"> )</span><br><span class="line"> )</span><br><span class="line"> <span class="keyword">elif</span> j == i:</span><br><span class="line"> fuse_layer.append(<span class="keyword">None</span>)</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> conv3x3s = []</span><br><span class="line"> <span class="keyword">for</span> k <span class="keyword">in</span> range(i-j):</span><br><span class="line"> <span class="keyword">if</span> k == i - j - <span class="number">1</span>:</span><br><span class="line"> num_outchannels_conv3x3 = num_inchannels[i]</span><br><span class="line"> conv3x3s.append(</span><br><span class="line"> nn.Sequential(</span><br><span class="line"> nn.Conv2d(</span><br><span class="line"> num_inchannels[j],</span><br><span class="line"> num_outchannels_conv3x3,</span><br><span class="line"> <span class="number">3</span>, <span class="number">2</span>, <span 
class="number">1</span>, bias=<span class="keyword">False</span></span><br><span class="line"> ),</span><br><span class="line"> nn.BatchNorm2d(num_outchannels_conv3x3)</span><br><span class="line"> )</span><br><span class="line"> )</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> num_outchannels_conv3x3 = num_inchannels[j]</span><br><span class="line"> conv3x3s.append(</span><br><span class="line"> nn.Sequential(</span><br><span class="line"> nn.Conv2d(</span><br><span class="line"> num_inchannels[j],</span><br><span class="line"> num_outchannels_conv3x3,</span><br><span class="line"> <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>, bias=<span class="keyword">False</span></span><br><span class="line"> ),</span><br><span class="line"> nn.BatchNorm2d(num_outchannels_conv3x3),</span><br><span class="line"> nn.ReLU(<span class="keyword">True</span>)</span><br><span class="line"> )</span><br><span class="line"> )</span><br><span class="line"> fuse_layer.append(nn.Sequential(*conv3x3s))</span><br><span class="line"> fuse_layers.append(nn.ModuleList(fuse_layer))</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> nn.ModuleList(fuse_layers)</span><br></pre></td></tr></table></figure><h2 id="highresolutionmodule">HighResolutionModule</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span 
class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span 
class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span 
class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">HighResolutionModule</span><span class="params">(nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self, num_branches, blocks, num_blocks, num_inchannels,</span></span></span><br><span class="line"><span class="function"><span class="params"> num_channels, fuse_method, multi_scale_output=True)</span>:</span></span><br><span class="line"> super(HighResolutionModule, self).__init__()</span><br><span class="line"> self._check_branches(</span><br><span class="line"> num_branches, blocks, num_blocks, num_inchannels, num_channels)</span><br><span class="line"></span><br><span class="line"> self.num_inchannels = num_inchannels</span><br><span class="line"> self.fuse_method = fuse_method</span><br><span class="line"> self.num_branches = num_branches</span><br><span class="line"></span><br><span class="line"> self.multi_scale_output = multi_scale_output</span><br><span class="line"></span><br><span class="line"> self.branches = 
self._make_branches(</span><br><span class="line"> num_branches, blocks, num_blocks, num_channels)</span><br><span class="line"> self.fuse_layers = self._make_fuse_layers()</span><br><span class="line"> self.relu = nn.ReLU(<span class="keyword">True</span>)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">_check_branches</span><span class="params">(self, num_branches, blocks, num_blocks,</span></span></span><br><span class="line"><span class="function"><span class="params"> num_inchannels, num_channels)</span>:</span></span><br><span class="line"> <span class="keyword">if</span> num_branches != len(num_blocks):</span><br><span class="line"> error_msg = <span class="string">'NUM_BRANCHES({}) <> NUM_BLOCKS({})'</span>.format(</span><br><span class="line"> num_branches, len(num_blocks))</span><br><span class="line"> logger.error(error_msg)</span><br><span class="line"> <span class="keyword">raise</span> ValueError(error_msg)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> num_branches != len(num_channels):</span><br><span class="line"> error_msg = <span class="string">'NUM_BRANCHES({}) <> NUM_CHANNELS({})'</span>.format(</span><br><span class="line"> num_branches, len(num_channels))</span><br><span class="line"> logger.error(error_msg)</span><br><span class="line"> <span class="keyword">raise</span> ValueError(error_msg)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> num_branches != len(num_inchannels):</span><br><span class="line"> error_msg = <span class="string">'NUM_BRANCHES({}) <> NUM_INCHANNELS({})'</span>.format(</span><br><span class="line"> num_branches, len(num_inchannels))</span><br><span class="line"> logger.error(error_msg)</span><br><span class="line"> <span class="keyword">raise</span> ValueError(error_msg)</span><br><span class="line"></span><br><span class="line"> <span 
class="function"><span class="keyword">def</span> <span class="title">_make_one_branch</span><span class="params">(self, branch_index, block, num_blocks, num_channels,</span></span></span><br><span class="line"><span class="function"><span class="params"> stride=<span class="number">1</span>)</span>:</span></span><br><span class="line"> downsample = <span class="keyword">None</span></span><br><span class="line"> <span class="keyword">if</span> stride != <span class="number">1</span> <span class="keyword">or</span> \</span><br><span class="line"> self.num_inchannels[branch_index] != num_channels[branch_index] * block.expansion:</span><br><span class="line"> downsample = nn.Sequential(</span><br><span class="line"> nn.Conv2d(</span><br><span class="line"> self.num_inchannels[branch_index],</span><br><span class="line"> num_channels[branch_index] * block.expansion,</span><br><span class="line"> kernel_size=<span class="number">1</span>, stride=stride, bias=<span class="keyword">False</span></span><br><span class="line"> ),</span><br><span class="line"> nn.BatchNorm2d(</span><br><span class="line"> num_channels[branch_index] * block.expansion,</span><br><span class="line"> momentum=BN_MOMENTUM</span><br><span class="line"> ),</span><br><span class="line"> )</span><br><span class="line"></span><br><span class="line"> layers = []</span><br><span class="line"> layers.append(</span><br><span class="line"> block(</span><br><span class="line"> self.num_inchannels[branch_index],</span><br><span class="line"> num_channels[branch_index],</span><br><span class="line"> stride,</span><br><span class="line"> downsample</span><br><span class="line"> )</span><br><span class="line"> )</span><br><span class="line"> self.num_inchannels[branch_index] = \</span><br><span class="line"> num_channels[branch_index] * block.expansion</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">1</span>, 
num_blocks[branch_index]):</span><br><span class="line"> layers.append(</span><br><span class="line"> block(</span><br><span class="line"> self.num_inchannels[branch_index],</span><br><span class="line"> num_channels[branch_index]</span><br><span class="line"> )</span><br><span class="line"> )</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> nn.Sequential(*layers)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">_make_branches</span><span class="params">(self, num_branches, block, num_blocks, num_channels)</span>:</span></span><br><span class="line"> branches = []</span><br><span class="line"></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(num_branches):</span><br><span class="line"> branches.append(</span><br><span class="line"> self._make_one_branch(i, block, num_blocks, num_channels)</span><br><span class="line"> )</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> nn.ModuleList(branches)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">_make_fuse_layers</span><span class="params">(self)</span>:</span></span><br><span class="line"> <span class="keyword">if</span> self.num_branches == <span class="number">1</span>:</span><br><span class="line"> <span class="keyword">return</span> <span class="keyword">None</span></span><br><span class="line"></span><br><span class="line"> num_branches = self.num_branches</span><br><span class="line"> num_inchannels = self.num_inchannels</span><br><span class="line"> fuse_layers = []</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(num_branches <span class="keyword">if</span> self.multi_scale_output <span class="keyword">else</span> <span 
class="number">1</span>):</span><br><span class="line"> fuse_layer = []</span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(num_branches):</span><br><span class="line"> <span class="keyword">if</span> j > i:</span><br><span class="line"> fuse_layer.append(</span><br><span class="line"> nn.Sequential(</span><br><span class="line"> nn.Conv2d(</span><br><span class="line"> num_inchannels[j],</span><br><span class="line"> num_inchannels[i],</span><br><span class="line"> <span class="number">1</span>, <span class="number">1</span>, <span class="number">0</span>, bias=<span class="keyword">False</span></span><br><span class="line"> ),</span><br><span class="line"> nn.BatchNorm2d(num_inchannels[i]),</span><br><span class="line"> nn.Upsample(scale_factor=<span class="number">2</span>**(j-i), mode=<span class="string">'nearest'</span>)</span><br><span class="line"> )</span><br><span class="line"> )</span><br><span class="line"> <span class="keyword">elif</span> j == i:</span><br><span class="line"> fuse_layer.append(<span class="keyword">None</span>)</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> conv3x3s = []</span><br><span class="line"> <span class="keyword">for</span> k <span class="keyword">in</span> range(i-j):</span><br><span class="line"> <span class="keyword">if</span> k == i - j - <span class="number">1</span>:</span><br><span class="line"> num_outchannels_conv3x3 = num_inchannels[i]</span><br><span class="line"> conv3x3s.append(</span><br><span class="line"> nn.Sequential(</span><br><span class="line"> nn.Conv2d(</span><br><span class="line"> num_inchannels[j],</span><br><span class="line"> num_outchannels_conv3x3,</span><br><span class="line"> <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>, bias=<span class="keyword">False</span></span><br><span class="line"> ),</span><br><span class="line"> 
nn.BatchNorm2d(num_outchannels_conv3x3)</span><br><span class="line"> )</span><br><span class="line"> )</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> num_outchannels_conv3x3 = num_inchannels[j]</span><br><span class="line"> conv3x3s.append(</span><br><span class="line"> nn.Sequential(</span><br><span class="line"> nn.Conv2d(</span><br><span class="line"> num_inchannels[j],</span><br><span class="line"> num_outchannels_conv3x3,</span><br><span class="line"> <span class="number">3</span>, <span class="number">2</span>, <span class="number">1</span>, bias=<span class="keyword">False</span></span><br><span class="line"> ),</span><br><span class="line"> nn.BatchNorm2d(num_outchannels_conv3x3),</span><br><span class="line"> nn.ReLU(<span class="keyword">True</span>)</span><br><span class="line"> )</span><br><span class="line"> )</span><br><span class="line"> fuse_layer.append(nn.Sequential(*conv3x3s))</span><br><span class="line"> fuse_layers.append(nn.ModuleList(fuse_layer))</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> nn.ModuleList(fuse_layers)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">get_num_inchannels</span><span class="params">(self)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> self.num_inchannels</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> <span class="keyword">if</span> self.num_branches == <span class="number">1</span>:</span><br><span class="line"> <span class="keyword">return</span> [self.branches[<span class="number">0</span>](x[<span class="number">0</span>])]</span><br><span class="line"></span><br><span class="line"> <span class="keyword">for</span> i 
<span class="keyword">in</span> range(self.num_branches):</span><br><span class="line"> x[i] = self.branches[i](x[i])</span><br><span class="line"></span><br><span class="line"> x_fuse = []</span><br><span class="line"></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(len(self.fuse_layers)):</span><br><span class="line"> y = x[<span class="number">0</span>] <span class="keyword">if</span> i == <span class="number">0</span> <span class="keyword">else</span> self.fuse_layers[i][<span class="number">0</span>](x[<span class="number">0</span>])</span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(<span class="number">1</span>, self.num_branches):</span><br><span class="line"> <span class="keyword">if</span> i == j:</span><br><span class="line"> y = y + x[j]</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> y = y + self.fuse_layers[i][j](x[j])</span><br><span class="line"> x_fuse.append(self.relu(y))</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> x_fuse</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
CVPR-2019 Deep High-Resolution Representation Learning for Human Pose Estimation
</summary>
<category term="Deep Learning" scheme="https://www.starlg.cn/categories/Deep-Learning/"/>
<category term="Deep Learning" scheme="https://www.starlg.cn/tags/Deep-Learning/"/>
<category term="Pose Estimation" scheme="https://www.starlg.cn/tags/Pose-Estimation/"/>
</entry>
<entry>
<title>Adaptive NMS Refining Pedestrian Detection in a Crowd</title>
<link href="https://www.starlg.cn/2019/05/20/Adaptive-NMS/"/>
<id>https://www.starlg.cn/2019/05/20/Adaptive-NMS/</id>
<published>2019-05-20T09:11:31.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p><a href="http://arxiv.org/abs/1904.03629" target="_blank" rel="noopener">paper link</a></p><h1 id="简介">简介</h1><p>这篇论文提出了一个新颖的非极大值抑制(Non-Maximum Suppression, NMS)算法,以更好地筛选检测器给出的检测框。本文主要贡献:</p><ol type="1"><li>提出adaptive-NMS,该算法根据目标的密度使用动态的抑制阈值。</li><li>设计一个高效的子网络学习密度得分,这个得分可以方便地嵌入到single-stage和two-stage检测器中。</li><li>实现了CityPersons和CrowdHuman数据集上的state of the art结果。</li></ol><h1 id="motivation">Motivation</h1><img src="/2019/05/20/Adaptive-NMS/2019-05-20-greddy-NMS-results.png" title="Figure 1. greedy-NMS不同阈值的结果"><p>图1展示了不同阈值下greedy-NMS的结果。蓝色的框表示丢失的目标,红色的框表示假正例(false positives)。图(b)中的检测框是Faster R-CNN在NMS之前的检测结果。如图(c),较低的NMS阈值可能会移除正例(true positives);如图(d),较高的NMS阈值可能会增加假正例。</p><p>在本文中,作者提出了一种新的NMS算法,名为adaptive-NMS,它可以作为人群中行人检测的更有效的替代方案。直观地,高NMS阈值保留更多拥挤的实例,而低NMS阈值消除更多误报。因此,adaptive-NMS应用动态抑制策略:当实例聚集、相互遮挡时阈值上升,当实例单独出现时阈值衰减。为此,作者设计了一个辅助的、可学习的子网络来预测每个实例的自适应NMS阈值。</p><h1 id="adaptive-nms">Adaptive-NMS</h1><img src="/2019/05/20/Adaptive-NMS/adaptive-nms-pseudo-code.png" title="Figure 2. adaptive-NMS伪代码"><p>当物体处于拥挤区域时,增加NMS的阈值可以保留高覆盖率;同样,在稀疏场景下,应该去掉重复度高的候选框,因为它们很可能是假正例。</p><p><span class="math display">\[d_i := \max_{b_j \in \mathcal{G}, i \neq j} \mathrm{iou}(b_i, b_j)\]</span></p><p>目标<span class="math inline">\(i\)</span>的密度被定义为它与ground truth集合<span class="math inline">\(\mathcal{G}\)</span>中其他目标的框之间的最大IoU值。目标的密度表示其拥挤遮挡的程度。</p><p>基于这个定义,作者按如下策略更新NMS的移除步骤,</p><p><span class="math display">\[N_\mathcal{M} := \max(N_t, d_\mathcal{M})\]</span></p><img src="/2019/05/20/Adaptive-NMS/2019-05-20-greddy-NMS-eq3.png"><p><span class="math inline">\(N_\mathcal{M}\)</span>表示对于<span class="math inline">\(\mathcal{M}\)</span>的adaptive NMS阈值,<span class="math inline">\(N_t\)</span>是原始NMS阈值,<span class="math inline">\(d_{\mathcal{M}}\)</span>表示目标<span class="math inline">\(\mathcal{M}\)</span>的密度。</p><p>这个抑制策略有三个性质:</p><ol type="1"><li>当相邻的框远离<span class="math inline">\(\mathcal{M}\)</span>时,即<span class="math inline">\(\mathrm{iou}(\mathcal{M}, b_i) < N_t\)</span>,它们的处理与原始NMS保持一致。</li><li>如果<span class="math inline">\(\mathcal{M}\)</span>位于拥挤的区域,即<span class="math inline">\(d_{\mathcal{M}} > N_t\)</span>,则<span class="math inline">\(\mathcal{M}\)</span>的密度被用作adaptive NMS的阈值。</li><li>对于稀疏区域的目标,即<span class="math inline">\(d_{\mathcal{M}} \leq N_t\)</span>,NMS阈值<span class="math inline">\(N_\mathcal{M}\)</span>与原始NMS阈值相等,非常接近的框被当作假正例抑制。</li></ol><p>这个算法的具体步骤如图2所示。</p><h1 id="density-prediction">Density Prediction</h1><img src="/2019/05/20/Adaptive-NMS/CVPR19_CSP_Adaptive_NMS.png" title="Figure 3. 密度估计网络"><p>作者把密度估计作为一个回归问题:目标的密度值按其定义计算,训练时使用Smooth-L1损失函数。</p><p>一个自然的方式是像分类和定位一样,在网络顶部为这个回归添加一个并行的层。然而,用于检测的特征仅包含目标自身的信息,比如外观、语义特征和位置;而仅凭单个目标的信息很难估计其密度,密度估计需要更多来自周围目标的线索。</p><p>为了解决这个问题,作者设计了一个由三个卷积层构成的额外子网络,如图3所示。首先使用一个1x1卷积层对特征降维,然后级联降维后的特征、用于RPN分类的特征和用于RPN回归的特征,最后使用一个5x5的大尺寸卷积核作为最后的卷积层,以便把周围的信息送入网络。具体结构见图中Density subnet绿色框区域。</p><h1 id="experiments">Experiments</h1><img src="/2019/05/20/Adaptive-NMS/2019-05-20-Adaptive-NMS-Comparison.png" title="Table 2. 在CityPersons验证集上的性能"><img src="/2019/05/20/Adaptive-NMS/2019-05-20-Adaptive-NMS-compare-detection-results.png" title="Figure 5. 部分结果对比"><img src="/2019/05/20/Adaptive-NMS/2019-05-20-Adaptive-NMS-Comparison-CityPersons-test.png" title="Table 3. 在CityPersons测试集上的性能"><img src="/2019/05/20/Adaptive-NMS/2019-05-20-Adaptive-NMS-CrowdHuman-val.png" title="Table 5. 在CrowdHuman验证集上full body的测试结果">]]></content>
<summary type="html">
CVPR-19 oral Adaptive NMS Refining Pedestrian Detection in a Crowd
</summary>
<category term="Pedestrian Detection" scheme="https://www.starlg.cn/categories/Pedestrian-Detection/"/>
<category term="Pedestrian Detection" scheme="https://www.starlg.cn/tags/Pedestrian-Detection/"/>
<category term="Deep Learning" scheme="https://www.starlg.cn/tags/Deep-Learning/"/>
</entry>
<entry>
<title>Squeeze-and-Excitation Networks</title>
<link href="https://www.starlg.cn/2019/05/17/Squeeze-and-Excitation-Networks/"/>
<id>https://www.starlg.cn/2019/05/17/Squeeze-and-Excitation-Networks/</id>
<published>2019-05-17T06:57:27.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h1 id="senet介绍">SENet介绍</h1><p>卷积神经网络(CNNs)的核心模块是卷积操作,它使网络能够通过每层的局部感受野融合空间和通道的信息,来构建有信息量的特征。之前的大量工作已经研究了这种关系的空间部分,试图通过提高整个特征层次中空间编码的质量来增强CNN的表征能力。在这项工作中,作者将重点放在通道关系上,提出了一个新的架构单元,称为“Squeeze-and-Excitation”(SE)块,通过显式地建模通道之间的相互依赖性来自适应地重新校准通道维度的特征响应。</p><h1 id="squeeze-and-excitation-blocks">Squeeze-and-Excitation Blocks</h1><img src="/2019/05/17/Squeeze-and-Excitation-Networks/sENet.png"><img src="/2019/05/17/Squeeze-and-Excitation-Networks/sENet-eq1.png"><h2 id="squeeze-全局信息嵌入">Squeeze: 全局信息嵌入</h2><img src="/2019/05/17/Squeeze-and-Excitation-Networks/sENet-eq2.png"><h2 id="excition-适应性地校准">Excitation: 适应性地校准</h2><img src="/2019/05/17/Squeeze-and-Excitation-Networks/sENet-eq3.png"><img src="/2019/05/17/Squeeze-and-Excitation-Networks/sENet-eq4.png"><h1 id="实例化到resnet和inception">实例化到ResNet和Inception</h1><img src="/2019/05/17/Squeeze-and-Excitation-Networks/sE-Inception.png" title="Figure 1. SE-Inception module"><img src="/2019/05/17/Squeeze-and-Excitation-Networks/sE-ResNet.png" title="Figure 2. SE-ResNet module"><img src="/2019/05/17/Squeeze-and-Excitation-Networks/sENet-Table1.png" title="Table 1. SENet网络结构"><h1 id="代码">代码</h1><h2 id="caffe">Caffe</h2><p><a href="https://github.com/hujie-frank/SENet" target="_blank" rel="noopener">Caffe SENet</a></p><h2 id="第三方实现">第三方实现</h2><ol start="0" type="1"><li>Caffe. SE-modules are integrated with a modified ResNet-50 that uses stride 2 in the 3x3 convolution instead of the first 1x1 convolution, which obtains better performance: <a href="https://github.com/shicai/SENet-Caffe" target="_blank" rel="noopener">Repository</a>.</li><li>TensorFlow. SE-modules are integrated with a pre-activation ResNet-50 which follows the setup in <a href="https://github.com/facebook/fb.resnet.torch" target="_blank" rel="noopener">fb.resnet.torch</a>: <a href="https://github.com/ppwwyyxx/tensorpack/tree/master/examples/ResNet" target="_blank" rel="noopener">Repository</a>.</li><li>TensorFlow.
Simple Tensorflow implementation of SENets using Cifar10: <a href="https://github.com/taki0112/SENet-Tensorflow" target="_blank" rel="noopener">Repository</a>.</li><li>MatConvNet. All the released SENets are imported into <a href="https://github.com/vlfeat/matconvnet" target="_blank" rel="noopener">MatConvNet</a>: <a href="https://github.com/albanie/mcnSENets" target="_blank" rel="noopener">Repository</a>.</li><li>MXNet. SE-modules are integrated with the ResNeXt and more architectures are coming soon: <a href="https://github.com/bruinxiong/SENet.mxnet" target="_blank" rel="noopener">Repository</a>.</li><li>PyTorch. Implementation of SENets by PyTorch: <a href="https://github.com/moskomule/senet.pytorch" target="_blank" rel="noopener">Repository</a>.</li><li>Chainer. Implementation of SENets by Chainer: <a href="https://github.com/nutszebra/SENets" target="_blank" rel="noopener">Repository</a>.</li></ol><h2 id="pytorch实现se模块">Pytorch实现SE模块</h2><p>来自https://github.com/moskomule/senet.pytorch/blob/master/senet/se_module.py的se_module.py文件</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">from torch import nn</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">class SELayer(nn.Module):</span><br><span 
class="line"> def __init__(self, channel, reduction=16):</span><br><span class="line"> super(SELayer, self).__init__()</span><br><span class="line"> self.avg_pool = nn.AdaptiveAvgPool2d(1)</span><br><span class="line"> self.fc = nn.Sequential(</span><br><span class="line"> nn.Linear(channel, channel // reduction, bias=False),</span><br><span class="line"> nn.ReLU(inplace=True),</span><br><span class="line"> nn.Linear(channel // reduction, channel, bias=False),</span><br><span class="line"> nn.Sigmoid()</span><br><span class="line"> )</span><br><span class="line"></span><br><span class="line"> def forward(self, x):</span><br><span class="line"> b, c, _, _ = x.size()</span><br><span class="line"> y = self.avg_pool(x).view(b, c)</span><br><span class="line"> y = self.fc(y).view(b, c, 1, 1)</span><br><span class="line"> return x * y.expand_as(x)</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
SENet
</summary>
<category term="Deep Learning" scheme="https://www.starlg.cn/categories/Deep-Learning/"/>
<category term="Deep Learning" scheme="https://www.starlg.cn/tags/Deep-Learning/"/>
</entry>
<entry>
<title>Pillow Tutorial</title>
<link href="https://www.starlg.cn/2019/04/11/Pillow-Tutorial/"/>
<id>https://www.starlg.cn/2019/04/11/Pillow-Tutorial/</id>
<published>2019-04-11T07:56:55.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h1 id="pillow-教程">Pillow 教程</h1><p>PIL(Python Imaging Library)是Python的第三方图像处理库,其官方主页为:PIL。PIL历史悠久,原本只支持Python 2.x,后来出现了移植到Python 3的库Pillow,Pillow号称是friendly fork for PIL。</p><p>Pillow is the friendly PIL fork by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.</p><p>for Python2 <a href="http://pythonware.com/products/pil/" target="_blank" rel="noopener">Python Imaging Library (PIL)</a></p><p>for Python3 <a href="http://www.effbot.org/imagingbook/" target="_blank" rel="noopener">The Python Imaging Library Handbook</a></p><p><a href="https://python-pillow.org/" target="_blank" rel="noopener">python-pillow</a></p><p><a href="https://github.com/python-pillow/Pillow" target="_blank" rel="noopener">Pillow Github</a></p><p><a href="https://pillow.readthedocs.io/en/stable/" target="_blank" rel="noopener">Pillow Docs</a></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> Image</span><br><span class="line"><span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br><span class="line">%matplotlib inline</span><br></pre></td></tr></table></figure><p>要从文件加载图片,使用<code>Image</code>模块中的<code>open()</code>函数。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">im = Image.open(<span class="string">"lena.jpg"</span>)</span><br></pre></td></tr></table></figure><h2 id="使用image类">1.
使用Image类</h2><p>如果成功,这个函数会返回一个<code>Image</code>目标。你可以使用实例属性去测试这个文件的内容:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> __future__ <span class="keyword">import</span> print_function</span><br><span class="line">print(im.format, im.size, im.mode)</span><br></pre></td></tr></table></figure><pre><code>JPEG (512, 512) RGB</code></pre><p><code>format</code>属性判断图片的来源。如果一个图片不是从一个文件读入,它会被设置成None。<code>size</code>属性是一个2元组,包含宽和高。<code>mode</code>属性定义图片通道的数量和名字,以及像素类型和深度。通常的模式包含:“L”(luminance)表示灰度图像,”RGB“表示真彩色图像,”CMYK“表示印前(pre-press)图像</p><p>如果文件没有被打开,会给出<code>IOError</code>。</p><p>一旦你有一个<code>Image</code>类的实例,你可以使用这个类定义的方法取处理这个图片。例如,显示我们刚加载的图片:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">im.show()</span><br></pre></td></tr></table></figure><p>使用matplotlib进行可视化</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">plt.imshow(im)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1173b85518></code></pre><figure><img src="output_12_1.png" alt="png"><figcaption>png</figcaption></figure><h2 id="读取和写入图像">2. 读取和写入图像</h2><h2 id="剪切复制和合并图像">3. 
剪切、复制和合并图像</h2><p><code>Image</code>类包含允许您操作图像中的区域的方法。要从图像中提取矩形区域,请使用<code>crop()</code>方法。</p><h3 id="从一个图片中复制一个矩形区域">从一个图片中复制一个矩形区域</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">box = (<span class="number">100</span>, <span class="number">100</span>, <span class="number">400</span>, <span class="number">400</span>)</span><br><span class="line">region = im.crop(box)</span><br><span class="line">print(region.size)</span><br></pre></td></tr></table></figure><pre><code>(300, 300)</code></pre><p>这个区域使用一个4元组定义,它的坐标是(left, upper, right, lower)。Python Imaging Library在左上角使用(0, 0)的坐标系统。坐标表示像素之间的位置,因此上边例子中的区域是300x300像素。</p><h3 id="处理一个矩形区域并且把它粘贴回原来位置">处理一个矩形区域,并且把它粘贴回原来位置</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">region = region.transpose(Image.ROTATE_180)</span><br><span class="line">im.paste(region, box)</span><br><span class="line">plt.imshow(im)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f11721e2518></code></pre><figure><img src="output_20_1.png" alt="png"><figcaption>png</figcaption></figure><p>当把区域粘贴回去,区域的尺寸必须相同。除此之外,这个区域不能超越图像的边界。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span 
class="keyword">def</span> <span class="title">roll</span><span class="params">(image, delta)</span>:</span></span><br><span class="line"> <span class="string">"""Roll an image sideways."""</span></span><br><span class="line"> xsize, ysize = image.size</span><br><span class="line"></span><br><span class="line"> delta = delta % xsize</span><br><span class="line"> <span class="keyword">if</span> delta == <span class="number">0</span>: <span class="keyword">return</span> image</span><br><span class="line"></span><br><span class="line"> part1 = image.crop((<span class="number">0</span>, <span class="number">0</span>, delta, ysize))</span><br><span class="line"> part2 = image.crop((delta, <span class="number">0</span>, xsize, ysize))</span><br><span class="line"> image.paste(part1, (xsize-delta, <span class="number">0</span>, xsize, ysize))</span><br><span class="line"> image.paste(part2, (<span class="number">0</span>, <span class="number">0</span>, xsize-delta, ysize))</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> image</span><br></pre></td></tr></table></figure><h3 id="拆分和合并通道">拆分和合并通道</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">r, g, b = im.split()</span><br><span class="line">im = Image.merge(<span class="string">"RGB"</span>, (b, g, r))</span><br><span class="line">plt.imshow(im)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f11721c9b38></code></pre><figure><img src="output_24_1.png" alt="png"><figcaption>png</figcaption></figure><h3 id="保存图像">保存图像</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">im.save(<span class="string">r'out.jpg'</span>)</span><br></pre></td></tr></table></figure><h3 
id="新建图像">新建图像</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">newIm= Image.new(<span class="string">'RGB'</span>, (<span class="number">50</span>, <span class="number">50</span>), <span class="string">'red'</span>)</span><br><span class="line">plt.imshow(newIm)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1175c2d630></code></pre><figure><img src="output_28_1.png" alt="png"><figcaption>png</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 十六进制颜色</span></span><br><span class="line">newIm = Image.new(<span class="string">'RGBA'</span>,(<span class="number">100</span>, <span class="number">50</span>), <span class="string">'#FF0000'</span>)</span><br><span class="line">plt.imshow(newIm)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1175bfd8d0></code></pre><figure><img src="output_29_1.png" alt="png"><figcaption>png</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 传入元组形式的RGBA值或者RGB值</span></span><br><span class="line"><span class="comment"># 在RGB模式下,第四个参数失效,默认255,在RGBA模式下,也可只传入前三个值,A值默认255</span></span><br><span class="line">newIm = Image.new(<span class="string">'RGB'</span>,(<span class="number">200</span>, <span class="number">100</span>), (<span class="number">255</span>, <span class="number">255</span>, <span class="number">0</span>, <span class="number">120</span>))</span><br><span 
class="line">plt.imshow(newIm)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1175bd1be0></code></pre><figure><img src="output_30_1.png" alt="png"><figcaption>png</figcaption></figure><h3 id="复制图片">复制图片</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">copyIm = im.copy()</span><br><span class="line">copyIm.size</span><br></pre></td></tr></table></figure><pre><code>(512, 512)</code></pre><h3 id="调整图片大小">调整图片大小</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">width, height = copyIm.size</span><br><span class="line">resizedIm = im.resize((width, int(<span class="number">0.5</span>* height)))</span><br><span class="line">resizedIm.size</span><br></pre></td></tr></table></figure><pre><code>(512, 256)</code></pre><h2 id="几何变换">4. 
几何变换</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">im = Image.open(<span class="string">"lena.jpg"</span>)</span><br><span class="line">out = im.resize((<span class="number">128</span>, <span class="number">128</span>))</span><br><span class="line">out = im.rotate(<span class="number">45</span>) <span class="comment"># degrees counter-clockwise</span></span><br><span class="line">plt.imshow(out)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f11721370b8></code></pre><figure><img src="output_36_1.png" alt="png"><figcaption>png</figcaption></figure><h3 id="转置图像">转置图像</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">out = im.transpose(Image.FLIP_LEFT_RIGHT)</span><br><span class="line">out = im.transpose(Image.FLIP_TOP_BOTTOM)</span><br><span class="line">out = im.transpose(Image.ROTATE_90)</span><br><span class="line">out = im.transpose(Image.ROTATE_180)</span><br><span class="line">out = im.transpose(Image.ROTATE_270)</span><br><span class="line">plt.imshow(out)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1172111080></code></pre><figure><img src="output_38_1.png" alt="png"><figcaption>png</figcaption></figure><h2 id="颜色变换">5. 
颜色变换</h2><h3 id="在不同models之间转换">在不同modes之间转换</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> Image</span><br><span class="line">im = Image.open(<span class="string">"lena.jpg"</span>).convert(<span class="string">"L"</span>)</span><br><span class="line">print(im.mode)</span><br></pre></td></tr></table></figure><pre><code>L</code></pre><h2 id="图片增强">6. 图片增强</h2><h3 id="滤波器">滤波器</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> ImageFilter</span><br><span class="line">out = im.filter(ImageFilter.DETAIL)</span><br><span class="line">plt.imshow(out)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1172073860></code></pre><figure><img src="output_44_1.png" alt="png"><figcaption>png</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 高斯模糊</span></span><br><span class="line">out = im.filter(ImageFilter.GaussianBlur)</span><br><span class="line">plt.imshow(out)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f117204f470></code></pre><figure><img src="output_45_1.png" alt="png"><figcaption>png</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span
class="line"><span class="comment"># 边缘增强</span></span><br><span class="line">out = im.filter(ImageFilter.EDGE_ENHANCE)</span><br><span class="line">plt.imshow(out)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1171faf0b8></code></pre><figure><img src="output_46_1.png" alt="png"><figcaption>png</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 普通模糊</span></span><br><span class="line">im.filter(ImageFilter.BLUR)</span><br><span class="line"><span class="comment"># 找到边缘</span></span><br><span class="line">im.filter(ImageFilter.FIND_EDGES)</span><br><span class="line"><span class="comment"># 浮雕</span></span><br><span class="line">im.filter(ImageFilter.EMBOSS)</span><br><span class="line"><span class="comment"># 轮廓</span></span><br><span class="line">im.filter(ImageFilter.CONTOUR)</span><br><span class="line"><span class="comment"># 锐化</span></span><br><span class="line">im.filter(ImageFilter.SHARPEN)</span><br><span class="line"><span class="comment"># 平滑</span></span><br><span class="line">im.filter(ImageFilter.SMOOTH)</span><br><span class="line"><span class="comment"># 细节</span></span><br><span class="line">im.filter(ImageFilter.DETAIL)</span><br></pre></td></tr></table></figure><figure><img src="output_47_0.png" alt="png"><figcaption>png</figcaption></figure><h3 id="应用点变换">应用点变换</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span
class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># multiply each pixel by 1.2</span></span><br><span class="line">out = im.point(<span class="keyword">lambda</span> i: i * <span class="number">1.2</span>)</span><br></pre></td></tr></table></figure><h3 id="增强图像">增强图像</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> ImageEnhance</span><br><span class="line"></span><br><span class="line">enh = ImageEnhance.Contrast(im)</span><br><span class="line">out = enh.enhance(<span class="number">1.3</span>)</span><br><span class="line">plt.imshow(out)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1171f906a0></code></pre><figure><img src="output_51_1.png" alt="png"><figcaption>png</figcaption></figure><h2 id="图片序列">7. 
图片序列</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> Image</span><br><span class="line"></span><br><span class="line">im = Image.open(<span class="string">"chi.gif"</span>)</span><br><span class="line">im.seek(<span class="number">1</span>) <span class="comment"># skip to the second frame</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">try</span>:</span><br><span class="line"> <span class="keyword">while</span> <span class="number">1</span>:</span><br><span class="line"> im.seek(im.tell()+<span class="number">1</span>)</span><br><span class="line"> <span class="comment"># do something to im</span></span><br><span class="line"><span class="keyword">except</span> EOFError:</span><br><span class="line"> <span class="keyword">pass</span> <span class="comment"># end of sequence</span></span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">plt.imshow(im)</span><br></pre></td></tr></table></figure><pre><code><matplotlib.image.AxesImage at 0x7f1171eec208></code></pre><figure><img src="output_54_1.png" alt="png"><figcaption>png</figcaption></figure><h3 id="使用imagesequence-iterator类">使用ImageSequence Iterator类</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span 
class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> ImageSequence</span><br><span class="line">count = <span class="number">1</span></span><br><span class="line"><span class="keyword">for</span> frame <span class="keyword">in</span> ImageSequence.Iterator(im):</span><br><span class="line"> <span class="comment"># ...do something to frame...</span></span><br><span class="line"> count += <span class="number">1</span></span><br><span class="line"> <span class="keyword">if</span> count % <span class="number">10</span> == <span class="number">0</span>:</span><br><span class="line"> plt.imshow(frame)</span><br><span class="line"> plt.show()</span><br></pre></td></tr></table></figure><figure><img src="output_56_0.png" alt="png"><figcaption>png</figcaption></figure><figure><img src="output_56_1.png" alt="png"><figcaption>png</figcaption></figure><figure><img src="output_56_2.png" alt="png"><figcaption>png</figcaption></figure><h2 id="postscript-printing">8. 
Postscript printing</h2><h3 id="drawing-postscript">Drawing Postscript</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> Image</span><br><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> PSDraw</span><br><span class="line"></span><br><span class="line">im = Image.open(<span class="string">"hopper.ppm"</span>)</span><br><span class="line">title = <span class="string">"hopper"</span></span><br><span class="line">box = (<span class="number">1</span>*<span class="number">72</span>, <span class="number">2</span>*<span class="number">72</span>, <span class="number">7</span>*<span class="number">72</span>, <span class="number">10</span>*<span class="number">72</span>) <span class="comment"># in points</span></span><br><span class="line"></span><br><span class="line">ps = PSDraw.PSDraw() <span class="comment"># default is sys.stdout</span></span><br><span class="line">ps.begin_document(title)</span><br><span class="line"></span><br><span class="line"><span class="comment"># draw the image (75 dpi)</span></span><br><span class="line">ps.image(box, im, <span class="number">75</span>)</span><br><span class="line">ps.rectangle(box)</span><br><span class="line"></span><br><span class="line"><span 
class="comment"># draw title</span></span><br><span class="line">ps.setfont(<span class="string">"HelveticaNarrow-Bold"</span>, <span class="number">36</span>)</span><br><span class="line">ps.text((<span class="number">3</span>*<span class="number">72</span>, <span class="number">4</span>*<span class="number">72</span>), title)</span><br><span class="line"></span><br><span class="line">ps.end_document()</span><br></pre></td></tr></table></figure><pre><code>%!PS-Adobe-3.0save/showpage { } def%%EndComments%%BeginDocument/S { show } bind def/P { moveto show } bind def/M { moveto } bind def/X { 0 rmoveto } bind def/Y { 0 exch rmoveto } bind def/E { findfont dup maxlength dict begin { 1 index /FID ne { def } { pop pop } ifelse } forall /Encoding exch def dup /FontName exch def currentdict end definefont pop} bind def/F { findfont exch scalefont dup setfont [ exch /setfont cvx ] cvx bind def} bind def/Vm { moveto } bind def/Va { newpath arcn stroke } bind def/Vl { moveto lineto stroke } bind def/Vc { newpath 0 360 arc closepath } bind def/Vr { exch dup 0 rlineto exch dup neg 0 exch rlineto exch neg 0 rlineto 0 exch rlineto 100 div setgray fill 0 setgray } bind def/Tm matrix def/Ve { Tm currentmatrix pop translate scale newpath 0 0 .5 0 360 arc closepath Tm setmatrix} bind def/Vf { currentgray exch setgray fill setgray } bind def%%EndProloggsave226.560000 370.560000 translate0.960000 0.960000 scalegsave10 dict begin/buf 384 string def128 128 scale128 128 8[128 0 0 -128 0 128]{ currentfile buf readhexstring pop } bindfalse 3 colorimage%%%%EndBinarygrestore endgrestore72 144 M 504 720 0 Vr/PSDraw-HelveticaNarrow-Bold ISOLatin1Encoding /HelveticaNarrow-Bold E/F0 36 /PSDraw-HelveticaNarrow-Bold F216 288 M (hopper) S%%EndDocumentrestore showpage%%End</code></pre><h2 id="更多关于图像读取">9. 
More on Reading Images</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> Image</span><br><span class="line">im = Image.open(<span class="string">"hopper.ppm"</span>)</span><br><span class="line">plt.imshow(im)</span><br></pre></td></tr></table></figure><pre><code>&lt;matplotlib.image.AxesImage at 0x7f1171d5a400&gt;</code></pre><figure><img src="output_60_1.png" alt="png"><figcaption>png</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> Image</span><br><span class="line"><span class="keyword">with</span> open(<span class="string">"hopper.ppm"</span>, <span class="string">"rb"</span>) <span class="keyword">as</span> fp:</span><br><span class="line"> im = Image.open(fp)</span><br><span class="line"> plt.imshow(im)</span><br></pre></td></tr></table></figure><figure><img src="output_61_0.png" alt="png"><figcaption>png</figcaption></figure><h2 id="控制解码器">10. 
Controlling the Decoder</h2><h3 id="使用草稿draft模式读取">Reading in draft mode</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">im = Image.open(<span class="string">"lena.jpg"</span>)</span><br><span class="line">print(<span class="string">"original ="</span>, im.mode, im.size)</span><br><span class="line"></span><br><span class="line">im.draft(<span class="string">"L"</span>, (<span class="number">100</span>, <span class="number">100</span>))</span><br><span class="line">print(<span class="string">"draft ="</span>, im.mode, im.size)</span><br></pre></td></tr></table></figure><pre><code>original = RGB (512, 512)
draft = L (128, 128)</code></pre>]]></content>
<summary type="html">
Pillow Tutorial
</summary>
<category term="Python" scheme="https://www.starlg.cn/categories/Python/"/>
<category term="Python" scheme="https://www.starlg.cn/tags/Python/"/>
</entry>
<entry>
<title>Focal Loss for Dense Object Detection</title>
<link href="https://www.starlg.cn/2019/01/10/Focal-Loss/"/>
<id>https://www.starlg.cn/2019/01/10/Focal-Loss/</id>
<published>2019-01-10T13:03:52.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p>本文代码:https://github.com/facebookresearch/Detectron</p><p>本文主要解决在one-stage的密集检测器中,大量前景和背景不平衡的问题。作者通过降低被很好分类样本的权重来解决类别不平衡的问题。Focal Loss集中于在稀疏难样本(hard examples)上的训练,并且在训练中防止大量的容易的反例(easy negatives)淹没检测器。</p><ol type="1"><li>提出Focal Loss, 解决正负样本不平衡问题;</li><li>提出one-stage检测模型,RetinaNet。</li></ol><a id="more"></a><img src="/2019/01/10/Focal-Loss/focalLoss.png" title="Figure 1. Focal Loss"><p>作者提出一个新的损失函数 Focal Loss, 该方法通过添加因子<span class="math inline">\((1-p_t)^\gamma\)</span>到标准的交叉熵损失函数。设定<span class="math inline">\(\gamma>0\)</span>会减少被很好分类样本(<span class="math inline">\(p>0.5\)</span>)的相对损失,更加关注于难和错误分类的样本。</p><img src="/2019/01/10/Focal-Loss/seedVs.Accuracy.png" title="Figure 2. Speed (ms) versus accuracy (AP) on COCO test-dev."><p>图中显示了,RetinaNet检测器使用了focal loss,结果由于之前的one-stage和two-stage检测器。</p><h3 id="类别不平衡问题">类别不平衡问题</h3><p>在R-CNN这一类检测器中,类别不平衡问题通过two-stage级联和采样策略被解决。在候选区域提取阶段,Selective Search, EdgeBoxes, RPN等方法,缩小候选区域位置的数量到1~2k个,大量过滤掉了背景。在第二分类阶段,采样策略,例如固定前景背景比率(1:3),或者在线难样本挖掘(OHEM)方法被执行用于保持前景和背景的一个可控的平衡。</p><h2 id="focal-loss">Focal Loss</h2><p>Focal Loss被设计用以解决one-state目标检测器训练中大量正反例样本不平衡的问题,通常是(1:1000)。我们首先介绍二值分类的交叉熵损失:</p><p><span class="math display">\[ CE(p, y)=\begin{cases}-log(p) & y=1\\-log(1-p) & otherwise.\end{cases} \]</span></p><h3 id="balanced-cross-entropy">3.1 Balanced Cross Entropy</h3><p>一种通用的解决类别不平衡的方法是对类别1引入一个权重因子<span class="math inline">\(\alpha \in [0, 1]\)</span>,对类别class-1引入<span class="math inline">\(1-\alpha\)</span>。我们写成<span class="math inline">\(\alpha-\)</span>balanced CE loss:</p><p><span class="math display">\[CE(p_t)=-\alpha_t log(p_t)\]</span></p><h3 id="focal-loss-definition">3.2 Focal Loss Definition</h3><p>试验中显示,在训练dense detectors是遭遇的大量类别不平衡会压倒交叉熵损失。容易分类的样本会占损失的大部分,并且主导梯度。尽管<span class="math inline">\(\alpha\)</span>平衡了正负(positive/negative)样本的重要性,但是它没有区分易和难的样本(easy/hard)。相反,作者提出的更改过后的损失函数降低了容易样本的权重并且集中训练难反例样本。</p><p>我们定义focal loss:</p><p><span class="math display">\[FL(p_t)=-(1-p_t)^\gamma 
log(p_t)\]</span></p><h3 id="class-imbalance-and-two-stage-detectors">3.4 Class Imbalance and Two-stage Detectors</h3><p>Two-stage检测器通常没有使用<span class="math inline">\(\alpha-\)</span>balancing 或者我们提出的loss。代替这些,他们使用了两个机制来解决类别不平衡问题:(1) 一个两级的级联,(2) 有偏置的小批量采样。第一级联阶段的候选区域提取机制减少了大量可能的候选位置。重要的是,这些选择的候选框不是随机的,而是选择更像前景的可能位置,这样就移除了大量的容易的难反例样本(easy negatives)。第二阶段,有偏置的采样通常使用1:3比率的正负样本构建小批量(minibatches)。这个采样率类似<span class="math inline">\(\alpha-\)</span>balancing因子,并且通过采样来实现。作者提出的focal loss主要设计用于解决one-stage检测系统中的这些问题。</p><img src="/2019/01/10/Focal-Loss/ablationExperimentsForFocalLoss.png" title="Table 1. Ablation experiments for RetinaNet and Focal Loss (FL)."><img src="/2019/01/10/Focal-Loss/objectDetectionRetinanet.png" title="Table 2. Object detection single-model results (bounding box AP), vs. state-of-the-art on COCO test-dev. We">]]></content>
<summary type="html">
<p>Code for this paper: https://github.com/facebookresearch/Detectron</p>
<p>This paper addresses the extreme imbalance between foreground and background examples in one-stage dense detectors. The authors tackle the class imbalance by down-weighting well-classified examples. Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training.</p>
<ol type="1">
<li>Proposes Focal Loss to address the imbalance between positive and negative examples;</li>
<li>Proposes a one-stage detection model, RetinaNet.</li>
</ol>
</summary>
<category term="Object Detection" scheme="https://www.starlg.cn/categories/Object-Detection/"/>
<category term="Object Detection" scheme="https://www.starlg.cn/tags/Object-Detection/"/>
</entry>
<entry>
<title>Docker Installation and Usage</title>
<link href="https://www.starlg.cn/2018/10/10/docker/"/>
<id>https://www.starlg.cn/2018/10/10/docker/</id>
<published>2018-10-10T08:20:12.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h1 id="docker-安装与使用">Docker 安装与使用</h1><p>@(工具学习记录)[Docker]</p><h2 id="docker安装">1. Docker安装</h2><p>参考<a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/" target="_blank" rel="noopener">官网教程</a></p><h3 id="卸载旧的版本">卸载旧的版本</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ sudo apt-get remove docker docker-engine docker.iSET UP THE REPOSITORY</span><br></pre></td></tr></table></figure><p>SET UP THE REPOSITORY</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ sudo apt-get update</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">$ sudo apt-get install \</span><br><span class="line"> apt-transport-https \</span><br><span class="line"> ca-certificates \</span><br><span class="line"> curl \</span><br><span class="line"> software-properties-common</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">$ sudo apt-key fingerprint 0EBFCD88</span><br><span class="line"></span><br><span class="line">pub 
4096R/0EBFCD88 2017-02-22</span><br><span class="line"> Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88</span><br><span class="line">uid Docker Release (CE deb) <docker@docker.com></span><br><span class="line">sub 4096R/F273FCD8 2017-02-22</span><br></pre></td></tr></table></figure><p>x86_64/amd64 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">$ sudo add-apt-repository \</span><br><span class="line"> "deb [arch=amd64] https://download.docker.com/linux/ubuntu \</span><br><span class="line"> $(lsb_release -cs) \</span><br><span class="line"> stable"</span><br></pre></td></tr></table></figure></p><h3 id="安装-docker-ce">安装 DOCKER CE</h3><p>更新包的索引 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ sudo apt-get update</span><br></pre></td></tr></table></figure></p><p>安装最新版本的Docker CE <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ sudo apt-get install docker-ce</span><br></pre></td></tr></table></figure></p><p>安装特定版本Docker CE <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">$ apt-cache madison docker-ce</span><br><span class="line"></span><br><span class="line">docker-ce | 18.03.0~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages</span><br></pre></td></tr></table></figure></p><p>验证是否安装正确 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ sudo docker run 
hello-world</span><br></pre></td></tr></table></figure></p><h2 id="nvidia-docker-安装">2.nvidia-docker 安装</h2><p>参考<a href="https://github.com/NVIDIA/nvidia-docker" target="_blank" rel="noopener">nvidia-docker</a> nstalling version 2.0</p><p>Debian-based distributions <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \</span><br><span class="line"> sudo apt-key add -</span><br><span class="line">distribution=$(. /etc/os-release;echo $ID$VERSION_ID)</span><br><span class="line">curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \</span><br><span class="line"> sudo tee /etc/apt/sources.list.d/nvidia-docker.list</span><br><span class="line">sudo apt-get update</span><br></pre></td></tr></table></figure></p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">sudo apt-get install nvidia-docker2</span><br><span class="line">sudo pkill -SIGHUP dockerd</span><br></pre></td></tr></table></figure><h2 id="detectron-配置">3. 
detectron 配置</h2><p>参考<a href="https://github.com/facebookresearch/Detectron/blob/master/INSTALL.md" target="_blank" rel="noopener">detectron install</a> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">cd $DETECTRON/docker</span><br><span class="line">docker build -t detectron:c2-cuda9-cudnn7 .</span><br></pre></td></tr></table></figure></p><p>运行这个镜像 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">nvidia-docker run --rm -it detectron:c2-cuda9-cudnn7 python detectron/tests/test_batch_permutation_op.py</span><br></pre></td></tr></table></figure></p><h2 id="docker-基本命令">4. docker 基本命令</h2><h3 id="对容器生命周期管理">对容器生命周期管理</h3><h4 id="run">run</h4><p>docker run :创建一个新的容器并运行一个命令 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">使用docker镜像nginx:latest以后台模式启动一个容器,并将容器命名为mynginx。</span><br><span class="line"></span><br><span class="line">docker run --name mynginx -d nginx:latest</span><br><span class="line">使用镜像nginx:latest以后台模式启动一个容器,并将容器的80端口映射到主机随机端口。</span><br><span class="line"></span><br><span class="line">docker run -P -d nginx:latest</span><br><span class="line">使用镜像 nginx:latest,以后台模式启动一个容器,将容器的 80 端口映射到主机的 80 端口,主机的目录 /data 映射到容器的 /data。</span><br><span 
class="line"></span><br><span class="line">docker run -p 80:80 -v /data:/data -d nginx:latest</span><br><span class="line">绑定容器的 8080 端口,并将其映射到本地主机 127.0.0.1 的 80 端口上。</span><br><span class="line"></span><br><span class="line">$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash</span><br><span class="line">使用镜像nginx:latest以交互模式启动一个容器,在容器内执行/bin/bash命令。</span><br><span class="line"></span><br><span class="line">runoob@runoob:~$ docker run -it nginx:latest /bin/bash</span><br><span class="line">root@b8573233d675:/#</span><br></pre></td></tr></table></figure></p><h4 id="startstoprestart">start/stop/restart</h4><h4 id="kill">kill</h4><h4 id="rm">rm</h4><p>docker rm :删除一个或多少容器</p><p>语法 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker rm [OPTIONS] CONTAINER [CONTAINER...]</span><br></pre></td></tr></table></figure></p><p><strong>OPTIONS说明:</strong></p><ul><li><p>-f :通过SIGKILL信号强制删除一个运行中的容器</p></li><li><p>-l :移除容器间的网络连接,而非容器本身</p></li><li><p>-v :-v 删除与容器关联的卷docker rm :删除一个或多少容器</p></li></ul><p>语法 docker rm [OPTIONS] CONTAINER [CONTAINER...] 
OPTIONS说明:</p><p>-f :通过SIGKILL信号强制删除一个运行中的容器</p><p>-l :移除容器间的网络连接,而非容器本身</p><p>-v :-v 删除与容器关联的卷 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">强制删除容器db01、db02</span><br><span class="line"></span><br><span class="line">docker rm -f db01 db02</span><br><span class="line">移除容器nginx01对容器db01的连接,连接名db</span><br><span class="line"></span><br><span class="line">docker rm -l db </span><br><span class="line">删除容器nginx01,并删除容器挂载的数据卷</span><br><span class="line"></span><br><span class="line">docker rm -v nginx01</span><br></pre></td></tr></table></figure></p><h4 id="pauseunpause">pause/unpause</h4><h4 id="create">create</h4><h4 id="exec">exec</h4><p>docker exec :在运行的容器中执行命令 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">在容器mynginx中以交互模式执行容器内/root/runoob.sh脚本</span><br><span class="line"></span><br><span class="line">runoob@runoob:~$ docker exec -it mynginx /bin/sh /root/runoob.sh</span><br><span class="line">http://www.runoob.com/</span><br><span class="line">在容器mynginx中开启一个交互模式的终端</span><br><span class="line"></span><br><span class="line">runoob@runoob:~$ docker exec -i -t mynginx /bin/bash</span><br><span class="line">root@b1a0703e41e7:/#</span><br></pre></td></tr></table></figure></p><h3 id="commit-命令">commit 命令</h3><p>docker commit :从容器创建一个新的镜像。 - -a :提交的镜像作者; - -c :使用Dockerfile指令来创建镜像; - -m :提交时的说明文字; - -p 
:在commit时,将容器暂停。</p><p>将容器a404c6c174a2 保存为新的镜像,并添加提交人信息和说明信息。 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">runoob@runoob:~$ docker commit -a "runoob.com" -m "my apache" a404c6c174a2 mymysql:v1 </span><br><span class="line">sha256:37af1236adef1544e8886be23010b66577647a40bc02c0885a6600b33ee28057</span><br><span class="line">runoob@runoob:~$ docker images mymysql:v1</span><br><span class="line">REPOSITORY TAG IMAGE ID CREATED SIZE</span><br><span class="line">mymysql v1 37af1236adef 15 seconds ago 329 MB</span><br></pre></td></tr></table></figure></p><h3 id="容器与本地之间拷贝文件">容器与本地之间拷贝文件</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">将主机./RS-MapReduce目录拷贝到容器30026605dcfe的/home/cloudera目录下。</span><br><span class="line">docker cp RS-MapReduce 30026605dcfe:/home/cloudera</span><br><span class="line"></span><br><span class="line">将容器30026605dcfe的/home/cloudera/RS-MapReduce目录拷贝到主机的/tmp目录中。</span><br><span class="line">docker cp 30026605dcfe:/home/cloudera/RS-MapReduce /tmp/</span><br></pre></td></tr></table></figure><h2 id="学习资源">5. 学习资源</h2><ul><li><a href="http://www.runoob.com/docker/docker-tutorial.html" target="_blank" rel="noopener">runoob docker</a></li><li><a href="https://zhuanlan.zhihu.com/p/23599229" target="_blank" rel="noopener">只要一小时,零基础入门Docker</a></li></ul>]]></content>
<summary type="html">
Docker Installation and Usage
</summary>
<category term="docker" scheme="https://www.starlg.cn/tags/docker/"/>
</entry>
<entry>
<title>Conda Usage Guide</title>
<link href="https://www.starlg.cn/2018/09/09/Conda-Tutorials/"/>
<id>https://www.starlg.cn/2018/09/09/Conda-Tutorials/</id>
<published>2018-09-09T05:07:18.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<img src="/2018/09/09/Conda-Tutorials/anconda.png" title="Anconda"><p>Python渐渐成为最流行的编程语言之一,在数据分析、机器学习和深度学习等方向Python语言更是主流。Python的版本比较多,并且它的库也非常广泛,同时库和库之间存在很多依赖关系,所以在库的安装和版本的管理上很麻烦。Conda是一个管理版本和Python环境的工具,它使用起来非常容易。</p><p>首先你需要安装<a href="https://www.anaconda.com/" target="_blank" rel="noopener">Anconda</a>软件,点击链接<a href="https://www.anaconda.com/download/" target="_blank" rel="noopener">download</a>。选择对应的系统和版本类型。</p><h2 id="conda的环境管理">Conda的环境管理</h2><h3 id="创建环境">创建环境</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 创建一个名为python34的环境,指定Python版本是3.5(不用管是3.5.x,conda会为我们自动寻找3.5.x中的最新版本)</span><br><span class="line">conda create --name py35 python=3.5</span><br></pre></td></tr></table></figure><h3 id="激活环境">激活环境</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"># 安装好后,使用activate激活某个环境</span><br><span class="line">activate py35 # for Windows</span><br><span class="line">source activate py35 # for Linux & Mac</span><br><span class="line">(py35) user@user-XPS-8920:~$</span><br><span class="line"> # 激活后,会发现terminal输入的地方多了py35的字样,实际上,此时系统做的事情就是把默认2.7环境从PATH中去除,再把3.4对应的命令加入PATH</span><br><span class="line"> </span><br><span class="line">(py35) user@user-XPS-8920:~$ python --version</span><br><span class="line">Python 3.5.5 :: Anaconda, Inc.</span><br><span class="line"># 可以得到`Python 3.5.5 :: Anaconda, Inc.`,即系统已经切换到了3.5的环境</span><br></pre></td></tr></table></figure><h3 id="返回主环境">返回主环境</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># 如果想返回默认的python 2.7环境,运行</span><br><span class="line">deactivate py35 # for Windows</span><br><span class="line">source deactivate py35 # for Linux & Mac</span><br></pre></td></tr></table></figure><h3 id="删除环境">删除环境</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 删除一个已有的环境</span><br><span class="line">conda remove --name py35 --all</span><br></pre></td></tr></table></figure><h3 id="查看系统中的所有环境">查看系统中的所有环境</h3><p>用户安装的不同Python环境会放在<code>~/anaconda/envs</code>目录下。查看当前系统中已经安装了哪些环境,使用<code>conda info -e</code>。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">user@user-XPS-8920:~$ conda info -e</span><br><span class="line"># conda environments:</span><br><span class="line">#</span><br><span class="line">base * /home/user/anaconda2</span><br><span class="line">caffe /home/user/anaconda2/envs/caffe</span><br><span class="line">py35 /home/user/anaconda2/envs/py35</span><br><span class="line">tf /home/user/anaconda2/envs/tf</span><br></pre></td></tr></table></figure><h2 id="conda的包管理">Conda的包管理</h2><h3 id="安装库">安装库</h3><p>为当前环境安装库 <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># numpy</span><br><span class="line">conda install numpy</span><br><span class="line"># conda会从从远程搜索numpy的相关信息和依赖项目</span><br></pre></td></tr></table></figure></p><p>### 
查看已经安装的库</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># 查看已经安装的packages</span><br><span class="line">conda list</span><br><span class="line"># 最新版的conda是从site-packages文件夹中搜索已经安装的包,可以显示出通过各种方式安装的包</span><br></pre></td></tr></table></figure><h3 id="查看某个环境的已安装包">查看某个环境的已安装包</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 查看某个指定环境的已安装包</span><br><span class="line">conda list -n py35</span><br></pre></td></tr></table></figure><h3 id="搜索package的信息">搜索package的信息</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 查找package信息</span><br><span class="line">conda search numpy</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">Loading channels: done</span><br><span class="line"># Name Version Build Channel </span><br><span class="line">numpy 1.5.1 py26_1 pkgs/free </span><br><span class="line"></span><br><span class="line">...</span><br><span class="line"></span><br><span class="line">numpy 1.15.1 py37hec00662_0 anaconda/pkgs/main </span><br><span class="line">numpy 1.15.1 py37hec00662_0 pkgs/main</span><br></pre></td></tr></table></figure><h3 id="安装package到指定的环境">安装package到指定的环境</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span 
class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"># 安装package</span><br><span class="line">conda install -n py35 numpy</span><br><span class="line"># 如果不用-n指定环境名称,则被安装在当前活跃环境</span><br><span class="line"># 也可以通过-c指定通过某个channel安装</span><br></pre></td></tr></table></figure><h3 id="更新package">更新package</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 更新package</span><br><span class="line">conda update -n py35 numpy</span><br></pre></td></tr></table></figure><h3 id="删除package">删除package</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 删除package</span><br><span class="line">conda remove -n py35 numpy</span><br></pre></td></tr></table></figure><h3 id="更新conda">更新conda</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 更新conda,保持conda最新</span><br><span class="line">conda update conda</span><br></pre></td></tr></table></figure><h3 id="更新anaconda">更新anaconda</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 更新anaconda</span><br><span class="line">conda update anaconda</span><br></pre></td></tr></table></figure><h3 id="更新python">更新Python</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># 更新python</span><br><span class="line">conda update python</span><br><span class="line"># 
假设当前环境是python 3.5, conda会将python升级为3.5.x系列的当前最新版本</span><br></pre></td></tr></table></figure><h2 id="设置国内镜像">设置国内镜像</h2><p>因为Anaconda.org的服务器在国外,所有有些库下载缓慢,可以使用清华Anaconda镜像源。</p><p>网站地址: <a href="https://mirrors.tuna.tsinghua.edu.cn/help/anaconda/" target="_blank" rel="noopener">清华大学开源软件镜像站</a></p><h3 id="anaconda镜像">Anaconda 镜像</h3><p>Anaconda 安装包可以到 https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/ 下载。</p><p>TUNA还提供了Anaconda仓库的镜像,运行以下命令: <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/</span><br><span class="line">conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/</span><br><span class="line">conda config --set show_channel_urls yes</span><br></pre></td></tr></table></figure></p><p>即可添加 Anaconda Python 免费仓库。</p><p>运行 <code>conda install numpy</code> 测试一下吧。</p><h3 id="miniconda镜像">Miniconda 镜像</h3><p>Miniconda 是一个 Anaconda 的轻量级替代,默认只包含了 python 和 conda,但是可以通过 pip 和 conda 来安装所需要的包。</p><p>Miniconda 安装包可以到 https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/ 下载。</p>]]></content>
<summary type="html">
A Python environment management tool
</summary>
<category term="Python" scheme="https://www.starlg.cn/categories/Python/"/>
</entry>
<entry>
<title>[MLIA] Logistic Regression</title>
<link href="https://www.starlg.cn/2018/09/05/MLIA-Logistic-Regression/"/>
<id>https://www.starlg.cn/2018/09/05/MLIA-Logistic-Regression/</id>
<published>2018-09-05T14:45:58.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h1 id="logistic-regression">Logistic Regression</h1><p>本代码来自Machine Learning in Action。</p><p>想要了解更多的朋友可以参考此书。</p><h2 id="sigmoid函数">Sigmoid函数</h2><p><span class="math display">\[\sigma(z) = \frac{1}{(1+e^{-z})}\]</span></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">sigmoid</span><span class="params">(inX)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> <span class="number">1.0</span>/(<span class="number">1</span>+np.exp(-inX))</span><br><span class="line"></span><br><span class="line">z = np.linspace(<span class="number">-5</span>, <span class="number">5</span>, <span class="number">100</span>)</span><br><span class="line">y = sigmoid(z)</span><br><span class="line">plt.plot(z, y)</span><br><span class="line">plt.show()</span><br></pre></td></tr></table></figure><figure><img src="output_2_0.png" alt="png"><figcaption>png</figcaption></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">z = np.linspace(<span class="number">-60</span>, <span class="number">60</span>, <span 
class="number">100</span>)</span><br><span class="line">y = sigmoid(z)</span><br><span class="line">plt.plot(z, y)</span><br><span class="line">plt.show()</span><br></pre></td></tr></table></figure><figure><img src="output_3_0.png" alt="png"><figcaption>png</figcaption></figure><p>The sigmoid function behaves like a unit step function: when x = 0 its value is 0.5; as x increases the value approaches 1, and as x decreases it approaches 0. This property lets us turn its input into a binary classification.</p><p>To implement a logistic regression classifier, we multiply each feature by a regression coefficient, add up all the products, and feed the sum into the sigmoid function, which yields a value between 0 and 1. When the value is greater than 0.5 the sample is classified as 1; when it is less than 0.5 the sample is classified as 0.</p><p>We denote the input to the sigmoid function by z:</p><p><span class="math display">\[z=w_0x_0 + w_1x_1 + w_2x_2 + \cdots + w_n x_n\]</span></p><h2 id="sigmoid函数的导数">The Derivative of the Sigmoid Function</h2><p>The derivative of the sigmoid can be worked out as follows:</p><p><span class="math display">\[\begin{align} f^{\prime}(z) &= (\frac{1}{1+e^{-z}})^{\prime}\\ &=\frac{e^{-z}}{(1+e^{-z})^2}\\ &=\frac{1+e^{-z}-1}{(1+e^{-z})^2}\\ &=\frac{1}{(1+e^{-z})}(1-\frac{1}{(1+e^{-z})})\\ &=f(z)(1-f(z))\end{align}\]</span></p><h2 id="梯度上升法">Gradient Ascent</h2><p>Gradient ascent, as its name suggests, moves along the gradient direction to find the maximum of a function.</p><p>The gradient ascent update rule: <span class="math display">\[w:=w+\alpha \nabla_w f(w)\]</span></p><p>Gradient descent is the opposite of gradient ascent: it moves along the negative gradient direction to find the minimum of a function. The gradient descent update rule: <span class="math display">\[w:=w-\alpha \nabla_w f(w)\]</span></p><p><img src="./Fig5_2.png"></p><p>After every update, gradient ascent re-estimates the direction of movement, i.e., the gradient.</p><h2 id="logistic-回归梯度上升优化法">Optimizing Logistic Regression with Gradient Ascent</h2><h3 id="加载数据">Loading the Data</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">loadDataSet</span><span class="params">()</span>:</span></span><br><span class="line"> dataMat = []; labelMat = []</span><br><span class="line"> fr = open(<span
class="string">'testSet.txt'</span>)</span><br><span class="line"> <span class="keyword">for</span> line <span class="keyword">in</span> fr.readlines():</span><br><span class="line"> lineArr = line.strip().split()</span><br><span class="line"> dataMat.append([<span class="number">1.0</span>, float(lineArr[<span class="number">0</span>]), float(lineArr[<span class="number">1</span>])])</span><br><span class="line"> labelMat.append(int(lineArr[<span class="number">2</span>]))</span><br><span class="line"> <span class="keyword">return</span> dataMat,labelMat</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">dataArray, labelMat = loadDataSet()</span><br><span class="line">print(<span class="string">"Total: "</span>, len(dataArray))</span><br><span class="line">print(<span class="string">"The first sample: "</span>, dataArray[<span class="number">0</span>])</span><br><span class="line">print(<span class="string">"The second sample: "</span>, dataArray[<span class="number">1</span>])</span><br><span class="line">print(<span class="string">"Label: "</span>, labelMat)</span><br></pre></td></tr></table></figure><pre><code>('Total: ', 100)('The first sample: ', [1.0, -0.017612, 14.053064])('The second sample: ', [1.0, -1.395634, 4.662541])('Label: ', [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0])</code></pre><h3 id="数据集梯度上升">数据集梯度上升</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span 
class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">sigmoid</span><span class="params">(inX)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> <span class="number">1.0</span>/(<span class="number">1</span>+np.exp(-inX))</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">gradAscent</span><span class="params">(dataMatIn, classLabels)</span>:</span></span><br><span class="line"> dataMatrix = np.mat(dataMatIn) <span class="comment">#convert to NumPy matrix</span></span><br><span class="line"> labelMat = np.mat(classLabels).transpose() <span class="comment">#convert to NumPy matrix</span></span><br><span class="line"> m,n = np.shape(dataMatrix)</span><br><span class="line"> alpha = <span class="number">0.001</span></span><br><span class="line"> maxCycles = <span class="number">500</span></span><br><span class="line"> weights = np.ones((n,<span class="number">1</span>))</span><br><span class="line"> <span class="keyword">for</span> k <span class="keyword">in</span> range(maxCycles): <span class="comment">#heavy on matrix operations</span></span><br><span class="line"> h = sigmoid(dataMatrix*weights) <span class="comment">#matrix mult</span></span><br><span class="line"> error = (labelMat - h) <span class="comment">#vector subtraction</span></span><br><span class="line"> weights = weights + alpha * dataMatrix.transpose()* error <span class="comment">#matrix 
mult</span></span><br><span class="line"> <span class="keyword">return</span> weights</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">gradAscent(dataArray, labelMat)</span><br></pre></td></tr></table></figure><pre><code>matrix([[ 4.12414349], [ 0.48007329], [-0.6168482 ]])</code></pre><h3 id="绘制数据和决策边界">绘制数据和决策边界</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">plotBestFit</span><span class="params">(weights)</span>:</span></span><br><span class="line"> <span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br><span class="line"> dataMat,labelMat=loadDataSet()</span><br><span class="line"> dataArr = np.array(dataMat)</span><br><span class="line"> n = np.shape(dataArr)[<span class="number">0</span>] </span><br><span class="line"> xcord1 = []; ycord1 = []</span><br><span class="line"> xcord2 = []; ycord2 = []</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(n):</span><br><span class="line"> <span class="keyword">if</span> 
int(labelMat[i])== <span class="number">1</span>:</span><br><span class="line"> xcord1.append(dataArr[i,<span class="number">1</span>]); ycord1.append(dataArr[i,<span class="number">2</span>])</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> xcord2.append(dataArr[i,<span class="number">1</span>]); ycord2.append(dataArr[i,<span class="number">2</span>])</span><br><span class="line"> fig = plt.figure()</span><br><span class="line"> ax = fig.add_subplot(<span class="number">111</span>)</span><br><span class="line"> ax.scatter(xcord1, ycord1, s=<span class="number">30</span>, c=<span class="string">'red'</span>, marker=<span class="string">'s'</span>)</span><br><span class="line"> ax.scatter(xcord2, ycord2, s=<span class="number">30</span>, c=<span class="string">'green'</span>)</span><br><span class="line"> x = np.arange(<span class="number">-3.0</span>, <span class="number">3.0</span>, <span class="number">0.1</span>)</span><br><span class="line"> y = (-weights[<span class="number">0</span>]-weights[<span class="number">1</span>]*x)/weights[<span class="number">2</span>]</span><br><span class="line"> ax.plot(x, y)</span><br><span class="line"> plt.xlabel(<span class="string">'X1'</span>); plt.ylabel(<span class="string">'X2'</span>);</span><br><span class="line"> plt.show()</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">weights = gradAscent(dataArray, labelMat)</span><br><span class="line">plotBestFit(weights.getA())</span><br></pre></td></tr></table></figure><figure><img src="output_17_0.png" alt="png"><figcaption>png</figcaption></figure><h2 id="个epoch的随机梯度上升">Stochastic Gradient Ascent: One Epoch</h2><p>Gradient ascent must traverse the entire dataset for every coefficient update, so when the dataset has many samples the complexity and computational cost of the method become high. An improved method is called stochastic gradient ascent. Its idea is to take a single sample, compute the gradient on that sample, update the coefficients, and then move on to the next sample.</p><figure class="highlight python"><table><tr><td
class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">stocGradAscent0</span><span class="params">(dataMatrix, classLabels)</span>:</span></span><br><span class="line"> m,n = np.shape(dataMatrix)</span><br><span class="line"> alpha = <span class="number">0.01</span></span><br><span class="line"> weights = np.ones(n) <span class="comment">#initialize to all ones</span></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(m):</span><br><span class="line"> h = sigmoid(sum(dataMatrix[i]*weights))</span><br><span class="line"> error = classLabels[i] - h</span><br><span class="line"> weights = weights + alpha * error * dataMatrix[i]</span><br><span class="line"> <span class="keyword">return</span> weights</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">weights = stocGradAscent0(np.array(dataArray), labelMat)</span><br><span class="line">plotBestFit(weights)</span><br></pre></td></tr></table></figure><figure><img src="output_20_0.png" alt="png"><figcaption>png</figcaption></figure><p>The model above has made only one pass through the dataset and is still underfitting. Several passes over the dataset are needed to optimize the model well; next we run 200 iterations.</p><h2 id="个epoch的随机梯度上升-1">Stochastic Gradient Ascent: 200 Epochs</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span
class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">stocGradAscent0</span><span class="params">(dataMatrix, classLabels)</span>:</span></span><br><span class="line"> X0, X1, X2 = [], [], []</span><br><span class="line"> m,n = np.shape(dataMatrix)</span><br><span class="line"> alpha = <span class="number">0.01</span></span><br><span class="line"> weights = np.ones(n) <span class="comment">#initialize to all ones</span></span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(<span class="number">200</span>):</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(m):</span><br><span class="line"> h = sigmoid(sum(dataMatrix[i]*weights))</span><br><span class="line"> error = classLabels[i] - h</span><br><span class="line"> weights = weights + alpha * error * dataMatrix[i]</span><br><span class="line"> X0.append(weights[<span class="number">0</span>])</span><br><span class="line"> X1.append(weights[<span class="number">1</span>])</span><br><span class="line"> X2.append(weights[<span class="number">2</span>])</span><br><span class="line"> <span class="keyword">return</span> weights, X0, X1, X2</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">weights, X0, X1, X2 = stocGradAscent0(np.array(dataArray), labelMat)</span><br><span class="line">plotBestFit(weights)</span><br></pre></td></tr></table></figure><figure><img src="output_24_0.png" alt="png"><figcaption>png</figcaption></figure><h3 
id="可视化权重weights的变化">Visualizing the Weight Trajectories</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">fig, ax = plt.subplots(<span class="number">3</span>, <span class="number">1</span>, figsize=(<span class="number">10</span>, <span class="number">5</span>))</span><br><span class="line">ax[<span class="number">0</span>].plot(np.arange(len(X0)), np.array(X0))</span><br><span class="line">ax[<span class="number">1</span>].plot(np.arange(len(X1)), np.array(X1))</span><br><span class="line">ax[<span class="number">2</span>].plot(np.arange(len(X2)), np.array(X2))</span><br><span class="line">plt.show()</span><br></pre></td></tr></table></figure><figure><img src="output_26_0.png" alt="png"><figcaption>png</figcaption></figure><p>The plots above show that the algorithm is gradually converging. Because the dataset is not linearly separable, some samples can never be classified correctly, and each weight update on those samples causes the periodic fluctuations.</p><h2 id="更新过后的随机梯度上升算法">An Improved Stochastic Gradient Ascent</h2><ol type="1"><li>The learning rate alpha is adjusted after every iteration.</li><li>Samples are selected at random for each update, which reduces the periodic fluctuations.</li></ol><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">stocGradAscent1</span><span class="params">(dataMatrix, classLabels, numIter=<span
class="number">150</span>)</span>:</span></span><br><span class="line"> X0, X1, X2 = [], [], []</span><br><span class="line"> m,n = np.shape(dataMatrix)</span><br><span class="line"> weights = np.ones(n) <span class="comment">#initialize to all ones</span></span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(numIter):</span><br><span class="line"> dataIndex = range(m)</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(m):</span><br><span class="line"> alpha = <span class="number">4</span>/(<span class="number">1.0</span>+j+i)+<span class="number">0.0001</span> <span class="comment">#apha decreases with iteration, does not </span></span><br><span class="line"> randIndex = int(np.random.uniform(<span class="number">0</span>,len(dataIndex)))<span class="comment">#go to 0 because of the constant</span></span><br><span class="line"> h = sigmoid(sum(dataMatrix[randIndex]*weights))</span><br><span class="line"> error = classLabels[randIndex] - h</span><br><span class="line"> weights = weights + alpha * error * dataMatrix[randIndex]</span><br><span class="line"> X0.append(weights[<span class="number">0</span>])</span><br><span class="line"> X1.append(weights[<span class="number">1</span>])</span><br><span class="line"> X2.append(weights[<span class="number">2</span>])</span><br><span class="line"> <span class="keyword">del</span>(dataIndex[randIndex])</span><br><span class="line"> <span class="keyword">return</span> weights, X0, X1, X2</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">weights, X0, X1, X2 = stocGradAscent1(np.array(dataArray), labelMat)</span><br><span class="line">plotBestFit(weights)</span><br></pre></td></tr></table></figure><figure><img src="output_30_0.png" 
alt="png"><figcaption>png</figcaption></figure><h3 id="可视化权重weights的变化-1">Visualizing the Weight Trajectories</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">fig, ax = plt.subplots(<span class="number">3</span>, <span class="number">1</span>, figsize=(<span class="number">10</span>, <span class="number">5</span>))</span><br><span class="line">ax[<span class="number">0</span>].plot(np.arange(len(X0)), np.array(X0))</span><br><span class="line">ax[<span class="number">1</span>].plot(np.arange(len(X1)), np.array(X1))</span><br><span class="line">ax[<span class="number">2</span>].plot(np.arange(len(X2)), np.array(X2))</span><br><span class="line">plt.show()</span><br></pre></td></tr></table></figure><figure><img src="output_32_0.png" alt="png"><figcaption>png</figcaption></figure><h1 id="示例从疝气病症预测病马的死亡率">Example: Predicting the Mortality of Horses with Colic</h1><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span
class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">classifyVector</span><span class="params">(inX, weights)</span>:</span></span><br><span class="line"> prob = sigmoid(sum(inX*weights))</span><br><span class="line"> <span class="keyword">if</span> prob > <span class="number">0.5</span>: <span class="keyword">return</span> <span class="number">1.0</span></span><br><span class="line"> <span class="keyword">else</span>: <span class="keyword">return</span> <span class="number">0.0</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">colicTest</span><span class="params">()</span>:</span></span><br><span class="line"> frTrain = open(<span class="string">'horseColicTraining.txt'</span>, <span class="string">'r'</span>); frTest = open(<span class="string">'horseColicTest.txt'</span>, <span class="string">'r'</span>)</span><br><span class="line"> trainingSet = []; trainingLabels = []</span><br><span class="line"> <span class="keyword">for</span> line <span class="keyword">in</span> frTrain.readlines():</span><br><span class="line"> currLine = line.strip().split(<span class="string">'\t'</span>)</span><br><span class="line"> lineArr =[]</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">21</span>):</span><br><span class="line"> lineArr.append(float(currLine[i]))</span><br><span class="line"> trainingSet.append(lineArr)</span><br><span class="line"> trainingLabels.append(float(currLine[<span class="number">21</span>]))</span><br><span class="line"> trainWeights, X0, X1, X2 = stocGradAscent1(np.array(trainingSet), 
trainingLabels, <span class="number">1000</span>)</span><br><span class="line"> errorCount = <span class="number">0</span>; numTestVec = <span class="number">0.0</span></span><br><span class="line"> <span class="keyword">for</span> line <span class="keyword">in</span> frTest.readlines():</span><br><span class="line"> numTestVec += <span class="number">1.0</span></span><br><span class="line"> currLine = line.strip().split(<span class="string">'\t'</span>)</span><br><span class="line"> lineArr =[]</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">21</span>):</span><br><span class="line"> lineArr.append(float(currLine[i]))</span><br><span class="line"> <span class="keyword">if</span> int(classifyVector(np.array(lineArr), trainWeights))!= int(currLine[<span class="number">21</span>]):</span><br><span class="line"> errorCount += <span class="number">1</span></span><br><span class="line"> errorRate = (float(errorCount)/numTestVec)</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"the error rate of this test is: %f"</span> % errorRate</span><br><span class="line"> <span class="keyword">return</span> errorRate</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">multiTest</span><span class="params">()</span>:</span></span><br><span class="line"> numTests = <span class="number">10</span>; errorSum=<span class="number">0.0</span></span><br><span class="line"> <span class="keyword">for</span> k <span class="keyword">in</span> range(numTests):</span><br><span class="line"> errorSum += colicTest()</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"after %d iterations the average error rate is: %f"</span> % (numTests, errorSum/float(numTests))</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td 
class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">multiTest()</span><br></pre></td></tr></table></figure><pre><code>/home/tianliang/anaconda2/lib/python2.7/site-packages/ipykernel_launcher.py:2: RuntimeWarning: overflow encountered in exp
the error rate of this test is: 0.328358
the error rate of this test is: 0.432836
the error rate of this test is: 0.388060
the error rate of this test is: 0.373134
the error rate of this test is: 0.373134
the error rate of this test is: 0.447761
the error rate of this test is: 0.343284
the error rate of this test is: 0.313433
the error rate of this test is: 0.328358
the error rate of this test is: 0.462687
after 10 iterations the average error rate is: 0.379104</code></pre>]]></content>
<summary type="html">
Logistic Regression And Code
</summary>
<category term="Machine Learning" scheme="https://www.starlg.cn/categories/Machine-Learning/"/>
</entry>
<entry>
<title>CornerNet: Detecting Objects as Paired Keypoints</title>
<link href="https://www.starlg.cn/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/"/>
<id>https://www.starlg.cn/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/</id>
<published>2018-09-02T14:08:13.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h3 id="前言">Preface</h3><p>The paper CornerNet: Detecting Objects as Paired Keypoints was published at ECCV 2018. I found it very interesting, so I would like to share it with you.</p><p>Arxiv: https://arxiv.org/abs/1808.01244 Github: https://github.com/umich-vl/</p><img src="/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/Fig1.png" title="We detect an object as a pair of bounding box corners grouped together."><img src="/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/Fig2.png" title="Often there is no local evidence to determine the location of a bounding box corner. We address this issue by proposing a new type of pooling layer."><img src="/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/Fig3.png" title="Corner pooling: for each channel, we take the maximum values (red dots) in two directions (red lines), each from a separate feature map, and add the two maximums together (blue dot)."><img src="/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/Fig4.png" title="Overview of CornerNet. The backbone network is followed by two prediction modules, one for the top-left corners and the other for the bottom-right corners. Using the predictions from both modules, we locate and group the corners."><img src="/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/Fig5.png" title="“Ground-truth” heatmaps for training."><img src="/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/Fig6.png" title="The top-left corner pooling layer can be implemented very efficiently. We scan from left to right for the horizontal max-pooling and from bottom to top for the vertical max-pooling. We then add two max-pooled feature maps."><img src="/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/Fig7.png" title="The prediction module starts with a modified residual block, in which we replace the first convolution module with our corner pooling module. The modified residual block is then followed by a convolution module.
We have multiple branches for predict- ing the heatmaps, embeddings and offsets"><img src="/2018/09/02/CornerNet-Detection-Objects-as-Paired-Keypoints/Fig8.png" title="Example bounding box predictions overlaid on predicted heatmaps of corners.">]]></content>
<summary type="html">
<h3 id="前言">Preface</h3>
<p>The paper CornerNet: Detecting Objects as Paired Keypoints was published at ECCV 2018. I found it very interesting, so I would like to share it with you.</p>
<p>Arxiv: https://arxiv.o
</summary>
<category term="Object Detection" scheme="https://www.starlg.cn/categories/Object-Detection/"/>
<category term="Object Detection" scheme="https://www.starlg.cn/tags/Object-Detection/"/>
</entry>
<entry>
<title>Pedestrian Detection Paper Collection</title>
<link href="https://www.starlg.cn/2018/08/17/Pedestrian-Detection-Sources/"/>
<id>https://www.starlg.cn/2018/08/17/Pedestrian-Detection-Sources/</id>
<published>2018-08-17T03:26:22.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<h2 id="同步github地址">Synced GitHub Repository</h2><p>This document is kept in sync on GitHub: <a href="https://github.com/xingkongliang/Pedestrian-Detection" target="_blank" rel="noopener">here</a></p><h2 id="相关科研工作者">Related Researchers</h2><ul><li><a href="https://scholar.google.com/citations?user=a8Y2OJMAAAAJ&hl=zh-CN" target="_blank" rel="noopener">Piotr Dollár scholar</a></li><li><a href="https://pdollar.github.io/" target="_blank" rel="noopener">Piotr Dollár homepage</a></li><li><a href="https://scholar.google.com/citations?hl=zh-CN&user=pOSMWfQAAAAJ&view_op=list_works&sortby=pubdate" target="_blank" rel="noopener">Shanshan Zhang scholar</a></li><li><a href="https://sites.google.com/site/shanshanzhangshomepage/" target="_blank" rel="noopener">Shanshan Zhang homepage</a></li><li><a href="https://scholar.google.com/citations?user=pw_0Z_UAAAAJ&%20hl=en" target="_blank" rel="noopener">Wanli Ouyang scholar</a></li><li><a href="http://www.ee.cuhk.edu.hk/~wlouyang/" target="_blank" rel="noopener">Wanli Ouyang homepage</a></li><li><a href="https://liuwei16.github.io/" target="_blank" rel="noopener">Liu Wei homepage</a></li></ul><h2 id="开放的代码">Open-Source Code</h2><ul><li><p><a href="https://github.com/Leotju/MGAN" target="_blank" rel="noopener"><strong>Leotju/MGAN</strong></a> [ICCV-2019] Mask-Guided Attention Network for Occluded Pedestrian Detection [<a href="http://openaccess.thecvf.com/content_ICCV_2019/papers/Pang_Mask-Guided_Attention_Network_for_Occluded_Pedestrian_Detection_ICCV_2019_paper.pdf" target="_blank" rel="noopener">paper</a>]</p></li><li><p><a href="https://github.com/lw396285v/CSP-pedestrian-detection-in-pytorch" target="_blank" rel="noopener"><strong>lw396285v/CSP-pedestrian-detection-in-pytorch (unofficial implementation)</strong></a> [CVPR-2019] High-level Semantic Feature Detection: A New Perspective for Pedestrian Detection [<a href="https://arxiv.org/abs/1904.02948" target="_blank" rel="noopener">paper</a>]</p></li><li><p><a href="https://github.com/liuwei16/CSP" target="_blank" rel="noopener"><strong>liuwei16/CSP</strong></a>
[CVPR-2019] High-level Semantic Feature Detection: A New Perspective for Pedestrian Detection [<a href="https://arxiv.org/abs/1904.02948" target="_blank" rel="noopener">paper</a>]</p></li><li><p><a href="https://github.com/liuwei16/ALFNet" target="_blank" rel="noopener"><strong>liuwei16/ALFNet</strong></a> [ECCV-2018] Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting</p></li><li><p><a href="https://github.com/rainofmine/Bi-box_Regression" target="_blank" rel="noopener"><strong>rainofmine/Bi-box_Regression (unofficial implementation)</strong></a> [ECCV-2018] Bi-box Regression for Pedestrian Detection and Occlusion Estimation</p></li><li><p><a href="https://github.com/rainofmine/Repulsion_Loss" target="_blank" rel="noopener"><strong>rainofmine/Repulsion_Loss (unofficial implementation)</strong></a> [CVPR-2018] Repulsion Loss: Detecting Pedestrians in a Crowd</p></li><li><p><a href="https://github.com/garrickbrazil/SDS-RCNN" target="_blank" rel="noopener"><strong>garrickbrazil/SDS-RCNN</strong></a> [ICCV-2017] Illuminating Pedestrians via Simultaneous Detection & Segmentation</p></li><li><p><a href="https://github.com/zhangliliang/RPN_BF" target="_blank" rel="noopener"><strong>zhangliliang/RPN_BF</strong></a> [ECCV-2016] Is Faster R-CNN Doing Well for Pedestrian Detection?</p></li></ul><h2 id="paper-list">Paper List</h2><ul><li>[ICCV-2019] Semi-Supervised Pedestrian Instance Synthesis and Detection With Mutual Reinforcement [<a href="http://openaccess.thecvf.com/content_ICCV_2019/papers/Wu_Semi-Supervised_Pedestrian_Instance_Synthesis_and_Detection_With_Mutual_Reinforcement_ICCV_2019_paper.pdf" target="_blank" rel="noopener">paper</a>]</li><li>[ICCV-2019] Weakly Aligned Cross-Modal Learning for Multispectral Pedestrian Detection [<a href="http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhang_Weakly_Aligned_Cross-Modal_Learning_for_Multispectral_Pedestrian_Detection_ICCV_2019_paper.pdf" target="_blank" rel="noopener">paper</a>]</li><li>[ICCV-2019] Discriminative Feature
Transformation for Occluded Pedestrian Detection [<a href="http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhou_Discriminative_Feature_Transformation_for_Occluded_Pedestrian_Detection_ICCV_2019_paper.pdf" target="_blank" rel="noopener">paper</a>]</li><li>[ICCV-2019] Mask-Guided Attention Network for Occluded Pedestrian Detection [<a href="http://openaccess.thecvf.com/content_ICCV_2019/papers/Pang_Mask-Guided_Attention_Network_for_Occluded_Pedestrian_Detection_ICCV_2019_paper.pdf" target="_blank" rel="noopener">paper</a>] [<a href="https://github.com/Leotju/MGAN" target="_blank" rel="noopener"><strong>code</strong></a>]</li><li>[TPAMI-2019] EuroCity Persons: A Novel Benchmark for Person Detection in Traffic Scenes [<a href="http://intelligent-vehicles.org/wp-content/uploads/2019/04/braun2019tpami_eurocity_persons.pdf" target="_blank" rel="noopener">paper</a>]</li><li>[CVPR-2019 oral] Adaptive NMS: Refining Pedestrian Detection in a Crowd [<a href="https://arxiv.org/abs/1904.02948" target="_blank" rel="noopener">paper</a>]</li><li>[CVPR-2019] High-level Semantic Feature Detection:A New Perspective for Pedestrian Detection [<a href="https://arxiv.org/abs/1904.02948" target="_blank" rel="noopener">paper</a>] [<a href="https://github.com/liuwei16/CSP" target="_blank" rel="noopener"><strong>code</strong></a>]</li><li>[CVPR-2019] SSA-CNN: Semantic Self-Attention CNN for Pedestrian Detection</li><li>[CVPR-2019] Pedestrian Detection in Thermal Images using Saliency Maps</li><li>[TIP-2018] Too Far to See? 
Not Really: Pedestrian Detection with Scale-Aware Localization Policy</li><li>[ECCV-2018] Bi-box Regression for Pedestrian Detection and Occlusion Estimation [<a href="https://github.com/rainofmine/Bi-box_Regression" target="_blank" rel="noopener"><strong>code</strong></a>]</li><li>[ECCV-2018] Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting [<a href="https://github.com/liuwei16/ALFNet" target="_blank" rel="noopener"><strong>code</strong></a>]</li><li>[ECCV-2018] Graininess-Aware Deep Feature Learning for Pedestrian Detection</li><li>[ECCV-2018] Occlusion-aware R-CNN: Detecting Pedestrians in a Crowd</li><li>[ECCV-2018] Small-scale Pedestrian Detection Based on Somatic Topology Localization and Temporal Feature Aggregation</li><li>[CVPR-2018] Improving Occlusion and Hard Negative Handling for Single-Stage Pedestrian Detectors</li><li>[CVPR-2018] Occluded Pedestrian Detection Through Guided Attention in CNNs</li><li>[CVPR-2018] Repulsion Loss: Detecting Pedestrians in a Crowd [<a href="https://github.com/rainofmine/Repulsion_Loss" target="_blank" rel="noopener"><strong>code</strong></a>]</li><li>[TCSVT-2018] Pushing the Limits of Deep CNNs for Pedestrian Detection</li><li>[Trans Multimedia-2018] Scale-aware Fast R-CNN for Pedestrian Detection</li><li>[TPAMI-2017] Jointly Learning Deep Features, Deformable Parts, Occlusion and Classification for Pedestrian Detection</li><li>[BMVC-2017] PCN: Part and Context Information for Pedestrian Detection with CNNs</li><li>[CVPR-2017] CityPersons: A Diverse Dataset for Pedestrian Detection</li><li>[CVPR-2017] Learning Cross-Modal Deep Representations for Robust Pedestrian Detection</li><li>[CVPR-2017] What Can Help Pedestrian Detection?</li><li>[ICCV-2017] Multi-label Learning of Part Detectors for Heavily Occluded Pedestrian Detection</li><li>[ICCV-2017] Illuminating Pedestrians via Simultaneous Detection & Segmentation [<a href="https://github.com/garrickbrazil/SDS-RCNN" target="_blank"
rel="noopener"><strong>code</strong></a>]</li><li>[TPAMI-2017] Towards Reaching Human Performance in Pedestrian Detection</li><li>[CVPR-2016] Semantic Channels for Fast Pedestrian Detection</li><li>[CVPR-2016] How Far are We from Solving Pedestrian Detection?</li><li>[CVPR-2016] Pedestrian Detection Inspired by Appearance Constancy and Shape Symmetry</li><li>[ECCV-2016] Is Faster R-CNN Doing Well for Pedestrian Detection? [<a href="https://github.com/zhangliliang/RPN_BF" target="_blank" rel="noopener"><strong>code</strong></a>]</li><li>[CVPR-2015] Taking a Deeper Look at Pedestrians</li><li>[ICCV-2015] Learning Complexity-Aware Cascades for Deep Pedestrian Detection</li><li>[ICCV-2015] Deep Learning Strong Parts for Pedestrian Detection</li><li>[ECCV-2014] Deep Learning of Scene-specific Classifier for Pedestrian Detection</li><li>[CVPR-2013] Joint Deep Learning for Pedestrian Detection</li><li>[CVPR-2012] A Discriminative Deep Model for Pedestrian Detection with Occlusion Handling</li><li>[CVPR-2010] Multi-Cue Pedestrian Classification With Partial Occlusion Handling</li><li>[CVPR-2009] Pedestrian detection: A benchmark</li><li>[CVPR-2008] People-Tracking-by-Detection and People-Detection-by-Tracking</li><li>[ECCV-2006] Human Detection Using Oriented Histograms of Flow and Appearance</li><li>[CVPR-2005] Histograms of Oriented Gradients for Human Detection</li></ul><h2 id="论文">Papers</h2><h3 id="cvpr-2019-oral-adaptive-nms-refining-pedestrian-detection-in-a-crowd">[CVPR-2019 oral] Adaptive NMS: Refining Pedestrian Detection in a Crowd</h3><p><img src="./CVPR19_CSP_Adaptive_NMS.png" alt="CVPR19_CSP_Adaptive_NMS"> - paper: https://arxiv.org/abs/1904.03629</p><h3 id="cvpr-2019-high-level-semantic-feature-detectiona-new-perspective-for-pedestrian-detection">[CVPR-2019] High-level Semantic Feature Detection: A New
Perspective for Pedestrian Detection</h3><p><img src="./CVPR19_CSP_PedestrianDetection.png" alt="Alt text"> - paper: https://arxiv.org/abs/1904.02948 - github: https://github.com/liuwei16/CSP</p><h3 id="cvpr-2019-ssa-cnn-semantic-self-attention-cnn-for-pedestrian-detection">[CVPR-2019] SSA-CNN: Semantic Self-Attention CNN for Pedestrian Detection</h3><p><img src="./CVPR19_SSA-CNN.png" alt="Alt text"> - paper: https://arxiv.org/abs/1902.09080v1</p><h3 id="cvpr-2019-pedestrian-detection-in-thermal-images-using-saliency-maps">[CVPR-2019] Pedestrian Detection in Thermal Images using Saliency Maps</h3><ul><li>paper: https://arxiv.org/abs/1904.06859</li></ul><h3 id="tip-2018-too-far-to-see-not-really-pedestrian-detection-with-scale-aware-localization-policy">[TIP-2018] Too Far to See? Not Really: Pedestrian Detection with Scale-Aware Localization Policy</h3><p><img src="./1533980426553.png" alt="Alt text| left | 300x0"> - paper: - project website: - slides: - github:</p><h3 id="transactions-on-multimedia-201-scale-aware-fast-r-cnn-for-pedestrian-detection">[Transactions on Multimedia-2018] Scale-Aware Fast R-CNN for Pedestrian Detection</h3><figure><img src="./1533980383783.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>paper: https://ieeexplore.ieee.org/abstract/document/8060595/</li><li>project website:</li><li>slides:</li><li>github:</li></ul><h3 id="eccv-2018-bi-box-regression-for-pedestrian-detection-and-occlusion-estimation">[ECCV-2018] Bi-box Regression for Pedestrian Detection and Occlusion Estimation</h3><p><img src="./ECCV2018-Bi-box_Regression_2.png" alt="Alt text| left | 300x0"> <img src="./ECCV2018-Bi-box_Regression.png" alt="Alt text| left | 300x0"></p><ul><li>arxiv:</li><li>paper:http://openaccess.thecvf.com/content_ECCV_2018/papers/CHUNLUAN_ZHOU_Bi-box_Regression_for_ECCV_2018_paper.pdf</li><li>slides:</li><li>github: https://github.com/rainofmine/Bi-box_Regression</li></ul><h3 
id="eccv-2018-learning-efficient-single-stage-pedestrian-detectors-by-asymptotic-localization-fitting">[ECCV-2018] Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting</h3><figure><img src="./ECCV2-18-Learning_Efficien_Single-stage_Pedestrian_Detectors.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv:</li><li>paper:http://openaccess.thecvf.com/content_ECCV_2018/papers/Wei_Liu_Learning_Efficient_Single-stage_ECCV_2018_paper.pdf</li><li>project website:</li><li>slides:</li><li>github: https://github.com/liuwei16/ALFNet</li></ul><h3 id="eccv-2018-graininess-aware-deep-feature-learning-for-pedestrian-detection">[ECCV-2018] Graininess-Aware Deep Feature Learning for Pedestrian Detection</h3><figure><img src="./ECCV2018-Graininess-Aware_Deep_Learning.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv:</li><li>paper:http://openaccess.thecvf.com/content_ECCV_2018/papers/Chunze_Lin_Graininess-Aware_Deep_Feature_ECCV_2018_paper.pdf</li><li>project website:</li><li>slides:</li><li>github:</li></ul><h3 id="eccv-2018-occlusion-aware-r-cnn-detecting-pedestrians-in-a-crowd">[ECCV-2018] Occlusion-aware R-CNN: Detecting Pedestrians in a Crowd</h3><figure><img src="./ECCV2018-Occlusion-aware_R-CNN.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv:http://arxiv.org/abs/1807.08407</li><li>project website:</li><li>slides:</li><li>github:</li></ul><h3 id="eccv-2018-small-scale-pedestrian-detection-based-on-somatic-topology-localization-and-temporal-feature-aggregation">[ECCV-2018] Small-scale Pedestrian Detection Based on Somatic Topology Localization and Temporal Feature Aggregation</h3><figure><img src="./1533979932529.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv:https://arxiv.org/abs/1807.01438</li><li>project 
website:</li><li>slides:</li><li>github:</li></ul><h3 id="cvpr-2018-improving-occlusion-and-hard-negative-handling-for-single-stage-pedestrian-detectors">[CVPR-2018] Improving Occlusion and Hard Negative Handling for Single-Stage Pedestrian Detectors</h3><figure><img src="./1533980803719.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv:</li><li>paper: http://vision.snu.ac.kr/projects/partgridnet/data/noh_2018.pdf</li><li>project website: http://vision.snu.ac.kr/projects/partgridnet/</li><li>slides:</li><li>github:</li></ul><h3 id="cvpr-2018-occluded-pedestrian-detection-through-guided-attention-in-cnns">[CVPR-2018] Occluded Pedestrian Detection Through Guided Attention in CNNs</h3><figure><img src="./1533980145178.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv:</li><li>paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Occluded_Pedestrian_Detection_CVPR_2018_paper.pdf</li><li>project website:</li><li>slides:</li><li>github:</li></ul><h3 id="cvpr-2018-repulsion-loss-detecting-pedestrians-in-a-crowd">[CVPR-2018] Repulsion Loss: Detecting Pedestrians in a Crowd</h3><figure><img src="./1528195001788.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv:http://arxiv.org/abs/1711.07752</li><li>project website:</li><li>slides:</li><li>github:</li><li>blog: https://zhuanlan.zhihu.com/p/41288115</li></ul><h3 id="tpami-2017-jointly-learning-deep-features-deformable-parts-occlusion-and-classification-for-pedestrian-detection">[TPAMI-2017] Jointly Learning Deep Features, Deformable Parts, Occlusion and Classification for Pedestrian Detection</h3><figure><img src="./1537261066815.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>paper: https://ieeexplore.ieee.org/abstract/document/8008790/</li><li>project website:</li><li>slides:</li><li>github 
caffe:</li></ul><h3 id="bmvc-2017-pcn-part-and-context-information-for-pedestrian-detection-with-cnns">[BMVC-2017] PCN: Part and Context Information for Pedestrian Detection with CNNs</h3><figure><img src="./1533980559400.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv: https://arxiv.org/abs/1804.04483</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><h3 id="cvpr-2017-citypersons-a-diverse-dataset-for-pedestrian-detection">[CVPR-2017] CityPersons: A Diverse Dataset for Pedestrian Detection</h3><figure><img src="./1528194369562.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv: http://arxiv.org/abs/1702.05693</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><hr><h3 id="cvpr-2017-learning-cross-modal-deep-representations-for-robust-pedestrian-detection">[CVPR-2017] Learning Cross-Modal Deep Representations for Robust Pedestrian Detection</h3><figure><img src="./1528194560698.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv: https://arxiv.org/abs/1704.02431</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><p><img src="./1528194591022.png" alt="Alt text"> <img src="./1528194606094.png" alt="Alt text"></p><h3 id="cvpr-2017-what-can-help-pedestrian-detection">[CVPR-2017] What Can Help Pedestrian Detection?</h3><ul><li>arxiv: https://arxiv.org/abs/1704.02431</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><h3 id="tpami-2017-towards-reaching-human-performance-in-pedestrian-detection">[TPAMI-2017] Towards Reaching Human Performance in Pedestrian Detection</h3><ul><li>paper: http://ieeexplore.ieee.org/document/7917260/</li><li>arxiv:</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><h3 id="iccv-2017-multi-label-learning-of-part-detectors-for-heavily-occluded-pedestrian-detection">[ICCV-2017]
Multi-label Learning of Part Detectors for Heavily Occluded Pedestrian Detection</h3><ul><li>paper: http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhou_Multi-Label_Learning_of_ICCV_2017_paper.pdf</li><li>arxiv:</li><li>project website:</li><li>slides:</li></ul><h3 id="iccv-2017illuminating-pedestrians-via-simultaneous-detection-segmentation">[ICCV-2017] Illuminating Pedestrians via Simultaneous Detection & Segmentation</h3><figure><img src="http://cvlab.cse.msu.edu/images/teasers/pedestrian-intro.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>arxiv: https://arxiv.org/abs/1706.08564</li><li>project website: http://cvlab.cse.msu.edu/project-pedestrian-detection.html</li><li>slides:</li><li>github caffe: https://github.com/garrickbrazil/SDS-RCNN</li></ul><h3 id="cvpr-2016-semantic-channels-for-fast-pedestrian-detection">[CVPR-2016] Semantic Channels for Fast Pedestrian Detection</h3><figure><img src="./1528195250768.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>paper: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Costea_Semantic_Channels_for_CVPR_2016_paper.pdf</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><h3 id="cvpr-2016-how-far-arewe-from-solving-pedestrian-detection">[CVPR-2016] How Far are We from Solving Pedestrian Detection?</h3><ul><li>paper: https://www.cv-foundation.org/openaccess/content_cvpr_2016/app/S06-29.pdf</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><h3 id="iccv-2015-deep-learning-strong-parts-for-pedestrian-detection">[ICCV-2015] Deep Learning Strong Parts for Pedestrian Detection</h3><figure><img src="./1537260670049.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>paper: https://www.cv-foundation.org/openaccess/content_iccv_2015/html/Tian_Deep_Learning_Strong_ICCV_2015_paper.html</li><li>project
website:</li><li>slides:</li><li>github caffe:</li></ul><h3 id="cvpr-2013-joint-deep-learning-for-pedestrian-detection-wanli">[CVPR-2013] Joint Deep Learning for Pedestrian Detection</h3><figure><img src="./1537260505221.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>paper: https://www.cv-foundation.org/openaccess/content_iccv_2013/html/Ouyang_Joint_Deep_Learning_2013_ICCV_paper.html</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><h3 id="cvpr-2012-a-discriminative-deep-model-for-pedestrian-detection-with-occlusion-handling">[CVPR-2012] A Discriminative Deep Model for Pedestrian Detection with Occlusion Handling</h3><figure><img src="./1537260310332.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>paper: http://mmlab.ie.cuhk.edu.hk/pdf/ouyangWcvpr2012.pdf</li><li>paper: https://ieeexplore.ieee.org/abstract/document/6248062/</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><h3 id="cvpr-2010-multi-cue-pedestrian-classification-with-partial-occlusion-handling">[CVPR-2010] Multi-Cue Pedestrian Classification With Partial Occlusion Handling</h3><figure><img src="./1537260117170.png" alt="Alt text| left | 300x0"><figcaption>Alt text| left | 300x0</figcaption></figure><ul><li>paper: https://ieeexplore.ieee.org/abstract/document/5540111/</li><li>project website:</li><li>slides:</li><li>github caffe:</li></ul><h2 id="行人检测数据集">Pedestrian Detection Datasets</h2><h3 id="citypersons">CityPersons</h3><figure><img src="./1534569661113.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><p>The CityPersons dataset is built on top of Cityscapes: it reuses the Cityscapes images and adds precise annotations for several person categories. It was introduced in [CVPR-2017] CityPersons: A Diverse Dataset for Pedestrian Detection; see that paper for more details.</p><p>In the figure above, the pedestrian annotations are on the left and the original Cityscapes data on the right.</p><ul><li><a href="https://bitbucket.org/shanshanzhang/citypersons" target="_blank" rel="noopener"><strong>Annotations and evaluation files</strong></a></li><li><a
href="https://www.cityscapes-dataset.com/" target="_blank" rel="noopener"><strong>Dataset download</strong></a></li></ul><p>File layout</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"># evaluation scripts</span><br><span class="line">$/Cityscapes/shanshanzhang-citypersons/evaluation/eval_script/coco.py</span><br><span class="line">$/Cityscapes/shanshanzhang-citypersons/evaluation/eval_script/eval_demo.py</span><br><span class="line">$/Cityscapes/shanshanzhang-citypersons/evaluation/eval_script/eval_MR_multisetup.py</span><br><span class="line"></span><br><span class="line"># annotation files</span><br><span class="line">$/Cityscapes/shanshanzhang-citypersons/annotations</span><br><span class="line">$/Cityscapes/shanshanzhang-citypersons/annotations/anno_train.mat</span><br><span class="line">$/Cityscapes/shanshanzhang-citypersons/annotations/anno_val.mat</span><br><span class="line">$/Cityscapes/shanshanzhang-citypersons/annotations/README.txt</span><br><span class="line"># image data</span><br><span class="line"></span><br><span class="line">$/Cityscapes/leftImg8bit/train/*</span><br><span class="line">$/Cityscapes/leftImg8bit/val/*</span><br><span class="line">$/Cityscapes/leftImg8bit/test/*</span><br></pre></td></tr></table></figure><p>Annotation file format <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span
class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br></pre></td><td class="code"><pre><span class="line">CityPersons annotations</span><br><span class="line">(1) data structure:</span><br><span class="line"> one image per cell</span><br><span class="line"> in each cell, there are three fields: city_name; im_name; bbs (bounding box annotations)</span><br><span class="line"></span><br><span class="line">(2) bounding box annotation format:</span><br><span class="line"> one object instance per row:</span><br><span class="line"> [class_label, x1,y1,w,h, instance_id, x1_vis, y1_vis, w_vis, h_vis]</span><br><span class="line"></span><br><span class="line">(3) class label definition:</span><br><span class="line"> class_label =0: ignore regions (fake humans, e.g. 
people on posters, reflections etc.)</span><br><span class="line"> class_label =1: pedestrians</span><br><span class="line"> class_label =2: riders</span><br><span class="line"> class_label =3: sitting persons</span><br><span class="line"> class_label =4: other persons with unusual postures</span><br><span class="line"> class_label =5: group of people</span><br><span class="line"></span><br><span class="line">(4) boxes:</span><br><span class="line"> visible boxes [x1_vis, y1_vis, w_vis, h_vis] are automatically generated from segmentation masks;</span><br><span class="line"> (x1,y1) is the upper left corner.</span><br><span class="line"> if class_label==1 or 2</span><br><span class="line"> [x1,y1,w,h] is a well-aligned bounding box to the full body ;</span><br><span class="line"> else</span><br><span class="line"> [x1,y1,w,h] = [x1_vis, y1_vis, w_vis, h_vis];</span><br></pre></td></tr></table></figure></p><h3 id="caltech">Caltech</h3><figure><img src="1517407508293.png" alt="caltech"><figcaption>caltech</figcaption></figure><ul><li><a href="http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/" target="_blank" rel="noopener"><strong>Caltech website</strong></a> For more details, see <a href="http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/files/PAMI12pedestrians.pdf" target="_blank" rel="noopener">[TPAMI-2012] Pedestrian Detection: An Evaluation of the State of the Art</a></li></ul><figure><img src="./1534570096814.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><h3 id="kitti">KITTI</h3><figure><img src="./1534569869602.png" alt="Alt text"><figcaption>Alt text</figcaption></figure><ul><li><a href="http://www.cvlibs.net/datasets/kitti/" target="_blank" rel="noopener"><strong>KITTI website</strong></a></li></ul><h3 id="eurocity">EuroCity</h3><p><a href="https://eurocity-dataset.tudelft.nl/eval/overview/statistics" target="_blank" rel="noopener">EuroCity website</a></p><p><a
href="http://intelligent-vehicles.org/wp-content/uploads/2019/04/braun2019tpami_eurocity_persons.pdf" target="_blank" rel="noopener">EuroCity Paper</a></p><ul><li>[TPAMI-2019] EuroCity Persons: A Novel Benchmark for Person Detection in Traffic Scenes</li></ul><p>With over 238200 person instances manually labeled in over 47300 images, EuroCity Persons is nearly one order of magnitude larger than person datasets used previously for benchmarking. Diversity is gained by recording this dataset throughout Europe.</p><figure><img src="./eurocity-01.png" alt="EuroCity-01"><figcaption>EuroCity-01</figcaption></figure><figure><img src="./eurocity-02.png" alt="EuroCity-02"><figcaption>EuroCity-02</figcaption></figure><table><thead><tr class="header"><th style="text-align: center;">Object Class</th><th style="text-align: center;"># objects (day)</th><th style="text-align: center;"># objects (night)</th><th style="text-align: center;"># objects (sum)</th></tr></thead><tbody><tr class="odd"><td style="text-align: center;">Pedestrian</td><td style="text-align: center;">183004</td><td style="text-align: center;">35309</td><td style="text-align: center;">218313</td></tr><tr class="even"><td style="text-align: center;">Rider</td><td style="text-align: center;">18216</td><td style="text-align: center;">1564</td><td style="text-align: center;">19780</td></tr></tbody></table><h3 id="crowdhuman">CrowdHuman</h3><p><a href="http://www.crowdhuman.org/" target="_blank" rel="noopener">CrowdHuman homepage</a></p><p><a href="https://arxiv.org/abs/1805.00123" target="_blank" rel="noopener">CrowdHuman Paper</a></p><figure><img src="./crowdhuman-20190918-01.png" alt="CrowdHuman-20190918-01"><figcaption>CrowdHuman-20190918-01</figcaption></figure><figure><img src="./crowdhuman-20190918-02.png" alt="CrowdHuman-20190918-02"><figcaption>CrowdHuman-20190918-02</figcaption></figure><figure><img src="./crowdhuman-20190918-03.png"
alt="CrowdHuman-20190918-03"><figcaption>CrowdHuman-20190918-03</figcaption></figure><h2 id="性能比较">Performance Comparison</h2><p>The numbers below are from the official <a href="https://bitbucket.org/shanshanzhang/citypersons/src/default/" target="_blank" rel="noopener">CityPersons</a> site.</p><table><colgroup><col style="width: 20%"><col style="width: 17%"><col style="width: 23%"><col style="width: 27%"><col style="width: 10%"></colgroup><thead><tr class="header"><th style="text-align: center;">Method</th><th style="text-align: center;">MR (Reasonable)</th><th style="text-align: center;">MR (Reasonable_small)</th><th style="text-align: center;">MR (Reasonable_occ=heavy)</th><th style="text-align: center;">MR (All)</th></tr></thead><tbody><tr class="odd"><td style="text-align: center;">YT-PedDet</td><td style="text-align: center;">8.41%</td><td style="text-align: center;">10.60%</td><td style="text-align: center;">37.88%</td><td style="text-align: center;">37.22%</td></tr><tr class="even"><td style="text-align: center;">STNet</td><td style="text-align: center;">9.78%</td><td style="text-align: center;">10.95%</td><td style="text-align: center;">36.16%</td><td style="text-align: center;">31.36%</td></tr><tr class="odd"><td style="text-align: center;">DVRNet</td><td style="text-align: center;">10.99%</td><td style="text-align: center;">15.68%</td><td style="text-align: center;">43.77%</td><td style="text-align: center;">41.48%</td></tr><tr class="even"><td style="text-align: center;">HBA-RCNN</td><td style="text-align: center;">11.06%</td><td style="text-align: center;">14.77%</td><td style="text-align: center;">43.61%</td><td style="text-align: center;">39.54%</td></tr><tr class="odd"><td style="text-align: center;">OR-CNN</td><td style="text-align: center;">11.32%</td><td style="text-align: center;">14.19%</td><td style="text-align: center;">51.43%</td><td style="text-align: center;">40.19%</td></tr><tr class="even"><td style="text-align: center;">Repulsion Loss</td><td style="text-align: center;">11.48%</td><td
style="text-align: center;">15.67%</td><td style="text-align: center;">52.59%</td><td style="text-align: center;">39.17%</td></tr><tr class="odd"><td style="text-align: center;">Adapted FasterRCNN</td><td style="text-align: center;">12.97%</td><td style="text-align: center;">37.24%</td><td style="text-align: center;">50.47%</td><td style="text-align: center;">43.86%</td></tr><tr class="even"><td style="text-align: center;">MS-CNN</td><td style="text-align: center;">13.32%</td><td style="text-align: center;">15.86%</td><td style="text-align: center;">51.88%</td><td style="text-align: center;">39.94%</td></tr></tbody></table>]]></content>
<summary type="html">
A curated collection of pedestrian detection papers, with links to papers and code.
</summary>
<category term="Pedestrian Detection" scheme="https://www.starlg.cn/categories/Pedestrian-Detection/"/>
<category term="Pedestrian Detection" scheme="https://www.starlg.cn/tags/Pedestrian-Detection/"/>
</entry>
<entry>
<title>Keras Tutorial</title>
<link href="https://www.starlg.cn/2018/08/14/Keras-Tutorial/"/>
<id>https://www.starlg.cn/2018/08/14/Keras-Tutorial/</id>
<published>2018-08-14T14:40:03.000Z</published>
<updated>2022-05-30T14:16:36.000Z</updated>
<content type="html"><![CDATA[<p>GitHub repository: <a href="https://github.com/xingkongliang/Keras-Tutorials" target="_blank" rel="noopener">here</a></p><h1 id="keras-tutorials">Keras-Tutorials</h1><blockquote><p>Version: 0.0.1</p></blockquote><blockquote><p>Author: 张天亮</p></blockquote><blockquote><p>Email: zhangtianliang13@mails.ucas.ac.cn</p></blockquote><p>GitHub renders .ipynb files slowly; it is recommended to browse this project on <a href="http://nbviewer.ipython.org/github/xingkongliang/Keras-Tutorials" target="_blank" rel="noopener">Nbviewer</a>.</p><h2 id="简介">Introduction</h2><p>Most of the content is adapted from the <a href="https://github.com/fchollet/keras/tree/master/examples" target="_blank" rel="noopener">examples</a> in the Keras project</p><h2 id="目录">Contents</h2><ul><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/01.mnist_mpl.ipynb" target="_blank" rel="noopener">01. Multilayer perceptron</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/02.save_model.ipynb" target="_blank" rel="noopener">02. Saving a model</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/03.load_model.ipynb" target="_blank" rel="noopener">03. Loading a model</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/04.plot_acc_loss.ipynb" target="_blank" rel="noopener">04. Plotting accuracy and loss curves</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/05.mnist_cnn.ipynb" target="_blank" rel="noopener">05. Convolutional neural network</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/06.cifar10_cnn.ipynb" target="_blank" rel="noopener">06.CIFAR10_cnn</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/07.mnist_lstm.ipynb" target="_blank" rel="noopener">07.mnist_lstm</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/08.vgg-16.ipynb" target="_blank" rel="noopener">08. Using VGG16</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/09.conv_filter_visualization.ipynb" target="_blank" rel="noopener">09. Convolutional filter visualization</a></li><li><a
href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/10.variational_autoencoder.ipynb" target="_blank" rel="noopener">10.variational_autoencoder</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/11.mnist_transfer_cnn.ipynb" target="_blank" rel="noopener">11. Fine-tuning a network with frozen layers</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/12.mnist_sklearn_wrapper.ipynb" target="_blank" rel="noopener">12. Hyperparameter search with the sklearn wrapper</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/13.Keras_with_tensorflow.ipynb" target="_blank" rel="noopener">13. Using Keras together with TensorFlow</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/14.finetune_InceptionV3.ipynb" target="_blank" rel="noopener">14. Fine-tuning InceptionV3 example</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/15.autoencoder.ipynb" target="_blank" rel="noopener">15. Autoencoder</a></li><li><a href="https://github.com/xingkongliang/Keras-Tutorials/blob/master/16.Convolutional_autoencoder.ipynb" target="_blank" rel="noopener">16. Convolutional autoencoder</a></li></ul><p>For more on how to use Keras, see the documentation - <a href="http://keras-cn.readthedocs.io/en/latest/" target="_blank" rel="noopener">Chinese docs</a> - <a href="https://keras.io/" target="_blank" rel="noopener">English docs</a> - <a href="https://github.com/fchollet/keras" target="_blank" rel="noopener">github</a></p>]]></content>
<summary type="html">
A basic Keras tutorial as Jupyter notebooks.
</summary>
<category term="Deep Learning" scheme="https://www.starlg.cn/categories/Deep-Learning/"/>
<category term="Keras" scheme="https://www.starlg.cn/tags/Keras/"/>
</entry>
</feed>