
Code Practice | CVPR2020 AdderNet (Adder Network) Ported to a Detection Network (Code Share)

Computer Vision | 计算机视觉研究院 | 2023-01-25

Follow the public account "计算机视觉战队" and reply "加法网络" to get the paper and source-code links.

A while back, "计算机视觉研究院" published a post on a CVPR2020 classification paper, AdderNet (link: CVPR2020最佳目标检测 | AdderNet(加法网络)含论文及源码链接). A reader asked: if this new classification framework is grafted onto a detection network, will it bring an improvement? Today we answer that question with an experiment.

Recap


In case some readers have forgotten the AdderNet framework and its key idea, let's briefly review the details.

Researchers and developers are used to treating convolution as the default operation for extracting features from visual data, and various methods have been introduced to accelerate convolution, even at the risk of sacrificing network capacity. Yet almost no one has tried to replace convolution with a different, cheaper similarity measure. In fact, addition is far less computationally expensive than multiplication, which motivated the authors to study the feasibility of replacing the multiplications in a convolutional neural network with additions.
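Concretely, the paper replaces the multiply-accumulate similarity of an ordinary convolution with a negative \ell_1 distance between the filter and the input patch, so the output is computed from subtractions and absolute values only:

Y(m,n,t) = \sum_{i}\sum_{j}\sum_{k} X(m+i, n+j, k) \times F(i, j, k, t)              (standard convolution / cross-correlation)
Y(m,n,t) = -\sum_{i}\sum_{j}\sum_{k} \left| X(m+i, n+j, k) - F(i, j, k, t) \right|   (AdderNet)

Because these outputs are always negative, the adder layers are followed by batch normalization, which keeps the activation scale compatible with the rest of the network.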

First, recall the visualization comparing the features learned with standard convolution and with the adder operation (shown as a figure in the original post).

The concrete code flow was shown as screenshots in the original post.
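As a substitute, here is a minimal, unfold-based sketch of the adder layer. The class name AdderConv2d is mine rather than the official one, and the official adder.py additionally implements a custom backward pass (full-precision filter gradients and HardTanh-clipped input gradients) that this sketch leaves to plain autograd:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdderConv2d(nn.Module):
    """Convolution-shaped layer whose response is a negative L1 distance
    between filter and patch instead of a multiply-accumulate."""
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, bias=False):
        super().__init__()
        self.kernel_size, self.stride, self.padding = kernel_size, stride, padding
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.05
        )
        self.bias = nn.Parameter(torch.zeros(out_channels)) if bias else None

    def forward(self, x):
        n, _, h, w = x.shape
        # im2col: (N, C*k*k, L), where L is the number of sliding positions
        cols = F.unfold(x, self.kernel_size, padding=self.padding, stride=self.stride)
        w_flat = self.weight.view(self.weight.size(0), -1)            # (out, C*k*k)
        # output = -sum over the patch of |x_patch - filter|
        out = -(cols.unsqueeze(1) - w_flat.unsqueeze(0).unsqueeze(-1)).abs().sum(dim=2)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        out = out.view(n, -1, h_out, w_out)
        if self.bias is not None:
            out = out + self.bias.view(1, -1, 1, 1)
        return out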

Next, let's modify that code and graft it into the yolov3 framework to see what happens.

Most readers will already have PyTorch-YOLOv3 installed and able to train and test, so let's get started. I'll begin with part of the pytorch-yolov3 code as an example.
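The screenshot itself is not reproduced here, but the pattern being discussed looks roughly like this (a sketch assuming the common eriklindernoren/PyTorch-YOLOv3 layout, in which a single Darknet class is built from a .cfg file right inside the training/detection script):

import torch
from models import Darknet  # pytorch-yolov3's model definition

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Backbone, neck and YOLO heads are all baked into one Darknet object built
# from the .cfg file, so swapping the backbone means editing the cfg or the
# parsing code itself.
model = Darknet("config/yolov3.cfg", img_size=416).to(device)
model.load_darknet_weights("weights/yolov3.weights")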

That snippet hard-codes the Darknet backbone into the detection part. Such a design is not ideal because it is tailored to a single network. A better approach is to write your own Backbone function and a DetectHead function; there is a good reason for this structure, as you will see below.
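A minimal sketch of that idea (the names DetectHead and YoloDetector are illustrative, not taken from the original screenshots): the detector only assumes the backbone returns feature maps with known channel counts, so any network can be dropped in.

import torch.nn as nn

class DetectHead(nn.Module):
    """Turns one backbone feature map into raw YOLO predictions."""
    def __init__(self, in_channels, num_anchors=3, num_classes=80):
        super().__init__()
        self.pred = nn.Conv2d(in_channels, num_anchors * (5 + num_classes), kernel_size=1)

    def forward(self, feat):
        return self.pred(feat)

class YoloDetector(nn.Module):
    """Backbone and heads are independent modules, so either can be swapped."""
    def __init__(self, backbone, head_channels, num_classes=80):
        super().__init__()
        self.backbone = backbone      # any module returning a list of feature maps
        self.heads = nn.ModuleList(
            [DetectHead(c, num_classes=num_classes) for c in head_channels]
        )

    def forward(self, x):
        feats = self.backbone(x)      # e.g. three scales for YOLOv3
        return [head(f) for head, f in zip(self.heads, feats)]

For example, a ResNet50 backbone that returns its layer2/layer3/layer4 features would use head_channels=(512, 1024, 2048).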

With that, the backbone becomes a standalone module: you can plug in whichever backbone you like and freely attach the matching detection heads. Now back to the AdderNet code (the adder layer sketched above).

Following the paper (link: 加法网络(AdderNet)链接), we modify the corresponding convolutions. This time the target is ResNet50, so the next step is to replace the Conv layers inside ResNet50. Let's continue.
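Without rewriting the whole ResNet definition, one way to sketch the replacement is to walk torchvision's resnet50 and swap its nn.Conv2d modules for the AdderConv2d layer sketched earlier (in the reference AdderNet code some layers, such as the stem, remain ordinary convolutions; exactly which layers to keep is a design choice, and this sketch only shows the mechanics):

import torch.nn as nn
from torchvision.models import resnet50

def convert_to_adder(module):
    """Recursively replace every nn.Conv2d child with an AdderConv2d
    of the same shape (AdderConv2d is the sketch defined earlier)."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(module, name, AdderConv2d(
                child.in_channels, child.out_channels,
                kernel_size=child.kernel_size[0],
                stride=child.stride[0], padding=child.padding[0],
                bias=child.bias is not None,
            ))
        else:
            convert_to_adder(child)
    return module

model = resnet50(weights=None)
# Keep the stem convolution as an ordinary conv and convert everything else.
for name, child in model.named_children():
    if name != "conv1":
        convert_to_adder(child)
# For detection, additionally drop avgpool/fc and expose the intermediate
# feature maps (e.g. the layer2/layer3/layer4 outputs) from a Backbone
# wrapper so the DetectHead modules above can consume them.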

The rest is straightforward: swap in the modified Backbone, then train and test on the same dataset and compare with the original. So that the difference can be examined closely, the full per-class output is given below:

+ Class '0' (person) - AP: 0.69071601970752
+ Class '1' (bicycle) - AP: 0.4686961863448047
+ Class '2' (car) - AP: 0.584785409652401
+ Class '3' (motorbike) - AP: 0.6173425471546101
+ Class '4' (aeroplane) - AP: 0.7368216071089109
+ Class '5' (bus) - AP: 0.7522709365644746
+ Class '6' (train) - AP: 0.754366135549987
+ Class '7' (truck) - AP: 0.4188454158138422
+ Class '8' (boat) - AP: 0.4055367699507446
+ Class '9' (traffic light) - AP: 0.44435250125992093
+ Class '10' (fire hydrant) - AP: 0.7803236133317674
+ Class '11' (stop sign) - AP: 0.7203250980406222
+ Class '12' (parking meter) - AP: 0.5318708513711929
+ Class '13' (bench) - AP: 0.33347708090637457
+ Class '14' (bird) - AP: 0.4441360921558241
+ Class '15' (cat) - AP: 0.7303504067363646
+ Class '16' (dog) - AP: 0.7319887348116905
+ Class '17' (horse) - AP: 0.77512155236337
+ Class '18' (sheep) - AP: 0.5984679238272702
+ Class '19' (cow) - AP: 0.5233874581223704
+ Class '20' (elephant) - AP: 0.8563788399614207
+ Class '21' (bear) - AP: 0.7462024921293304
+ Class '22' (zebra) - AP: 0.7870769691158629
+ Class '23' (giraffe) - AP: 0.8227873134751092
+ Class '24' (backpack) - AP: 0.32451636624665287
+ Class '25' (umbrella) - AP: 0.5271238663832635
+ Class '26' (handbag) - AP: 0.20446396737325406
+ Class '27' (tie) - AP: 0.49596217809096577
+ Class '28' (suitcase) - AP: 0.569835653931444
+ Class '29' (frisbee) - AP: 0.6356266022474135
+ Class '30' (skis) - AP: 0.40624013441992135
+ Class '31' (snowboard) - AP: 0.4548600158139028
+ Class '32' (sports ball) - AP: 0.5431383703116072
+ Class '33' (kite) - AP: 0.4099711653381243
+ Class '34' (baseball bat) - AP: 0.5038339063455582
+ Class '35' (baseball glove) - AP: 0.47781969136825725
+ Class '36' (skateboard) - AP: 0.6849120730914782
+ Class '37' (surfboard) - AP: 0.6221252845246673
+ Class '38' (tennis racket) - AP: 0.68764570668767
+ Class '39' (bottle) - AP: 0.4228582945038891
+ Class '40' (wine glass) - AP: 0.5107649160534952
+ Class '41' (cup) - AP: 0.4708999794256628
+ Class '42' (fork) - AP: 0.44107168135464947
+ Class '43' (knife) - AP: 0.288951366082318
+ Class '44' (spoon) - AP: 0.21264460558898557
+ Class '45' (bowl) - AP: 0.4882936721018784
+ Class '46' (banana) - AP: 0.27481021398716976
+ Class '47' (apple) - AP: 0.17694573390321539
+ Class '48' (sandwich) - AP: 0.4595098054471395
+ Class '49' (orange) - AP: 0.2861568847973789
+ Class '50' (broccoli) - AP: 0.34978362407336433
+ Class '51' (carrot) - AP: 0.22371776472064184
+ Class '52' (hot dog) - AP: 0.3702692586995472
+ Class '53' (pizza) - AP: 0.5297757751733385
+ Class '54' (donut) - AP: 0.5068384767127795
+ Class '55' (cake) - AP: 0.476632708387989
+ Class '56' (chair) - AP: 0.3980449296511249
+ Class '57' (sofa) - AP: 0.5214086539073353
+ Class '58' (pottedplant) - AP: 0.4239751120301045
+ Class '59' (bed) - AP: 0.6338351737747959
+ Class '60' (diningtable) - AP: 0.4138012499478281
+ Class '61' (toilet) - AP: 0.7377284037968452
+ Class '62' (tvmonitor) - AP: 0.6991588571748895
+ Class '63' (laptop) - AP: 0.68712851664284
+ Class '64' (mouse) - AP: 0.7214480416511962
+ Class '65' (remote) - AP: 0.4789729416954784
+ Class '66' (keyboard) - AP: 0.6644829934265277
+ Class '67' (cell phone) - AP: 0.39743578548434444
+ Class '68' (microwave) - AP: 0.6423763095621656
+ Class '69' (oven) - AP: 0.48313299304876195
+ Class '70' (toaster) - AP: 0.16233766233766234
+ Class '71' (sink) - AP: 0.5075074098080213
+ Class '72' (refrigerator) - AP: 0.6862896780296917
+ Class '73' (book) - AP: 0.17111744621852634
+ Class '74' (clock) - AP: 0.6886459682881512
+ Class '75' (vase) - AP: 0.44157962279267704
+ Class '76' (scissors) - AP: 0.3437987832196098
+ Class '77' (teddy bear) - AP: 0.5859590979304399
+ Class '78' (hair drier) - AP: 0.11363636363636365
+ Class '79' (toothbrush) - AP: 0.2643722437438991
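For reference, the overall mAP of such a run can be recovered from a per-class dump like the one above with a few lines (the file name ap_log.txt is only a placeholder):

import re

# Parses lines of the form: + Class '0' (person) - AP: 0.69071601970752
pattern = re.compile(r"AP:\s*([0-9.]+)")
with open("ap_log.txt") as f:             # placeholder path holding the dump above
    aps = [float(m.group(1)) for m in pattern.finditer(f.read())]

print(f"classes: {len(aps)}  mAP: {sum(aps) / len(aps):.4f}")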

The detection visualizations (shown as images in the original post) still reveal a fairly large gap between the two.







If there is enough interest, "计算机视觉研究院" can publish a separate post on building this framework step by step, or host a live session for walkthroughs and Q&A. If you are interested, send us a private message and we will reply promptly. Thank you for following "计算机视觉研究院"!





If you would like to join "计算机视觉研究院", scan the QR code and we will add you to the study group that matches your interests. The group focuses on deep learning, mainly face detection, face recognition, multi-object detection, object tracking, and image segmentation. We will keep sharing the latest papers, algorithms, and frameworks, with a renewed emphasis on hands-on research: we will share the practical side of each topic so that you can move beyond pure theory and build the habit of coding and thinking for yourself.

