
Which kind of expression is the "iterative expression" in higher mathematics?
Proving a limit from the definition always follows the same template; just fill in the blanks. For any given $\varepsilon > 0$, to make $\left|\frac{1+x^3}{2x^3} - \frac{1}{2}\right| = \frac{1}{2}\left|\frac{1}{x^3}\right| < \varepsilon$, it suffices that $|x| > \frac{1}{\sqrt[3]{2\varepsilon}}$. Take $X = \frac{1}{\sqrt[3]{2\varepsilon}} > 0$; then whenever $|x| > X$, we have $\left|\frac{1+x^3}{2x^3} - \frac{1}{2}\right| = \frac{1}{2}\left|\frac{1}{x^3}\right| < \frac{1}{2}\cdot\frac{1}{X^3} = \varepsilon$. By the definition of the limit, $\lim_{x\to\infty}\frac{1+x^3}{2x^3} = \frac{1}{2}$, which was to be shown.
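As a quick numerical sanity check (a minimal added sketch, not part of the original answer), one can watch the expression approach $1/2$ as $|x|$ grows:

```python
# Numerically check that (1 + x^3) / (2 x^3) -> 1/2 as x -> infinity.
for x in [10.0, 100.0, 1000.0]:
    value = (1 + x**3) / (2 * x**3)
    print(x, value, abs(value - 0.5))  # the error shrinks like 1 / (2 x^3)
```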
---

Until one day I saw a picture online and finally understood the damselfly's painstaking intentions:

[Figure: https://pic4.zhimg.com/50/be21b061f99694_r.jpg]
---

Let me mention a few websites; if they don't fit the question, please collapse this answer.

0. 【How It's Made (制造的原理)】
How It's Made official site: http://www.sciencechannel.com/tv-shows/how-its-made · Baidu Baike entry: How it's made_百度百科 · Wikipedia: http://en.wikipedia.org/wiki/How_It%27s_Made
I can currently download up to season 21 (S21). Many of the manufacturers shown in the episodes are top of their industry: the folding bicycle in S15E08 (season 15, episode 8) is the British **BROMPTON** mentioned above; the nail clippers in S20E03 are made in **Solingen**, the German city of blades; the trolley suitcase in S20E07 is from the Italian brand Roncato (龙卡多). In short, a lot: 21 × 13 × 4 ≈ 1K, roughly that many.
Index file: 制造的原理 How it's made · http://pan.baidu.com/s/1c0FxCq8

1. When searching for information, the 【keywords】 matter a lot; the CNKI translation assistant is a good tool for choosing suitable **English keywords**: http://dict.cnki.net/

2. To search for files of a 【specific type】 such as pdf, xls or wav, besides Google's filetype:pdf XXX syntax, give this site a try: find that files · http://www.findthataudio.com/about.php

3. To download the 【companion disc】 of a book, try this; it is more efficient: 山东高校网络图书馆非数资料管理系统 · http://58.194.172.26/

4. 纪录片之家, dedicated to sharing documentaries: http://www.jlpzj.net/

5. 小众软件: http://www.appinn.com/ ; more professional: 善用佳软 · http://xbeta.info/ ; green and portable: 精品绿色便携软件 · http://www.portablesoft.org/about/

6. Solidot, geek news, the important stuff: http://www.solidot.org/

7. The scanned edition of 参考消息 can be read online; be sure to install an 【ad-blocking】 extension when browsing this page: http://joowii.com/

8. A must-bookmark for mechanical engineering students, 【3D model】 sharing, to understand how machines work and broaden your horizons: GrabCAD · http://grabcad.com/home · all kinds of 3D models, with downloadable 3D CAD files in formats such as NX and SolidWorks. A search engine for 3D-【printing】 models: Printable 3D Models Search Engine · http://www.yeggi.com/

9. A tool for getting over the wall: a plugin for Chrome-based browsers, very convenient to use: 时空隧道 plugin installation tutorial · http://sksd.ga:8080/invi/inlvOf , invite code inlvOf
---

Consider a Lorentz transformation $x'^\mu = \Lambda^\mu(x)$ that leaves the proper-time interval $c^2 d\tau^2 = c^2 dt^2 - d\mathbf{x}^2$ invariant, i.e.

$$\eta_{\mu\nu}\, dx^\mu dx^\nu = \eta_{\rho\sigma}\, dx'^\rho dx'^\sigma .$$

Here $\eta_{\mu\nu} = \mathrm{diag}\{+1, -1, -1, -1\}$. Note that its inverse matrix is itself: $\eta^{\mu\nu} = \eta_{\mu\nu}$. The Lorentz transformation therefore has to satisfy

$$\eta_{\mu\nu}\, \frac{\partial \Lambda^\mu}{\partial x^\sigma}\, \frac{\partial \Lambda^\nu}{\partial x^\rho} = \eta_{\sigma\rho} \qquad [1]$$

Differentiating both sides (with respect to $x^\lambda$) gives

$$\eta_{\mu\nu}\, \frac{\partial^2 \Lambda^\mu}{\partial x^\sigma \partial x^\lambda}\, \frac{\partial \Lambda^\nu}{\partial x^\rho} + \eta_{\mu\nu}\, \frac{\partial \Lambda^\mu}{\partial x^\sigma}\, \frac{\partial^2 \Lambda^\nu}{\partial x^\rho \partial x^\lambda} = 0 \qquad [2]$$

Swapping the indices $\sigma, \lambda$ in [2] yields

$$\eta_{\mu\nu}\, \frac{\partial^2 \Lambda^\mu}{\partial x^\lambda \partial x^\sigma}\, \frac{\partial \Lambda^\nu}{\partial x^\rho} + \eta_{\mu\nu}\, \frac{\partial \Lambda^\mu}{\partial x^\lambda}\, \frac{\partial^2 \Lambda^\nu}{\partial x^\rho \partial x^\sigma} = 0 \qquad [3]$$

Swapping the indices $\rho, \lambda$ in [2] yields

$$\eta_{\mu\nu}\, \frac{\partial^2 \Lambda^\mu}{\partial x^\sigma \partial x^\rho}\, \frac{\partial \Lambda^\nu}{\partial x^\lambda} + \eta_{\mu\nu}\, \frac{\partial \Lambda^\mu}{\partial x^\sigma}\, \frac{\partial^2 \Lambda^\nu}{\partial x^\lambda \partial x^\rho} = 0 \qquad [4]$$

Forming [2] + [3] − [4], and noting that the order of partial derivatives can be exchanged and that $\eta_{\mu\nu} = \eta_{\nu\mu}$, we get

$$\begin{aligned} & 2\eta_{\mu\nu}\frac{\partial^2 \Lambda^\mu}{\partial x^\sigma \partial x^\lambda} \frac{\partial\Lambda^\nu}{\partial x^\rho} + \eta_{\mu\nu}\frac{\partial \Lambda^\mu}{\partial x^\lambda}\frac{\partial^2 \Lambda^\nu}{\partial x^\rho \partial x^\sigma} - \eta_{\mu\nu} \frac{\partial \Lambda^\nu}{\partial x^\lambda} \frac{\partial^2 \Lambda^\mu}{\partial x^\rho \partial x^\sigma} = 0 \\ \Leftrightarrow\quad & 2\eta_{\mu\nu}\frac{\partial^2 \Lambda^\mu}{\partial x^\sigma \partial x^\lambda} \frac{\partial\Lambda^\nu}{\partial x^\rho} = 0 \end{aligned}$$

As long as the coordinate transformation is assumed invertible, the Jacobian $J^\nu_\rho \equiv \frac{\partial \Lambda^\nu}{\partial x^\rho}$ and $\eta_{\mu\nu}$ can be cancelled, giving

$$\boxed{\ \frac{\partial^2 \Lambda^\mu}{\partial x^\sigma \partial x^\lambda} = 0\ }$$

**Remarks and discussion:**

1. $\frac{\partial^2 \Lambda^\mu}{\partial x^\sigma \partial x^\rho} = \Gamma^\lambda_{\sigma\rho}\, \frac{\partial \Lambda^\mu}{\partial x^\lambda}$, where $\Gamma$ is called the affine connection; it is closely tied to spacetime curvature, and in the weak-field approximation $\Gamma^i_{00}$ is the field strength of gravity (inertial forces included).
2. The proof above relies on $\eta_{\mu\nu} = \mathrm{diag}\{+1, -1, -1, -1\}$, i.e. rectangular coordinates; in curvilinear coordinates a Lorentz transformation need not be linear.
3. A Lorentz transformation maps straight lines of Minkowski space to straight lines, which is consistent with its being a transformation between inertial frames. One can of course ask the converse: is every (smooth) coordinate transformation connecting inertial frames linear? In other words, is every coordinate transformation that maps straight lines of Minkowski space to straight lines linear (allowing for a translation)? Affine geometry answers yes.
4. In relativity, when defining inertial frames we should of course only consider uniform straight-line motion of particles slower than light. The question above can therefore be strengthened: is every coordinate transformation that maps straight lines with "speed" below (or equal to) the speed of light to straight lines with "speed" below (or equal to) the speed of light linear? Note that the speed meant here is not the geometric velocity $dx^\mu/ds$ of the line ($s$ being some parameter describing the line), but the physical speed $v = \sqrt{d\mathbf{x}^2}/dt$ (in geometric terms, the slope). The answer is again yes.
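As an illustration (a minimal numerical sketch added here, not part of the original answer): for a linear transformation $\Lambda^\mu(x) = \Lambda^\mu{}_\nu x^\nu$ the Jacobian is the constant matrix $\Lambda^\mu{}_\nu$, and condition [1] reduces to $\Lambda^T \eta \Lambda = \eta$, which can be checked for a boost along $x$:

```python
import numpy as np

# Metric diag{+1, -1, -1, -1} and a boost along x with velocity beta (units c = 1).
eta = np.diag([1.0, -1.0, -1.0, -1.0])
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])

# Condition [1] for a constant Jacobian: Lambda^T eta Lambda == eta.
print(np.allclose(Lam.T @ eta @ Lam, eta))  # True
```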
---

> I. Convolution

We will work in 2 dimensions. Take two functions $f(x, y)$ and $g(x, y)$, both $\mathcal{R}^2 \rightarrow \mathcal{R}$. The convolution of $f$ and $g$ is a new function $c(x, y)$, also $\mathcal{R}^2 \rightarrow \mathcal{R}$, obtained by:

$$c(x,y)=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(s,t)\times g(x-s,y-t)\ ds\ dt$$

The meaning of this formula: sweep $s$ and $t$ over every value from negative to positive infinity, multiply the value of $g$ at $(x-s, y-t)$ by the value of $f$ at $(s, t)$, and "add" everything together (in the integral sense) to get the value of $c$ at $(x, y)$. Plainly put, convolution is a kind of "weighted sum": with $f$ as the weights and $(x, y)$ as the center, take the value of $g$ at offset $(-s, -t)$ from the center, multiply it by the value of $f$ at $(s, t)$, and add it all up. Writing the convolution formula in discrete form makes this even clearer:

$$C(x,y)=\sum_{t=-\infty}^{\infty}\sum_{s=-\infty}^{\infty}F(s,t)\times G(x-s,y-t)\ \Delta s\ \Delta t=\sum_{t=-\infty}^{\infty}\sum_{s=-\infty}^{\infty}F(s,t)\times G(x-s,y-t)$$

The second equality holds because we sample once per unit length here, so $\Delta s$ and $\Delta t$ are both 1. Let $G$ stand for a 100 × 100 grayscale image: $G(x, y)$ takes integer values in $[0, 255]$ and is the image's gray level at $(x, y)$; the coordinates $x$ and $y$ range over $[0, 99]$, and $G$ is 0 everywhere else. Let $F$ take particular values where $s$ and $t$ lie in $\{-1, 0, 1\}$, and 0 everywhere else; $F$ can then be seen as a 3 × 3 grid. See figure 1.1.

[Figure 1.1]

In figure 1.1, each small cell of $G$ holds the image's gray level at $(x, y)$, and each small cell of $F$ holds $F$'s value at $(s, t)$.

[Figure 1.2]

As figure 1.2 shows, align the center $(0, 0)$ of $F$ with $(6, 6)$ of $G$, multiply the values at the 9 corresponding positions of $F$ and $G$, and add the 9 products together; this gives the convolution value $C(6, 6)$. Computing the $C$ value at every position of $G$ produces a new image. Note three points:

1. $F$ is flipped top-bottom and left-right before being aligned with $G$, because in the convolution formula $F(s, t)$ multiplies $G(x**-s**, y**-t**)$. For example, $F(-1, -1)$ multiplies $G(7, 7)$.
2. If the values of $F$ do not sum to 1.0, then $C$ may fall outside $[0, 255]$, which is not a valid image gray level. So if the result should again be an image, $F$ must be normalized: make all its entries sum to 1.0.
3. For points on the edge of $G$, part of the surrounding neighborhood may lie beyond the image edge. One can treat values outside the edge as 0, or compute $C$ only at points whose whole neighborhood stays inside the image; the computed image is then somewhat smaller than the original. In the example above it shrinks by one ring of pixels; if $F$ covers a larger range, more rings are lost.

The operation above is exactly **discrete convolution** on a digital image, also called **filtering**; $F$ is called the **convolution kernel** or **filter**. Different filters serve different purposes. Imagine that $F$ is 3 × 3 with every cell equal to 1/9: filtering then amounts to replacing each point of the image by the average gray level of the 9 points in its 3 × 3 neighborhood. That ought to be a blur. Let's look at the effect.

[Figure 1.3]

The left image is the original grayscale lena. The middle one was filtered with the 3 × 3 all-1/9 filter and is slightly blurred; the blur is mild because the filter covers a small range. The right one uses a 9 × 9 filter with values 1/81, and the blurring is clearly stronger. Filters have many other uses. For example, this filter:

```
+----+----+----+
| -1 |  0 |  1 |
+----+----+----+
| -2 |  0 |  2 |
+----+----+----+
| -1 |  0 |  1 |
+----+----+----+
```

Note that this filter is not normalized (its sum is not 1.0), so the filtered values may fall outside $[0, 255]$. By subtracting the minimum, dividing by the difference between maximum and minimum, multiplying by 255 and rounding, the results can be brought back into $[0, 255]$, forming a grayscale image. Now let's try it on the lena image.

[Figure 1.4]

This filter brings out the edges of the image. It is the Sobel operator. Edge detection, image blurring and so on are all filters designed by people for specific purposes. What would a 9 × 9 **random** filter do?

[Figure 1.5]

As the figure shows, the effect again resembles blurring, because replacing a pixel's value by a randomly weighted sum over its 9 × 9 neighborhood is like "stirring paste". But one can see that the blur is not smooth.

At this point a thought suggests itself: instead of having a person design the filter, what if we start from a random filter and, guided by some objective, adjust it gradually by some method until it comes close to what we want; would that work? That is exactly the idea of the convolutional neural network (Convolutional Neural Network, CNN). The adjustable filter is the "convolution" part of a CNN; how to adjust the filter is its "neural network" part.
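To make the recipe concrete, here is a minimal sketch of the discrete convolution above (an added illustration, not from the original article), using the same flip-and-slide convention and treating values beyond the image edge as 0. The `convolve` function from `scipy.ndimage`, used in the code at the end of the article, implements the same idea far more efficiently.

```python
import numpy as np

def convolve2d(G, F):
    """Discrete 2-D convolution: C(x, y) = sum over s, t of F(s, t) * G(x - s, y - t)."""
    h, w = G.shape
    k = F.shape[0] // 2                  # F is (2k+1) x (2k+1), centered at (0, 0)
    C = np.zeros_like(G, dtype=float)
    for x in range(h):
        for y in range(w):
            acc = 0.0
            for s in range(-k, k + 1):
                for t in range(-k, k + 1):
                    xs, yt = x - s, y - t
                    if 0 <= xs < h and 0 <= yt < w:   # beyond the edge counts as 0
                        acc += F[s + k, t + k] * G[xs, yt]
            C[x, y] = acc
    return C

G = np.random.randint(0, 256, size=(8, 8)).astype(float)
box = np.full((3, 3), 1.0 / 9.0)         # the all-1/9 blur filter from the text
print(convolve2d(G, box)[4, 4])          # average gray level of the 3x3 neighborhood at (4, 4)
```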
---

> II. Neural networks

The artificial neural network (Neural Network, NN), as a computational model, has a history that even predates the computer. W.S. McCulloch and W. Pitts proposed the artificial neuron model back in the 1940s. But a single artificial neuron cannot even compute XOR. Connecting multiple artificial neurons into a network overcomes that limitation; yet for such networks, multilayer perceptrons, people at the time found no way to train them. Marvin Minsky, a giant of the artificial intelligence field, considered this computational model a dead end. Only in the 1970s and 80s did people discover the backpropagation algorithm (BP) for training multilayer perceptron networks. BP is in essence a gradient descent algorithm. Computing the gradient of a multilayer perceptron network looks very tedious at first glance but actually follows a pattern.

An artificial neuron is a simple mathematical model of a nerve cell. A nerve cell has several dendrites and one elongated axon. One neuron's axon connects to other neurons' dendrites and conducts nerve impulses to them. A neuron decides, based on the signals arriving at its dendrites, whether to send a nerve impulse from its axon to other neurons.

[Figure 2.1]

An **artificial neuron** is the mathematical model of the biological neuron (below, "neuron" means artificial neuron and "neural network" means artificial neural network). See figure 2.2.

[Figure 2.2]

$p_1, p_2, \ ...\ , p_n$ are the neuron's inputs and $a$ is the neuron's output. The neuron forms the weighted sum of the inputs $p_1, p_2, \ ...\ , p_n$, adds the bias value $b$, and finally applies a function $f$:

$$a=f(n)=f\left(\sum_{i=1}^{n} p_i w_i + b\right) = f\left((w_1, w_2 \cdots w_n)\left(\begin{array}{c} p_1 \\ p_2 \\ \vdots \\ p_n \end{array}\right)+b\right) = f\left(\mathcal{W}^T \mathcal{P} + b\right)$$

The last expression is the vector form of this equation: $P$ is the input vector, $W$ the weight vector, and $b$ the scalar bias value. $f$ is called the activation function (Activation Function) and can take many forms; figure 2.3 shows some commonly used activation functions.

[Figure 2.3]

That defines a single neuron. A neural network connects many such neurons into a network: one neuron's output becomes another neuron's input. Neural networks can have all sorts of topologies. The simplest is the "multilayer fully connected feedforward neural network": the network's input connects to every neuron of the first layer; each neuron's output in one layer connects to the input of every neuron of the next layer; and the outputs of the last layer's neurons are the outputs of the whole network.

Figure 2.4 shows a three-layer neural network. It accepts 10 inputs, i.e. a 10-element vector. The first and second layers have 12 neurons each; the last layer has 6 neurons, so this network outputs a 6-element vector. The last layer is called the output layer; the middle layers are called hidden layers.

[Figure 2.4]

The computation of the whole network can be written in matrix form. We give the formula for a single layer. Each layer has a different number of neurons, so the input/output dimensions, and hence the row and column counts of the matrices and vectors in the formula, differ, but the form is the same. Suppose the layer under consideration is layer $i$; it accepts $m$ inputs and has $n$ neurons ($n$ outputs). Its computation is:

$$\mathcal{O}^i=\left(\begin{array}{c} o_1^{i} \\ \vdots \\ o_n^{i} \end{array}\right)=f\left(\left(\begin{array}{ccc} w_{11}^i & \cdots & w_{1m}^i\\ \vdots & \ddots & \vdots\\ w_{n1}^i & \cdots & w_{nm}^i \end{array}\right) \left(\begin{array}{c} o_1^{i-1} \\ \vdots \\ o_m^{i-1} \end{array}\right) +\left(\begin{array}{c} b_1^{i} \\ \vdots \\ b_n^{i} \end{array}\right)\right)$$

The superscript $i$ marks layer $i$. $\mathcal{O}^i$ is the output vector, with $n$ elements because layer $i$ has $n$ neurons; the layer's input, i.e. the output of layer $i-1$, is an $m$-element vector. The weight matrix $W$ is an $n \times m$ matrix: $n$ neurons, each with $m$ weights. $W$ multiplies the $m$-vector output by layer $i-1$, giving an $n$-vector; add the $n$-element bias vector $b$, then apply the activation function $f$ to every element of the result, and we finally get the $n$-element output vector of layer $i$.
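As a sketch of this single-layer formula (my added illustration; the array sizes are made up), the whole layer is two lines of numpy:

```python
import numpy as np

def layer_forward(W, b, o_prev, f=np.tanh):
    """One layer: O^i = f(W @ O^{i-1} + b), with W of shape (n, m)."""
    return f(W @ o_prev + b)

rng = np.random.default_rng(0)
W = rng.normal(size=(12, 10))        # a layer with n = 12 neurons and m = 10 inputs
b = rng.normal(size=12)
p = rng.normal(size=10)              # input vector
print(layer_forward(W, b, p).shape)  # (12,)
```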
If one doesn't mind the tedium, the output of layer $i-1$ can be expanded as well, eventually producing one enormous expression: the computation of the entire fully connected feedforward neural network. One can see that the whole network is really just a function from vectors to vectors. Which function it is depends on the network topology and on the weights and biases of every neuron. With randomly chosen weights and biases the network is useless. What we want is a useful neural network, one that exhibits the behavior we want.

To achieve that, first prepare a training set: a collection of "input-output pairs" sampled from the target function. Feed the training-set inputs to the network; the outputs obtained are certainly not the correct outputs, because at the start the network's behavior is random.

Feed one training sample to the network and compute the squared norm of the (vector) difference between the output and the target output (the inner product of the difference with itself). Then average the squared norms over all $n$ samples to obtain $e$:

$$e=\frac{1}{2n} \sum_{i=1}^{n} \|o_i^{real}-o_i^{output}\|^2$$

$e$ is called the mean squared error, mse. The smaller the distances (norms of differences) between all output vectors and target output vectors, the smaller $e$; and the smaller $e$, the closer the network's behavior is to the desired behavior.

The goal is to make $e$ small. Here $e$ can be viewed as a function of all the weights and biases, so this becomes an unconstrained optimization problem. If a global minimum can be found where the value of $e$ is acceptable, the network can be considered trained: it fits the target function well. The function being optimized can also be something other than mse; such functions are collectively called cost functions, all denoted $e$.

The classic training algorithm for neural networks is backpropagation (Back Propagation, BP). BP belongs to the gradient descent method (Gradient Descend) of optimization theory. Treat the error $e$ as a function of all weights and all biases; the algorithm's aim is to find the global minimum of $e$ in the space of these variables.

First initialize all weights and biases randomly; then, in the variable space, move one step in the direction opposite to the gradient of the error function $e$ at the current point. Along the negative gradient the directional derivative is smallest and the function value drops fastest. The step size is called the learning rate (Learning Rate, LR). Iterating like this, the solution eventually (at least, one hopes) moves to the global minimum of the error surface (see the column article: 神经网络之梯度下降与反向传播(上)).

Figure 2.5 shows matlab training an extremely simple neural network: single input, single output, two neurons in the input layer and one in the output layer, i.e. 4 weights plus 3 biases in all. The figure shows the $e$ surface obtained by fixing the other weights and letting only the first layer's first neuron's weight $w_{(1,1)}^1$ and bias $b_1^1$ vary, together with the trajectory of the solution as the algorithm iterates.

[Figure 2.5]

In the end the algorithm did not converge to the global optimum (the red +). But the solution has moved to the bottom of a valley; the bottom is so flat that the solution "can't walk" any further. The solution found is not much worse than the optimum.

For a slightly more complex network, $e$ is a complicated function of the weights and biases, and computing the gradient requires the partial derivative of $e$ with respect to every weight and bias. Fortunately the formula for these partials does not get more complicated the farther a weight or bias sits from the output layer. For each neuron one computes a value delta, called the "local error" or "sensitivity"; once every neuron's delta is known, the partial derivative of $e$ with respect to any weight or bias follows easily.

Computing a neuron's delta uses the derivative of that neuron's activation function; for the output layer it uses the difference between output and target output, and for a hidden layer it uses the deltas of the next layer's neurons. Each neuron passes its delta to the neurons of the previous layer, and each neuron of the previous layer collects the deltas of the following layer to compute its own. Hence the name "backpropagation": the "local error" or "sensitivity" delta propagates backwards, layer by layer, until the partials of all weights and biases, i.e. the gradient, are obtained. For detailed derivations see chapter 8 of book [1], chapter 11 of [2], chapter 4 of [3], chapter 3 of [4], or chapter 11 of [5] (or the column article: 神经网络之梯度下降与反向传播(下)).

Gradient descent has many variants. Tuning the learning rate LR can speed up convergence; adding momentum can help avoid local optima and reduce oscillation. One can also compute $e$ not over all samples each time, but over a randomly chosen subset, updating the weights according to their $e$. Gradient descent is based on first-order properties of the error function; other methods, such as Newton's method, optimize using second-order properties. Optimization, as a discipline of applied mathematics, is an important theoretical foundation of machine learning, with a wealth of results and methods in both theory and implementation. See [1].
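The update rule itself is tiny. A minimal sketch of gradient descent (my added illustration, on a toy quadratic cost rather than a real network):

```python
import numpy as np

# Toy cost e(w) = ||w - target||^2, whose gradient is 2 (w - target).
target = np.array([1.0, -2.0])

def grad_e(w):
    return 2.0 * (w - target)

w = np.random.default_rng(1).normal(size=2)  # random initialization
lr = 0.1                                     # learning rate
for _ in range(100):
    w = w - lr * grad_e(w)                   # step against the gradient

print(w)  # close to [1.0, -2.0]
```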
---

> III. Convolutional neural networks

Now combine the two ideas, convolution filters and neural networks. A convolution filter is nothing but a set of weights, and a neural network can have topologies other than full connection. We can construct the network sketched in figure 3.1.

[Figure 3.1]

This network accepts $n \times n$ inputs and produces $n \times n$ outputs. The left plane in the figure contains $n \times n$ cells, each holding an integer in $[0, 255]$: it is the input image, and the input of this network. The right plane also has $n \times n$ cells, each of which is a neuron. Each neuron connects to the 3 × 3 range of inputs around its corresponding position, each connection carrying a weight. All neurons connect this way (the figure draws only one; connections that fall outside the input image are taken to connect to the constant 0). Each neuron of the right layer multiplies the 3 × 3 input values it connects to by the connection weights and sums them; that is the neuron's output. The outputs of the $n \times n$ neurons are the output of this network.

This network differs from a fully connected one in two respects. First, it is not fully connected: a right-layer neuron connects not to all inputs but only to a part of them, namely a local region of the input image. When we hear that a CNN can capture local image features, this is what is meant. That alone means far fewer weights, because there are fewer connections. But the weights are fewer still, because each neuron's 9 weights are shared with all the other neurons: all $n \times n$ neurons use this same common set of 9 weights, and there is no bias. So this network has in total only 9 parameters to adjust.

Those who read section I will have seen it: the computation this network performs is exactly a convolution filter, only with the kernel parameters undetermined, left for us to train. It is a "trainable filter". This network is in fact a CNN with a single convolutional layer whose convolutional layer has a single filter (channel).

Let us try training this network with the picture filtered by the Sobel operator as the target value. The input given to the network is the grayscale lena image; the target output is the Sobel-filtered lena image, see figure 1.4. This single pair of input and output images constitutes the training set. The network weights are randomly initialized and training runs for 2000 rounds. See figure 3.2.

[Figure 3.2]

From top-left to bottom-right: the output of the initial random filter, the filter's output after each further 200 rounds of training (10 images), and finally the output of the Sobel operator, i.e. the target image used for training. After the first 200 rounds the network's output is already indistinguishable from the Sobel operator's output, and the outputs of the later rounds look essentially the same. Figure 3.3 shows how the mean squared error mse between the network's output and the target changes with the training round.

[Figure 3.3]

After about 1500 rounds the mse is essentially 0. The network's weights after training are:

```
+-------+--------+--------+
|  1.29 |   0.04 |  -1.31 |
+-------+--------+--------+
|  1.43 |   0.01 |  -1.45 |
+-------+--------+--------+
|  1.34 |  -0.07 |  -1.28 |
+-------+--------+--------+
```

Compare with the Sobel operator:

```
+----+----+----+
| -1 |  0 |  1 |
+----+----+----+
| -2 |  0 |  2 |
+----+----+----+
| -1 |  0 |  1 |
+----+----+----+
```

Note that the trained filter has its negative column on the right rather than the left. That is because computing the convolution flips the filter top-bottom and left-right before laying it on the image; it does not matter, the essence is the same. What matters is one positive column, one negative column, and a zero-valued column in between, with the three values of each nonzero column in a ratio of roughly 1:2:1. What we obtained is an approximate Sobel operator: by training a neural network, we trained a random filter into the Sobel operator. Such is the magic of optimization (code at the end of this article).

In a CNN such a filter layer is called a convolutional layer. A convolutional layer can have multiple filters, each called a channel. Images are two-dimensional signals; signals can also be of other dimensions, one-dimensional, three-dimensional or even higher, and filters correspondingly come in all dimensions. Back to the 2-D image example: a convolutional layer actually faces a multi-channel "stack" of 2-D images. A 100 × 100 color image, for instance, has the three channels RGB, and its data dimensions are 3 × 100 × 100. A convolutional layer wired directly to the color-image input thus faces 3 × 100 × 100 data, and its filters are 3-dimensional, with the first dimension equal to the number of input channels (here 3) and the 2nd and 3rd dimensions being the chosen filter size, e.g. 5 × 5. The convolutional layer filters the incoming multi-channel stack of 2-D images through a 3-D filter into a single 2-D image. If the layer has 32 filters, it outputs 32 channels, each channel a 2-D image.

Activation functions make up another kind of CNN layer, the activation layer; such a layer has no trainable parameters. It applies an activation function, for example Sigmoid or Tanh, to its input.

There is also a kind of layer called the Pooling layer. It too has no parameters and serves to reduce dimensionality. The input is cut into non-overlapping $n \times n$ regions, each containing $n \times n$ values, from which one value is computed, for example by averaging or by taking the maximum. Suppose $n = 2$: then 4 inputs become one output, and the output image is 1/4 the size of the input image. A 2-dimensional layer can be flattened into a one-dimensional vector, which can then be followed by a fully connected feedforward neural network. (A small sketch of average pooling follows at the end of this section.)

Composing these components yields a CNN. It takes the raw image directly as input, outputs the conclusion of the final regression or classification problem, internally combines filter-based image processing with function fitting, and trains all parameters together. That is the convolutional neural network.
---

> IV. An example

Handwritten digit recognition. The dataset contains 42000 handwritten-digit grayscale images of size 28 × 28, with roughly equal sample counts for the ten digits (0–9). To reduce training time, 10000 of them are drawn at random. Figure 4.1 shows some of them.

[Figure 4.1]

75% of the sample set is used for training and the remaining 25% for testing. We construct a CNN structured as in figure 4.2.

[Figure 4.2]

This CNN has 9 layers (not counting the input layer). It accepts a 784-element vector as input, i.e. a 28 × 28 grayscale picture. The picture is not reshaped to 28 × 28 before being fed in, because the first layer of the CNN is a reshape layer that turns the 784-element input vector into a 1 × 28 × 28 array. The leading 1 × means there is only one channel, since this is a grayscale image; a color image would have the three channels RGB.

Next comes a convolutional layer. It contains 32 filters of size 3 × 3, so its output dimensions are 32 × 28 × 28: the 32 filters produce 32 images (channels), each 28 × 28 in size. A subsequent 2 × 2 average-pooling layer halves the dimensions: 32 × 14 × 14.

Then comes the second convolutional layer, containing 64 filters of size 32 × 3 × 3; its output dimensions are 64 × 14 × 14. Note that this layer's input is 32 channels, each 14 × 14, which can be viewed as one 3-dimensional input of 32 × 14 × 14; the layer's filters are 3-dimensional, 32 × 3 × 3. The layer's output dimensions are 64 × 14 × 14. Another 2 × 2 average-pooling layer follows, with output dimensions 64 × 7 × 7.

Next is a flatten layer, with no computation and no parameters; it only changes the shape of the data, flattening 64 × 7 × 7 into a 3136-element vector. That vector is sent to a subsequent three-layer fully connected network of structure 1000 × 1000 × 10: the two hidden layers have 1000 neurons each, and the final output layer has 10 neurons representing the 10 digits. If the sixth output is 1 and the remaining outputs are 0, the network judges the handwritten digit to be "5" (digit "0" occupies the first output, so "5" occupies the sixth). The digit "5" is thus encoded as:

$$\left(\begin{array}{c} 0\\0\\0\\0\\0\\1\\0\\0\\0\\0 \end{array}\right)$$

The digit labels of both training set and test set are encoded this way (one-hot encoding).

The hidden layers of the fully connected network use the Sigmoid activation function; the output layer uses Linear. The error function is the mean squared error mse. The optimization algorithm is stochastic gradient descent SGD, a variant of gradient descent that computes the gradient of $e$ not from all samples but, in each iteration, from a randomly chosen subset of them. The learning rate LR starts at 0.01 and decays at a rate of 1e-6 per iteration; momentum is set with parameter 0.9. Training runs for 10 epochs. Note that 10 epochs here does not mean the solution takes only 10 steps in solution space: one epoch means all 7500 training samples have been fed through the network and iterated once, and each weight update is submitted to the algorithm with a batch of 32 samples.

Figure 4.3 shows how the mse and the classification accuracy change as training proceeds (the horizontal axis is on a log scale).

[Figure 4.3]

This CNN's accuracy on the test set is 96.12%; each digit's precision / recall / f1-score are as follows:
```
             precision    recall  f1-score   support

          0       0.96      0.98      0.97       252
          1       0.99      0.99      0.99       281
          2       0.98      0.94      0.96       240
          3       0.97      0.95      0.96       258
          4       0.96      0.92      0.94       239
          5       0.98      0.95      0.97       219
          6       0.96      0.99      0.97       273
          7       0.97      0.97      0.97       259
          8       0.92      0.96      0.94       231
          9       0.91      0.97      0.94       248

avg / total       0.96      0.96      0.96      2500
```
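For reference (an added note, not in the original article): for each class, $\mathrm{precision} = TP/(TP+FP)$ and $\mathrm{recall} = TP/(TP+FN)$, the f1-score is their harmonic mean, and support is the number of test samples of that class.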
After the neural network is trained, the most interesting thing is to display its internal weights in some way. Watching those mysterious, inscrutable connection strengths end up producing outwardly meaningful behavior cannot help but remind us of how the neuronal connections in our brains constitute our memories, personality, emotions ... food for thought.

A CNN is especially suited to this kind of thing, because what its convolutional layers train are filters. Run the input image through those filters and see what the CNN actually "sees". Figure 4.4 shows the 32 outputs of this CNN's first convolutional layer for a handwritten digit "5".

[Figure 4.4]

Next, look at the 64 images output by the second convolutional layer.

[Figure 4.5]

This is the information the CNN "sees" after two filtering steps. Now render the 3136-element output of the flatten layer, as follows: take 100 samples of each of the ten digits "0"–"9" (1000 in all), let the output for each sample be one row, obtain a 1000 × 3136 image, and display it in pseudocolor according to the values. See figure 4.6.

[Figure 4.6]

Can you see 10 bands in it, each band corresponding to the 100 samples of one digit? Displaying the outputs of the two fully connected layers in the same way gives two 1000 × 1000 pseudocolor images. See figure 4.7.

[Figure 4.7]

Through the successive convolutional layers, pooling layers and fully connected layers, the representation of the information grows more abstract layer by layer. That is how the CNN "recognizes" handwritten digits. The multiple layers of a CNN raise the "logical depth" layer by layer; that is the meaning of "Deep Learning".

Finally, the code. The CNN is implemented with the keras library; the dataset comes from kaggle: https://www.kaggle.com/c/digit-recognizer/data .

```python
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Flatten, Reshape, AveragePooling2D, Convolution2D, Activation
from keras.utils.np_utils import to_categorical
from keras.utils.visualize_util import plot
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
from keras.callbacks import Callback
from keras.optimizers import SGD


# Callback that records the loss and accuracy of every batch during training.
class LossHistory(Callback):
    def __init__(self):
        Callback.__init__(self)
        self.losses = []
        self.accuracies = []

    def on_train_begin(self, logs=None):
        pass

    def on_batch_end(self, batch, logs=None):
        self.losses.append(logs.get('loss'))
        self.accuracies.append(logs.get('acc'))


history = LossHistory()

# Load the kaggle digits data and randomly draw 10000 samples.
data = pd.read_csv("train.csv")
data = data.sample(n=10000, replace=False)
digits = data[data.columns.values[1:]].values
labels = data.label.values

# 75% train / 25% test split; one-hot encode the labels.
train_digits, test_digits, train_labels, test_labels = train_test_split(digits, labels)
train_labels_one_hot = to_categorical(train_labels)
test_labels_one_hot = to_categorical(test_labels)

# The 9-layer CNN described in the text (figure 4.2).
model = Sequential()
model.add(Reshape(target_shape=(1, 28, 28), input_shape=(784,)))
model.add(
    Convolution2D(nb_filter=32, nb_row=3, nb_col=3, dim_ordering="th", border_mode="same", bias=False, init="uniform"))
model.add(AveragePooling2D(pool_size=(2, 2), dim_ordering="th"))
model.add(
    Convolution2D(nb_filter=64, nb_row=3, nb_col=3, dim_ordering="th", border_mode="same", bias=False, init="uniform"))
model.add(AveragePooling2D(pool_size=(2, 2), dim_ordering="th"))
model.add(Flatten())
model.add(Dense(output_dim=1000, activation="sigmoid"))
model.add(Dense(output_dim=1000, activation="sigmoid"))
model.add(Dense(output_dim=10, activation="linear"))

# Save the architecture and a structure diagram.
with open("digits_model.json", "w") as f:
    f.write(model.to_json())
plot(model, to_file="digits_model.png", show_shapes=True)

# SGD with learning-rate decay and momentum, as described above; 10 epochs, batch size 32.
opt = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss="mse", optimizer=opt, metrics=["accuracy"])
model.fit(train_digits, train_labels_one_hot, batch_size=32, nb_epoch=10, callbacks=[history])
model.save_weights("digits_model_weights.hdf5")

# Evaluate on the test set.
predict_labels = model.predict_classes(test_digits)
print(classification_report(test_labels, predict_labels))
print(accuracy_score(test_labels, predict_labels))
print(confusion_matrix(test_labels, predict_labels))
```
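As a side note (my added sketch, assuming the script above has been run, in the same keras 1.x idiom): intermediate feature maps like those of figures 4.4 and 4.5 can be pulled out with a backend function; layer index 1 is the first convolutional layer, right after the reshape layer.

```python
from keras import backend as K

# Outputs of the first convolutional layer for one test sample.
# Expected shape under the "th" ordering used above: (1, 32, 28, 28).
get_conv1 = K.function([model.layers[0].input], [model.layers[1].output])
feature_maps = get_conv1([test_digits[:1]])[0]
print(feature_maps.shape)
```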
The code for training the Sobel operator from the lena image:

```python
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.callbacks import Callback
from PIL import Image
import numpy as np
from scipy.ndimage.filters import convolve


# Callback that records the loss of every batch during training.
class LossHistory(Callback):
    def __init__(self):
        Callback.__init__(self)
        self.losses = []

    def on_train_begin(self, logs=None):
        pass

    def on_batch_end(self, batch, logs=None):
        self.losses.append(logs.get('loss'))


lena = np.array(Image.open("lena.png").convert("L"))
lena_sobel = np.zeros(lena.shape)

# The Sobel operator.
sobel = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1]
])

# Compute the convolution: filter with the Sobel operator; the result goes into lena_sobel.
convolve(input=lena, output=lena_sobel, weights=sobel, mode="constant", cval=1.0)

# Rescale the pixel values into [0, 255] and save the Sobel-filtered lena image.
lena_tmp = np.uint8((lena_sobel - lena_sobel.min()) * 255 / (lena_sobel.max() - lena_sobel.min()))
Image.fromarray(lena_tmp).save("lena_sobel.png")

# Reshape the original and Sobel-filtered lena images to (1, 1, width, height).
# The first 1 means the training set has only one sample; the second 1 means a single channel.
X = lena.reshape((1, 1) + lena.shape)
Y = lena_sobel.reshape((1, 1) + lena_sobel.shape)
# ... (the original post is cut off here)
```
