Chapter 5 Feedback Neural Network
5.1 Hopfield neural network
5.2 BAM network

5.1 Hopfield neural network
A feedback network is a nonlinear dynamic system. In 1982, Hopfield J. J. proposed a single-layer feedback network composed of nonlinear components:
- described by a nonlinear difference equation: the discrete HNN (DHNN);
- described by a nonlinear differential equation: the continuous HNN (CHNN).
Applications: associative memory or classification; computing for optimization.

5.1.1 Discrete Hopfield Neural Network
A. Hopfield network architecture
[Figure: single-layer feedback architecture with nodes $N_1,\dots,N_n$, inputs $x_1,\dots,x_n$, states $v_1(t),\dots,v_n(t)$, outputs $y_1,\dots,y_n$, and connections $w_{ij}$.]
[Figure: Hopfield network architecture 2, with unit delays $z^{-1}$, weights $w_{11},\dots,w_{nn}$, thresholds $\theta_1,\dots,\theta_n$, internal states $u_i(t)$, and states $x_i(t)\to x_i(t+1)$.]

Computing formula:
$$u_i(t)=\sum_{j=1}^{n}w_{ij}x_j(t)-\theta_i,\qquad x_i(t+1)=\mathrm{sgn}\big(u_i(t)\big),\quad i=1,2,\dots,n$$
$$\mathrm{sgn}(x)=\begin{cases}1,&x\ge 0\\[2pt]-1,&x<0\end{cases}$$

B. Working principle
1. Asynchronous (serial) pattern: only one node $i$ is updated at a time,
   $x_i(t+1)=\mathrm{sgn}(u_i(t))$, $x_j(t+1)=x_j(t)$ for $j\ne i$.
2. Synchronous (parallel) pattern: all nodes are updated simultaneously,
   $x_i(t+1)=\mathrm{sgn}(u_i(t))$, $i=1,2,\dots,n$.
3. Network stability: the state is stable when
   $x_i(t+1)=x_i(t)=\mathrm{sgn}\Big(\sum_{j=1}^{n}w_{ij}x_j(t)-\theta_i\Big)$, $i=1,2,\dots,n$.

Stability analysis
Using the "energy function"
$$E=-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}x_ix_j+\sum_{i=1}^{n}\theta_ix_i$$
or, in matrix format,
$$E=-\frac{1}{2}X^{\mathrm T}WX+X^{\mathrm T}\Theta.$$
Since $x_i=\pm 1$ and the $w_{ij}$, $\theta_i$ are bounded constants, the energy function is bounded. If $\Delta E=E(t+1)-E(t)\le 0$, the network converges to a steady state.

Assumption: serial pattern, $w_{ij}=w_{ji}$, $w_{kk}\ge 0$, and only the $k$-th node is updated at time $t$. Let $\Delta x_k=x_k(t+1)-x_k(t)$. Since $x_k(t+1)=\mathrm{sgn}(u_k(t))$,
$$\Delta x_k=\begin{cases}0,&x_k(t+1)=x_k(t)\\[2pt]2,&x_k(t)=-1,\ \mathrm{sgn}(u_k(t))=1\\[2pt]-2,&x_k(t)=1,\ \mathrm{sgn}(u_k(t))=-1.\end{cases}$$
Because only the $k$-th component changes and $w_{ij}=w_{ji}$,
$$\Delta E=-\Delta x_k\Big(\sum_{j=1}^{n}w_{kj}x_j(t)-\theta_k\Big)-\frac{1}{2}w_{kk}(\Delta x_k)^2=-\Delta x_k\,u_k(t)-\frac{1}{2}w_{kk}(\Delta x_k)^2.$$
Moreover
$$\Delta x_k\,u_k(t)=\big[\mathrm{sgn}(u_k(t))-x_k(t)\big]u_k(t)=|u_k(t)|-x_k(t)u_k(t)\ge 0,$$
so with $w_{kk}\ge 0$ we get $\Delta E\le 0$. The energy function is bounded, therefore the network converges to a local minimum point.
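The update rules and the energy function above are easy to exercise numerically. Below is a minimal NumPy sketch (the function names and the calling conventions are illustrative, not from the text) of the serial and parallel update patterns together with the energy $E=-\frac{1}{2}x^{\mathrm T}Wx+\theta^{\mathrm T}x$:

```python
import numpy as np

def sgn(x):
    # sgn as defined above: +1 for x >= 0, -1 for x < 0
    return np.where(x >= 0, 1, -1)

def energy(W, x, theta):
    # E = -1/2 x^T W x + theta^T x
    return -0.5 * x @ W @ x + theta @ x

def update_async(W, x, theta, order):
    # serial (asynchronous) pattern: update one node at a time,
    # visiting the nodes in the given order; other nodes keep their state
    x = x.copy()
    for i in order:
        x[i] = sgn(W[i] @ x - theta[i])
    return x

def update_sync(W, x, theta):
    # parallel (synchronous) pattern: update all nodes at once
    return sgn(W @ x - theta)
```

With a symmetric W and non-negative diagonal, energy() is non-increasing along serial updates, which is exactly the $\Delta E\le 0$ argument above.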

5.1.2 Associative Memory
A. Auto-association Memory
Two special points:
1. Content-addressable memory.
2. Information is distributed over a field of nodes.
The M samples $x^k$ ($k=1,2,\dots,M$) are stored in the network through a learning process. If the input is $x=x^k+v$, where $x^k$ is one of the M samples and $v$ is a bias (noise) item, then the output is $Y=x^k$.

B. Hetero-association Memory
Stores the relationship between two sample sets: $x^k\to y^k$, $k=1,2,\dots,M$. For example, $x^k$ stands for a man's picture and $y^k$ stands for his name. If the input is $x=x^k+v$, then the output is $Y=y^k$.

Network computing phases:
- Learning: forming $W$ (corresponding to $x^1,x^2,\dots,x^p$);
- Initialization: $v(0)=x$ (assuming the input is $x=x^k$);
- Running: $v(t+1)=\mathrm{sgn}\big(v(t)W\big)$;
- Steady output: $v=x^k$.

C. Learning rule
[Figure: associative structure with inputs $x_1,x_2,\dots,x_n$ and outputs $y_1,y_2,\dots,y_n$.]
Transfer function: $f(z)=\mathrm{sgn}(z)$.
Outer product rule:
$$W=\sum_{k=1}^{m}\Big(x^k\big(x^k\big)^{\mathrm T}-I\Big)$$
where $I$ is the $n\times n$ unit matrix (so that $w_{ii}=0$), $x^k=[x_1^k,x_2^k,\dots,x_n^k]^{\mathrm T}$, $k=1,2,\dots,m$, and $x_i^k=\pm 1$, $i=1,2,\dots,n$. Expanding,
$$W=\sum_{k=1}^{m}x^k\big(x^k\big)^{\mathrm T}-mI=\begin{pmatrix}0&\sum_k x_1^kx_2^k&\cdots&\sum_k x_1^kx_n^k\\ \sum_k x_2^kx_1^k&0&\cdots&\sum_k x_2^kx_n^k\\ \vdots&&\ddots&\vdots\\ \sum_k x_n^kx_1^k&\sum_k x_n^kx_2^k&\cdots&0\end{pmatrix}$$
that is,
$$w_{ij}=\begin{cases}\sum_{k=1}^{m}x_i^kx_j^k,&i\ne j\\[2pt]0,&i=j\end{cases}\qquad i,j=1,2,\dots,n.$$
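The outer product rule translates directly into a few lines of NumPy; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def outer_product_weights(samples):
    # W = sum_k (x^k (x^k)^T - I): Hebbian outer products with zero diagonal
    n = samples[0].size
    W = np.zeros((n, n))
    for x in samples:
        W += np.outer(x, x) - np.eye(n)
    return W
```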

Hopfield network association and recall algorithm
* Learning phase
1. Given m samples $x^k=[x_1^k,x_2^k,\dots,x_n^k]$, $k=1,2,\dots,m$, set $W=0$, $k=0$, and begin the iteration.
2. Calculate $w_{ij}(k+1)=w_{ij}(k)+x_i^{k+1}x_j^{k+1}$ for $i\ne j$ (keeping $w_{ii}=0$).
3. $k=k+1$; if the number of iterations is less than m, go to 2, else end.
* Association and recall phase
1. Input vector $x$: $x_i(0)=x_i$, $1\le i\le N$.
2. Iterate $x_j(t+1)=f\Big(\sum_{i=1}^{N}w_{ij}x_i(t)\Big)$, $j=1,2,\dots,N$.
3. If not converged, go to 2.

Example 1
$x=[1\ \ -1\ \ 1]^{\mathrm T}$. Weight matrix (outer product rule):
$$W=xx^{\mathrm T}-I=\begin{pmatrix}0&-1&1\\-1&0&-1\\1&-1&0\end{pmatrix}$$
$$y=f(Wx)=f\big([2\ \ -2\ \ 2]^{\mathrm T}\big)=[1\ \ -1\ \ 1]^{\mathrm T}=x$$
Now test the associative function with aberrant (noisy) inputs:
(1) $x_1=[0\ \ -1\ \ 1]^{\mathrm T}$: $y_1=f(Wx_1)=f\big([2\ \ -1\ \ 1]^{\mathrm T}\big)=[1\ \ -1\ \ 1]^{\mathrm T}=x$
(2) $x_2=[0\ \ 0\ \ 1]^{\mathrm T}$: $y_2=f(Wx_2)=f\big([1\ \ -1\ \ 0]^{\mathrm T}\big)=[1\ \ -1\ \ 1]^{\mathrm T}=x$
(3) $x_3=[0\ \ 0\ \ 0]^{\mathrm T}$: $y_3=f(Wx_3)=f\big([0\ \ 0\ \ 0]^{\mathrm T}\big)=[1\ \ 1\ \ 1]^{\mathrm T}$

Example 2
$x^{(1)}=[1\ \ 1\ \ 1\ \ 1]^{\mathrm T}$, $x^{(2)}=[-1\ \ -1\ \ -1\ \ -1]^{\mathrm T}$
$$W=x^{(1)}\big(x^{(1)}\big)^{\mathrm T}+x^{(2)}\big(x^{(2)}\big)^{\mathrm T}-2I=\begin{pmatrix}0&2&2&2\\2&0&2&2\\2&2&0&2\\2&2&2&0\end{pmatrix}$$
Recall and check:
$$f\big(Wx^{(1)}\big)=f\big([6\ \ 6\ \ 6\ \ 6]^{\mathrm T}\big)=x^{(1)},\qquad f\big(Wx^{(2)}\big)=f\big([-6\ \ -6\ \ -6\ \ -6]^{\mathrm T}\big)=x^{(2)}$$
Both stored patterns are stable states.
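Both examples can be checked mechanically. A short verification sketch, assuming the sgn() and outer_product_weights() helpers from the earlier sketches:

```python
import numpy as np
# assumes sgn() and outer_product_weights() defined as sketched above

# Example 1: a single stored pattern
x = np.array([1, -1, 1])
W = outer_product_weights([x])        # [[0,-1,1],[-1,0,-1],[1,-1,0]]
print(sgn(W @ x))                     # [ 1 -1  1] -> the stored pattern

# Example 2: two opposite patterns
x1 = np.array([1, 1, 1, 1])
W2 = outer_product_weights([x1, -x1])   # zero diagonal, off-diagonal 2s
print(sgn(W2 @ x1), sgn(W2 @ -x1))      # recovers x^(1) and x^(2)
```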

Association memory function
Take $x^{(3)}(0)=[-1\ \ 1\ \ 1\ \ 1]^{\mathrm T}$. Using the asynchronous pattern with update order 1, 2, 3, 4:
$$x_1(1)=f\Big(\sum_{j}w_{1j}x_j(0)\Big)=f(6)=1,\qquad x_i(1)=x_i(0),\ i=2,3,4$$
so $x(1)=[1\ \ 1\ \ 1\ \ 1]^{\mathrm T}=x^{(1)}$: the network converges to $x^{(1)}$ in only one step.

Take $x^{(4)}(0)=[1\ \ -1\ \ -1\ \ -1]^{\mathrm T}$, same order 1, 2, 3, 4:
$$x_1(1)=f(-6)=-1,\qquad x_i(1)=x_i(0),\ i=2,3,4$$
so $x(1)=[-1\ \ -1\ \ -1\ \ -1]^{\mathrm T}=x^{(2)}$: only one step converges to $x^{(2)}$.

Take $x^{(5)}(0)=[1\ \ 1\ \ -1\ \ -1]^{\mathrm T}$, update order 1, 2, 3, 4:
Step 1: $x_1(1)=f(-2)=-1$, $x_i(1)=x_i(0)$ for $i=2,3,4$, so $x(1)=[-1\ \ 1\ \ -1\ \ -1]^{\mathrm T}$, which converges to neither $x^{(1)}$ nor $x^{(2)}$.
Step 2: $x_2(2)=f(-6)=-1$, $x_i(2)=x_i(1)$ for $i=1,3,4$, so $x(2)=[-1\ \ -1\ \ -1\ \ -1]^{\mathrm T}=x^{(2)}$: converges to $x^{(2)}$.

Now take the same $x^{(5)}(0)$ with update order 3, 4, 1, 2:
Step 1: $x_3(1)=f(2)=1$, $x_i(1)=x_i(0)$ for $i=1,2,4$, so $x(1)=[1\ \ 1\ \ 1\ \ -1]^{\mathrm T}$, which does not converge to a stored vector.
Step 2: $x_4(2)=f(6)=1$, $x_i(2)=x_i(1)$ for $i=1,2,3$, so $x(2)=[1\ \ 1\ \ 1\ \ 1]^{\mathrm T}=x^{(1)}$: converges to $x^{(1)}$.
The same initial state can thus converge to different attractors depending on the update order.
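The order dependence can be reproduced with the asynchronous update sketched earlier (update_async() and the matrix W2 are the illustrative helpers defined above; the thresholds are zero here):

```python
import numpy as np
# assumes update_async() and W2 from the previous sketches

x5 = np.array([1, 1, -1, -1])          # x^(5)(0)
theta = np.zeros(4)
print(update_async(W2, x5, theta, order=[0, 1, 2, 3]))  # [-1 -1 -1 -1] = x^(2)
print(update_async(W2, x5, theta, order=[2, 3, 0, 1]))  # [ 1  1  1  1] = x^(1)
```

One sweep through all four nodes reaches the attractor in both orders; this matches the two-step convergence above, since the remaining node updates leave the state unchanged.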

Synchronous computing
$x^{(3)}(0)=[-1\ \ 1\ \ 1\ \ 1]^{\mathrm T}$:
$$x(1)=f\big(Wx(0)\big)=f\big([6\ \ 2\ \ 2\ \ 2]^{\mathrm T}\big)=[1\ \ 1\ \ 1\ \ 1]^{\mathrm T},\qquad x(2)=f\big([6\ \ 6\ \ 6\ \ 6]^{\mathrm T}\big)=x^{(1)}$$
Converges to $x^{(1)}$.
$x^{(4)}(0)=[1\ \ -1\ \ -1\ \ -1]^{\mathrm T}$:
$$x(1)=f\big([-6\ \ -2\ \ -2\ \ -2]^{\mathrm T}\big)=[-1\ \ -1\ \ -1\ \ -1]^{\mathrm T},\qquad x(2)=f\big([-6\ \ -6\ \ -6\ \ -6]^{\mathrm T}\big)=x^{(2)}$$
Converges to $x^{(2)}$.
$x^{(5)}(0)=[1\ \ 1\ \ -1\ \ -1]^{\mathrm T}$:
$$x(1)=f\big([-2\ \ -2\ \ 2\ \ 2]^{\mathrm T}\big)=[-1\ \ -1\ \ 1\ \ 1]^{\mathrm T},\qquad x(2)=f\big([2\ \ 2\ \ -2\ \ -2]^{\mathrm T}\big)=x(0)$$
The network oscillates between two states.

Digit-recognition experiment:
Each pattern is a 12×10 black-and-white image of 120 pixels: +1 denotes a black pixel and -1 a white one. The network size is N = 120, with N² - N = 14,280 connection weights. The weights were first learned with the outer product rule, and the basic (stored) vectors were then applied as inputs; experiments showed that every basic vector is a stable state of the network. To check the error-correcting ability, the basic vectors were corrupted with noise: each pixel was flipped with probability 0.25 (-1 changed to +1 or vice versa). Test results:

[Figure: the corrupted digit "6" after 10, 20, 30, and 37 iterations.]

Pattern:      0   1   2   3   4   6
Iterations:  34  32  26  37  25  37

5.1.3 Continuous Hopfield Neural Network
Hopfield dynamic neural model:
[Figure: electronic circuit model of neuron i — an amplifier with input capacitance $C_i$, resistances $R_{i0},R_{i1},\dots,R_{in}$, internal potential $u_i$, output $v_i$, and external input signal $I_i$.]
The amplifier's transfer function is a sigmoid, e.g.
$$v_i=g_i(u_i)=\frac{1}{1+e^{-u_i}}\qquad\text{or}\qquad v_i=\tanh(\lambda u_i).$$

CHNN model:
[Figure: network of n such neurons, with conductances $w_{ij}=1/R_{ij}$ and external currents $I_1,\dots,I_n$.]
$$C_i\frac{du_i}{dt}=\sum_{j=1}^{n}w_{ij}v_j-\frac{u_i}{R_i}+I_i,\qquad \frac{1}{R_i}=\frac{1}{R_{i0}}+\sum_{j=1}^{n}\frac{1}{R_{ij}},\qquad v_i=g_i(u_i).$$
If $du_i/dt=0$, then $u_i=R_i\big(\sum_j w_{ij}v_j+I_i\big)$, which has the same form as the DHNN relation $u=Wx-\theta$.

Definition of the energy function:
$$E=-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}v_iv_j-\sum_{i=1}^{n}I_iv_i+\sum_{i=1}^{n}\frac{1}{R_i}\int_{0}^{v_i}g_i^{-1}(v)\,dv$$
If the amplification gain is big enough, the third term can be ignored, and the expression of the energy function becomes the same as that of the DHNN:
$$E=-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}v_iv_j-\sum_{i=1}^{n}I_iv_i$$
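For intuition, the CHNN dynamics can be simulated by simple Euler integration; a minimal sketch (the parameter values $C_i=R_i=1$, $\lambda=5$, and the step size dt are illustrative, not from the text):

```python
import numpy as np

def chnn_step(W, u, I_ext, C=1.0, R=1.0, lam=5.0, dt=0.01):
    # one Euler step of  C du/dt = W v - u/R + I,  with v = tanh(lam * u)
    v = np.tanh(lam * u)
    du_dt = (W @ v - u / R + I_ext) / C
    return u + dt * du_dt
```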

5.1.4 Computing for optimization
Steps for optimization computing with a Hopfield network:
1. For the given problem, choose a suitable representation, so that the network outputs correspond to solutions of the problem.
2. Construct the network energy function, so that its minimum corresponds to the best solution of the problem.
3. Derive the network structure from the energy function.
4. Run the network; under certain conditions, the stable state is the answer to the problem.

Example 1: TSP (Traveling Salesman Problem)
A traveling salesman starts from a city, visits every city once and only once, and returns to the starting city; the task is to find the shortest tour.
N cities: $C=\{c_1,c_2,\dots,c_N\}$; $d_{ij}$: distance between $c_i$ and $c_j$.
Let N = 5, with A, B, C, D, E denoting the five cities. Choose any route, e.g. B→D→E→A→C→B; its total length is $S=d_{BD}+d_{DE}+d_{EA}+d_{AC}+d_{CB}$.

Permutation matrix:

        Order:  1  2  3  4  5
City A          0  0  0  1  0
City B          1  0  0  0  0
City C          0  0  0  0  1
City D          0  1  0  0  0
City E          0  0  1  0  0

Step 1: Map the problem onto a neural network. Give every neuron (amplifier) a very high gain, so that the outputs are binary 0/1 values, as in the permutation matrix: rows represent cities and columns represent the visiting order. Each matrix element is one neuron, so the Hopfield network consists of $N^2=25$ neurons.

Step 2: Convert the objective function of the problem into an energy function, and map the problem variables onto the network state.
1. Each city can be visited only once, i.e. each row of the permutation matrix contains only one "1". Equivalently, the ordered pairwise products of the elements $v_{xi}$ of row x must sum to zero:
$$\sum_{i=1}^{N}\sum_{j\ne i}v_{xi}v_{xj}=0$$
and, summing over all N rows,
$$\sum_{x=1}^{N}\sum_{i=1}^{N}\sum_{j\ne i}v_{xi}v_{xj}=0.$$
Multiplying by a weighting coefficient gives the first term of the network energy function:
$$E_1=\frac{A}{2}\sum_{x=1}^{N}\sum_{i=1}^{N}\sum_{j\ne i}v_{xi}v_{xj}.$$
2. Only one city can be visited at a time, i.e. each column contains only one "1". By the same reasoning, the second term is
$$E_2=\frac{B}{2}\sum_{i=1}^{N}\sum_{x=1}^{N}\sum_{y\ne x}v_{xi}v_{yi}.$$
3. There are N cities in total, i.e. the elements of the permutation matrix sum to N:
$$\sum_{x=1}^{N}\sum_{i=1}^{N}v_{xi}=N,$$
giving the third term
$$E_3=\frac{C}{2}\Big(\sum_{x=1}^{N}\sum_{i=1}^{N}v_{xi}-N\Big)^2.$$
4. The tour must be as short as possible, i.e. the minimum of the energy function must correspond to the shortest TSP route. Let $d_{xy}$ be the distance between any two cities x and y; the tour can pass through them in two directions, x→y and y→x, contributing $d_{xy}v_{xi}v_{y,i+1}$ and $d_{xy}v_{xi}v_{y,i-1}$ respectively. The total length of all tour segments visiting x and y in sequence is therefore
$$\sum_{i=1}^{N}d_{xy}v_{xi}\big(v_{y,i+1}+v_{y,i-1}\big),$$
as assembled in the sketch below.
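Putting the terms together gives an energy of the Hopfield–Tank form. The sketch below assembles it for a 0/1 state matrix v; the completed fourth term, summed over all city pairs with cyclic order indices, is an assumption based on the per-pair expression above (A, B, C, D are the penalty weights, d is the N×N distance matrix):

```python
import numpy as np

def tsp_energy(v, d, A, B, C, D):
    # v: N x N 0/1 state matrix (rows: cities, columns: visiting order)
    N = v.shape[0]
    # E1: each city appears in at most one tour position (row constraint)
    row = sum(v[x, i] * v[x, j]
              for x in range(N) for i in range(N) for j in range(N) if j != i)
    # E2: each tour position holds at most one city (column constraint)
    col = sum(v[x, i] * v[y, i]
              for i in range(N) for x in range(N) for y in range(N) if y != x)
    # E3: exactly N entries are on
    tot = (v.sum() - N) ** 2
    # E4 (assumed completion): tour length over successive positions, cyclic
    dist = sum(d[x, y] * v[x, i] * (v[y, (i + 1) % N] + v[y, (i - 1) % N])
               for x in range(N) for y in range(N) if y != x for i in range(N))
    return (A / 2) * row + (B / 2) * col + (C / 2) * tot + (D / 2) * dist
```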
