**Date Published:** March 27, 2018

**Publisher:** Springer International Publishing

**Author(s):** C. Maharajan, R. Raja, Jinde Cao, G. Ravi, G. Rajchakit.

http://doi.org/10.1186/s13662-018-1553-7

**Abstract**

This paper concerns the problem of enhanced results on robust finite-time passivity for uncertain discrete-time Markovian jumping BAM delayed neural networks with leakage delay. By constructing a proper Lyapunov–Krasovskii functional candidate and applying the reciprocally convex combination method and the linear matrix inequality (LMI) technique, we derive several sufficient conditions verifying the passivity of discrete-time BAM neural networks. Further, some sufficient conditions for finite-time boundedness and passivity under uncertainties are proposed by employing zero inequalities. Finally, the enhancement of the feasible region of the proposed criteria is shown via numerical examples with simulations to illustrate the applicability and usefulness of the proposed method.

**Partial Text**

There has been growing research interest in recurrent neural networks (RNNs) in recent years. This family includes various architectures such as bidirectional associative memory (BAM) neural networks, Hopfield neural networks, cellular neural networks, Cohen–Grossberg neural networks, and neural and social networks, which have received great attention due to their wide applications in classification, signal and image processing, parallel computing, associative memories, optimization, cryptography, and so on. The bidirectional associative memory (BAM) neural network model was initially coined by Kosko, see [1, 2]. This network is an extraordinary class of RNNs with the ability to store bipolar vector pairs. It is composed of neurons arranged in two layers, the X-layer and the Y-layer, with the neurons in one layer fully interconnected to the neurons in the other layer. BAM neural networks are designed so that, for a given external input, they reveal only one globally asymptotically or exponentially stable equilibrium point. Hence, considerable efforts have been made in the study of stability analysis of neural networks, and as a credit to this, a large number of sufficient conditions have been proposed to guarantee the global asymptotic or exponential stability of the addressed neural networks.
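Kosko's storage and recall scheme mentioned above can be sketched in a few lines. The following is a minimal illustration, not code from the paper: bipolar pattern pairs are stored with a Hebbian outer-product rule, and recall iterates threshold updates between the two layers until a stable pair is reached. All pattern sizes and values are made up for the example.

```python
import numpy as np

def train_bam(pairs):
    """Hebbian outer-product rule: W = sum_i x_i y_i^T."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def bipolar_sign(v):
    # Bipolar threshold; ties (zeros) are mapped to +1 so states stay in {-1, +1}.
    return np.where(v >= 0, 1, -1)

def recall(W, x, max_iter=20):
    """Bidirectional iteration: X-layer -> Y-layer -> X-layer, until stable."""
    y = bipolar_sign(x @ W)
    for _ in range(max_iter):
        x_new = bipolar_sign(W @ y)
        y_new = bipolar_sign(x_new @ W)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

# Two (orthogonal) bipolar pairs, and recall from a one-bit-corrupted cue.
x1, y1 = np.array([1, 1, -1, -1]), np.array([1, -1])
x2, y2 = np.array([1, -1, 1, -1]), np.array([1, 1])
W = train_bam([(x1, y1), (x2, y2)])
xr, yr = recall(W, np.array([-1, 1, -1, -1]))  # noisy version of x1
```

For orthogonal stored pairs, as here, the corrupted cue settles back onto the stored pair (x1, y1); with correlated pairs the iteration may instead converge to a spurious state, which is the capacity limitation Kosko's construction is known for.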

Now consider the following BAM neural networks with stochastic noise disturbance, Markovian jump parameters, leakage and mixed time delays, and parameter uncertainties:

$$\begin{aligned}
du(t) ={}& \biggl[-\bigl(C+\Delta C(t)\bigr) \bigl(r(t)\bigr)u(t-\nu_{1}) +\bigl(W_{0}+\Delta W_{0}(t)\bigr) \bigl(r(t)\bigr)\bar{f}\bigl(v(t)\bigr) \\
&{}+\bigl(W_{1}+\Delta W_{1}(t)\bigr) \bigl(r(t)\bigr)\bar{g}\bigl(v\bigl(t-\tau_{1}(t)\bigr)\bigr) +\bigl(W_{2}+\Delta W_{2}(t)\bigr) \bigl(r(t)\bigr)\int_{t-\sigma_{1}}^{t}\bar{h}\bigl(v(s)\bigr)\,ds\biggr]\,dt \\
&{}+\bar{\rho}_{1}\bigl(u(t-\nu_{1}),v(t),v\bigl(t-\tau_{1}(t)\bigr),t,r(t)\bigr)\,d\bar{\omega}(t),\quad t>0,\ t\neq t_{k}, \\
\Delta u(t_{k}) ={}& \bigl(\bar{M}_{k}+\Delta\bar{M}_{k}(t)\bigr) \bigl(r(t)\bigr) \bigl(u\bigl(t_{k}^{-}\bigr),u_{t_{k}^{-}}\bigr),\quad t=t_{k},\ k\in\mathbb{Z}_{+}, \\
dv(t) ={}& \biggl[-\bigl(D+\Delta D(t)\bigr) \bigl(\tilde{r}(t)\bigr)v(t-\nu_{2}) +\bigl(V_{0}+\Delta V_{0}(t)\bigr) \bigl(\tilde{r}(t)\bigr)\bar{\tilde{f}}\bigl(u(t)\bigr) \\
&{}+\bigl(V_{1}+\Delta V_{1}(t)\bigr) \bigl(\tilde{r}(t)\bigr)\bar{\tilde{g}}\bigl(u\bigl(t-\tau_{2}(t)\bigr)\bigr) +\bigl(V_{2}+\Delta V_{2}(t)\bigr) \bigl(\tilde{r}(t)\bigr)\int_{t-\sigma_{2}}^{t}\bar{\tilde{h}}\bigl(u(s)\bigr)\,ds\biggr]\,dt \\
&{}+\bar{\rho}_{2}\bigl(v(t-\nu_{2}),u(t),u\bigl(t-\tau_{2}(t)\bigr),t,\tilde{r}(t)\bigr)\,d\bar{\tilde{\omega}}(t),\quad t>0,\ t\neq t_{k}, \\
\Delta v(t_{k}) ={}& \bigl(\bar{N}_{k}+\Delta\bar{N}_{k}(t)\bigr) \bigl(\tilde{r}(t)\bigr) \bigl(v\bigl(t_{k}^{-}\bigr),v_{t_{k}^{-}}\bigr),\quad t=t_{k},\ k\in\mathbb{Z}_{+}.
\end{aligned}$$
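To get a feel for the delayed dynamics above, a drastically simplified instance can be integrated numerically. The sketch below is illustrative only and is not from the paper: it drops the uncertainties ΔC(t), ΔW_i(t), the Markovian switching r(t), the impulses Δu(t_k), the distributed-delay integrals, and the stochastic noise terms, keeping scalar states with leakage delays ν and discrete transmission delays τ, integrated by explicit Euler with history buffers. All gains and delay values are made-up.

```python
import numpy as np

def simulate(T=20.0, dt=0.01):
    """Euler integration of a scalar, deterministic BAM pair with
    leakage delays (nu) and constant transmission delays (tau)."""
    nu1, nu2 = 0.1, 0.1        # leakage delays (hypothetical values)
    tau1, tau2 = 0.5, 0.5      # discrete transmission delays
    C, D = 1.0, 1.0            # leakage (self-feedback) gains
    W0, W1 = 0.4, -0.3         # X-layer coupling gains
    V0, V1 = 0.5, -0.2         # Y-layer coupling gains
    f = np.tanh                # activation function

    steps = int(T / dt)
    d_nu1, d_nu2 = int(nu1 / dt), int(nu2 / dt)
    d_t1, d_t2 = int(tau1 / dt), int(tau2 / dt)
    hist = max(d_nu1, d_nu2, d_t1, d_t2)

    # Constant initial history on [-max_delay, 0].
    u = np.full(steps + hist, 0.8)
    v = np.full(steps + hist, -0.5)

    for k in range(hist, steps + hist - 1):
        du = -C * u[k - d_nu1] + W0 * f(v[k]) + W1 * f(v[k - d_t1])
        dv = -D * v[k - d_nu2] + V0 * f(u[k]) + V1 * f(u[k - d_t2])
        u[k + 1] = u[k] + dt * du
        v[k + 1] = v[k] + dt * dv

    return u[hist:], v[hist:]

u_traj, v_traj = simulate()
```

With these small coupling gains and short leakage delays the trajectories contract toward the origin, which is the qualitative behavior the paper's stability conditions are designed to certify; larger gains or longer delays can destabilize the scheme, and the full stochastic impulsive model would require a jump-aware Euler–Maruyama treatment instead.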

In this section, we provide two numerical examples with their simulations to demonstrate the effectiveness of our results.

In this paper, we have treated the problem of global exponential stability analysis for BAM neural networks with leakage delay terms. By employing Lyapunov stability theory and the LMI framework, we have obtained a new sufficient condition that guarantees the global exponential stability of stochastic impulsive uncertain BAMNNs with two kinds of time-varying delays and leakage delays. The advantage of this paper is that different types of uncertain parameters were introduced into the Lyapunov–Krasovskii functionals, and the exponential stability behavior was studied. Additionally, two numerical examples have been provided to reveal the usefulness of the obtained deterministic and uncertain results. To the best of our knowledge, there are no results on the exponential stability analysis of inertial-type BAM neural networks with time-varying delays using a Wirtinger-based inequality, which might be our future research work.
