Computer Sciences & Cybersecurity
http://hdl.handle.net/11141/18
See also: School of Computing [current]
2019-05-26T00:09:37Z

The effect of recency to human mobility
http://hdl.handle.net/11141/2172
Barbosa, H.; de Lima Neto, F.B.; Evsukoff, A.; Menezes, R.
In recent years, we have seen scientists attempt to model and explain human dynamics and in particular human movement. Many aspects of our complex life are affected by human movement, such as disease spread and epidemic modeling, city planning, wireless network development, and disaster relief, to name a few. Given the myriad of applications, it is clear that a complete understanding of how people move in space can lead to considerable benefits to our society. In most of the recent works, scientists have focused on the idea that people's movements are biased towards frequently visited locations. According to them, human movement is based on an exploration/exploitation dichotomy in which individuals choose new locations (exploration) or return to frequently visited locations (exploitation). In this work we focus on the concept of recency. We propose a model in which exploitation in human movement also considers recently visited locations and not solely frequently visited locations. We test our hypothesis against different empirical data of human mobility and show that our proposed model replicates the characteristic patterns of the recency bias. © 2015, Barbosa et al.
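The exploration/exploitation dichotomy with a recency term can be illustrated by a toy random walker. This is a minimal sketch, assuming illustrative parameters `rho` (exploration probability) and `alpha` (recency-return probability) and abstract numbered locations; it is not the model or the parameter values fitted in the paper.

```python
import random

def simulate_walker(steps, rho=0.6, alpha=0.2, seed=0):
    """Toy exploration/exploitation walker with a recency term.

    With probability rho the walker explores a brand-new location;
    otherwise it exploits: with probability alpha it returns to the
    most recently visited location, else it returns to a previously
    visited location chosen proportionally to visit frequency
    (preferential return).  rho and alpha are illustrative only.
    """
    rng = random.Random(seed)
    visits = {0: 1}      # location id -> visit count
    last = 0             # most recently visited location
    next_id = 1          # id of the next unexplored location
    trajectory = [0]
    for _ in range(steps):
        if rng.random() < rho:
            loc = next_id                # exploration: a new location
            next_id += 1
        elif rng.random() < alpha:
            loc = last                   # recency-based return
        else:
            locs = list(visits)          # frequency-based return
            weights = [visits[l] for l in locs]
            loc = rng.choices(locs, weights=weights, k=1)[0]
        visits[loc] = visits.get(loc, 0) + 1
        last = loc
        trajectory.append(loc)
    return trajectory, visits

traj, visits = simulate_walker(1000)
```

Varying `alpha` shifts returns between the frequency-driven regime of earlier exploration/exploitation models and the recency-driven regime studied here.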
human mobility, mobility data analysis, regularities in human dynamics
2015-12-01T00:00:00Z

Robust linear quadratic regulation using neural network
http://hdl.handle.net/11141/1791
Yoo, Kisuck; Thursby, Michael H.
Using an Artificial Neural Network (ANN) trained with the Least Mean Square (LMS) algorithm, we have designed a robust linear quadratic regulator for a range of plant uncertainty. Since there is a trade-off between performance and robustness in the conventional design techniques, we propose a design technique that provides the best mix of robustness and performance. Our approach is to provide different control strategies for different levels of uncertainty. We describe how to measure these uncertainties. We compare the results of our multiple-strategy approach with those of conventional techniques, e.g., H∞ control theory. A Lyapunov equation is used to establish stability in all cases.
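The two ingredients of the abstract, LQR design and LMS training, can be sketched for a scalar discrete-time plant. The function names `lqr_gain_scalar` and `lms_fit_gain` are hypothetical, and a single linear neuron stands in for the ANN described; this is a sketch of the combination, not the paper's design procedure.

```python
def lqr_gain_scalar(a, b, q, r, iters=200):
    """Discrete-time scalar LQR gain via Riccati iteration:
    P <- q + a^2 P - (a b P)^2 / (r + b^2 P),  K = a b P / (r + b^2 P)."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    return a * b * P / (r + b * b * P)

def lms_fit_gain(samples, mu=0.05, epochs=200):
    """Single linear neuron u = -w*x trained by LMS on (state, control)
    pairs; the weight w plays the role of the feedback gain."""
    w = 0.0
    for _ in range(epochs):
        for x, u_target in samples:
            err = u_target - (-w * x)   # output error
            w += mu * err * (-x)        # LMS gradient step
    return w

# Nominal plant a=0.9, b=1; the neuron recovers the LQR gain from data.
K = lqr_gain_scalar(0.9, 1.0, 1.0, 1.0)
samples = [(x, -K * x) for x in (-2.0, -1.0, 0.5, 1.0, 2.0)]
w = lms_fit_gain(samples)   # w converges toward K
```

Training several such gains on data generated from plants at different uncertainty levels is one way to picture the multiple-strategy idea.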
Algorithms, Least squares approximations, Lyapunov methods, Neural networks, Robustness (control systems)
1993-07-22T00:00:00Z

Partial least-squares regression neural network (PLSNET) with supervised adaptive modular learning
http://hdl.handle.net/11141/1786
Ham, Fredric; Kostanic, Ivica N.
We present in this paper an adaptive linear neural network architecture called PLSNET. This network is based on partial least-squares (PLS) regression. The architecture is a modular network with stages that are associated with the desired number of PLS factors to be retained. PLSNET actually consists of two separate but coupled architectures: PLSNET-C for PLS calibration, and PLSNET-P for prediction (or estimation). We show that PLSNET-C can be trained by supervised learning with three standard Hebbian learning rules that extract the PLS weight loading vectors, the regression coefficients, and the loading vectors for the univariate output component case (single target values). The PLS information extracted by PLSNET-C after training, i.e., three sets of synaptic weights, is used by PLSNET-P as fixed weights (through the coupling) in its architecture. PLSNET-P can then yield predictions of the output variable given test measurements as its input. Two examples are presented: the first illustrates the typical improved predictive capability of PLSNET compared to classical least-squares, and the second shows how PLSNET can be used for parametric system identification.
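The quantities the calibration stage extracts (weight loading vectors, loadings, regression coefficients) and the fixed-weight prediction stage correspond to the classical PLS1 recursion for a univariate output. Below is a sketch of that recursion with hypothetical function names; it shows the computation, not the Hebbian training rules themselves.

```python
import numpy as np

def pls1_fit(X, y, n_factors):
    """PLS1 factor extraction (univariate y): per factor, a weight
    loading vector w, an X loading vector p, and a regression
    coefficient q are extracted, then X and y are deflated."""
    X = X.astype(float).copy()
    y = y.astype(float).copy()
    W, P, c = [], [], []
    for _ in range(n_factors):
        w = X.T @ y
        w /= np.linalg.norm(w)       # weight loading vector
        t = X @ w                    # score vector
        tt = t @ t
        p = X.T @ t / tt             # X loading vector
        q = y @ t / tt               # regression coefficient
        X -= np.outer(t, p)          # deflation
        y -= q * t
        W.append(w); P.append(p); c.append(q)
    return np.array(W).T, np.array(P).T, np.array(c)

def pls1_predict(X, W, P, c):
    """Prediction stage (the role played by the fixed-weight
    prediction network): apply the calibrated weights to new data."""
    B = W @ np.linalg.inv(P.T @ W) @ c   # equivalent regression vector
    return X @ B

# Noiseless linear data: with all factors retained, PLS1 is exact.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5])
W, P, c = pls1_fit(X, y, n_factors=3)
yhat = pls1_predict(X, W, P, c)
```

Retaining fewer factors than predictors is where PLS typically improves on classical least-squares for collinear measurement data.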
Artificial neural networks, Hebbian learning rules, Partial least-squares regression
1996-03-22T00:00:00Z

Determination of adaptively adjusted coefficients for Hopfield neural networks utilizing the energy function
http://hdl.handle.net/11141/1785
Park, Chiyeon; Fausett, Donald W.
With its potential for parallel computation and general applicability, the Hopfield neural network has been investigated and improved by many researchers in order to extend its usefulness to various combinatorial problems. In spite of its success in several applications with different energy function formulations, determination of the energy coefficients has been based primarily on trial-and-error methods, since no practical and systematic way of finding good values has been available previously, although some theoretical analyses have been presented. In this paper, we present a methodical procedure which adaptively determines the energy coefficients, leading to a valid solution as the network evolves. This method directly utilizes the value of each competing term in the energy function to balance the coefficients at each stage of the computation of the network. The advantage of this method is that the system itself controls the amount of energy which each term contributes to the total energy. To demonstrate the effectiveness of this approach, the N-Queens problem (a well-known example of a constraint satisfaction problem) is studied and verified. Also, an inexpensive method for computation of the new energy level of each term at each stage of iteration is described, based on incremental updating.
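The idea of balancing coefficients from the current value of each competing energy term can be sketched on the N-Queens formulation. The constraint terms below are a standard encoding, and the inverse-share balancing rule is an illustrative assumption, not the paper's exact procedure; function names are hypothetical.

```python
def queens_energy_terms(board):
    """Competing energy terms for an n x n binary board: row, column,
    and diagonal constraints.  Each term is zero exactly when its
    constraint is satisfied (one queen per row/column, at most one
    per diagonal)."""
    n = len(board)
    rows = sum((sum(board[i]) - 1) ** 2 for i in range(n))
    cols = sum((sum(board[i][j] for i in range(n)) - 1) ** 2
               for j in range(n))
    diag = 0
    for d in range(-(n - 1), n):
        s1 = sum(board[i][i - d] for i in range(n) if 0 <= i - d < n)
        s2 = sum(board[i][n - 1 - i + d] for i in range(n)
                 if 0 <= n - 1 - i + d < n)
        diag += max(s1 - 1, 0) ** 2 + max(s2 - 1, 0) ** 2
    return rows, cols, diag

def balance_coefficients(terms, eps=1e-9):
    """Illustrative adaptive rule: weight each term inversely to its
    current share of the total energy, so no single constraint
    dominates as the network evolves (a sketch, not the paper's rule)."""
    total = sum(terms) + eps
    return tuple(total / (t + eps) for t in terms)

# A valid 4-queens configuration: all three terms vanish.
board = [[0, 1, 0, 0],
         [0, 0, 0, 1],
         [1, 0, 0, 0],
         [0, 0, 1, 0]]
terms = queens_energy_terms(board)
coeffs = balance_coefficients((2.0, 1.0, 1.0))
```

Recomputing only the terms touched by a flipped neuron, rather than the full sums above, is the kind of incremental update the abstract refers to.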
Artificial neural networks, Energy functions, Hopfield neural networks, N-Queens problem
1996-03-22T00:00:00Z