One Dimensional Neural Time Series Generation
Smith, Kaleb Earl
Time-dependent data is a primary source of information in today's data-driven world. Generating this type of data, however, has proven challenging and has become an active research area in generative machine learning. The challenge with this one-dimensional (1D) data has been for machine learning applications to gain access to the considerable amount of quality data needed for algorithm development and analysis. Modeling synthetic data with a Generative Adversarial Network (GAN) has been at the heart of providing a viable solution. Our work focuses on one-dimensional time series and explores the "few shot" generation approach, which is the ability of an algorithm to perform well with limited data. This work proposes a first-of-its-kind few-shot generation model called Time Series GAN (TSGAN), which strengthens its time series generation by learning the conditional input with spectral characteristics. TSGAN comprises two sub-networks that address two aspects of generation: the first generates two-dimensional (2D) representations of the signals, and the second uses those 2D representations to generate 1D signals. TSGAN shows impressive results in the time series generation community and is tested on a large open-source data set covering a multitude of sensor collection types. TSGAN outperforms its competitors on community-published metrics describing the quality and usability of the synthetic data. Extending TSGAN, we propose unified TSGAN (uTSGAN), a method that unifies the loss functions to cohesively blend the learning dependency of the two sub-networks. uTSGAN shows a quality increase in generating realistic time series data and does so in less training time than its predecessor on the same experimental data, outperforming TSGAN on over 80% of the data sets. Lastly, we introduce our favored method for signal generation, penalized-uTSGAN, a derivative of uTSGAN that enhances signal generation by penalizing the 2D conditional information learned from the first sub-network.
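The two-stage pipeline described above can be sketched conceptually. This is a minimal illustration of the data flow only: the latent dimension, the 2D representation size, and the random linear maps standing in for the trained sub-networks are all assumptions for the sketch, not the actual TSGAN model.

```python
import numpy as np

# Conceptual sketch of TSGAN's two-stage generation: a first sub-network
# maps noise to a 2D spectral representation, and a second sub-network
# conditions on that 2D output to produce the 1D time series.
# All network internals here are hypothetical stand-ins (fixed random
# linear maps), chosen only to make the shapes and data flow concrete.

rng = np.random.default_rng(0)

NOISE_DIM = 64          # latent noise dimension (assumed)
SPEC_SHAPE = (32, 32)   # 2D spectral representation, e.g. spectrogram-like (assumed)
SIGNAL_LEN = 256        # length of the generated 1D time series (assumed)

# Sub-network 1 stand-in: noise -> 2D representation.
W1 = rng.standard_normal((NOISE_DIM, SPEC_SHAPE[0] * SPEC_SHAPE[1])) * 0.1

def generate_2d(z):
    """Map latent noise to a 2D representation with spectral characteristics."""
    return np.tanh(z @ W1).reshape(SPEC_SHAPE)

# Sub-network 2 stand-in: 2D representation -> 1D signal.
W2 = rng.standard_normal((SPEC_SHAPE[0] * SPEC_SHAPE[1], SIGNAL_LEN)) * 0.1

def generate_1d(spec_2d):
    """Condition on the 2D representation to produce the 1D time series."""
    return np.tanh(spec_2d.reshape(-1) @ W2)

z = rng.standard_normal(NOISE_DIM)
spec = generate_2d(z)       # first sub-network output: 2D
signal = generate_1d(spec)  # second sub-network output: 1D

print(spec.shape)    # (32, 32)
print(signal.shape)  # (256,)
```

In the actual model both sub-networks are trained adversarially, and uTSGAN couples their losses into a single objective; the sketch only shows how the 2D output of the first stage conditions the 1D output of the second.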