Self-Attentive Hawkes Process

Part of Proceedings of the International Conference on Machine Learning 1 pre-proceedings (ICML 2020)

Qiang Zhang, Aldo Lipani, Omer Kirnap, Emine Yilmaz


<p>Capturing occurrence dynamics is crucial to predicting which type of event will happen next and when. A common method for this is the Hawkes process. To enhance its capacity, recurrent neural networks (RNNs) have been incorporated, owing to their success in processing sequential data such as language. Recent evidence suggests that self-attention is more competent than RNNs at handling language. However, the effectiveness of self-attention in the context of Hawkes processes remains unexplored. This study attempts to fill the gap by designing a self-attentive Hawkes process (SAHP). The SAHP employs self-attention to summarize the influence of historical events and compute the probability of the next event. One deficit of conventional self-attention is that its position embeddings consider only order numbers in a sequence, ignoring the time intervals between temporal events. To overcome this deficit, we modify the conventional method by translating time intervals into phase shifts of sinusoidal functions. Experiments on goodness-of-fit and prediction tasks show the improved capability of the SAHP. Furthermore, the SAHP is more interpretable than RNN-based counterparts, because the learnt attention weights reveal the contribution of one event type to the occurrence of another. To the best of our knowledge, this is the first work to study the effectiveness of self-attention in Hawkes processes.</p>
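The abstract's key modification, translating inter-event time intervals into phase shifts of the sinusoidal position embedding, can be sketched as follows. This is a minimal illustration of the idea only; the function name, the way the interval enters the phase, and the frequency scale `omega_scale` are assumptions, not the paper's exact parameterization.

```python
import math

def time_shifted_positional_embedding(positions, intervals, d_model, omega_scale=10000.0):
    """Sinusoidal position embedding in which each event's time interval
    shifts the phase of the sinusoids (hypothetical sketch; the paper's
    exact formulation may differ). With all intervals equal to zero this
    reduces to the conventional Transformer position embedding."""
    embeddings = []
    for pos, dt in zip(positions, intervals):
        row = []
        for i in range(d_model // 2):
            # conventional geometric frequency schedule
            freq = 1.0 / (omega_scale ** (2 * i / d_model))
            # the inter-event interval dt enters as an extra phase shift
            angle = pos * freq + dt * freq
            row.extend([math.sin(angle), math.cos(angle)])
        embeddings.append(row)
    return embeddings

# usage: three events at sequence positions 0..2 with irregular gaps
emb = time_shifted_positional_embedding([0, 1, 2], [0.0, 0.5, 1.3], d_model=4)
```

The point of the construction is that two events with the same order number but different elapsed times receive different embeddings, which plain order-based embeddings cannot express.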