Abstract

Asynchronous event sequences are ubiquitous in the real world, appearing in social networks, electronic medical records, financial data, and genome analysis. For modeling such sequences in the continuous-time domain, point processes have become the underpinning. In the early research stage, the Hawkes process was widely used because it can capture the self-triggering and mutual-triggering patterns between different events through a variety of intensity functions. In recent years, owing to advances in neural networks, deep point processes (also known as neural point processes) have exploited the powerful capacity of neural networks to learn models with stronger fitting ability and reduced dependence on prior knowledge. The proposal of the transformer Hawkes process (THP) brought a large performance improvement and set off a new wave of transformer-based deep Hawkes processes. However, THP does not make full use of the event and temporal information underlying the asynchronous event sequence; moreover, if the event-type encoding and temporal encoding are simply combined into a single sequence encoding, a single transformer may suffer from learning bias. To circumvent these problems, we propose the tri-transformer Hawkes process (TTHP), in which event and temporal information are introduced into the dot-product attention operations as auxiliary information to form distinct multi-head attentions, which are in turn used to build three heterogeneous learners. A series of well-designed experiments on synthetic and real-world datasets validates the effectiveness of the proposed TTHP.
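The self- and mutual-triggering behaviour mentioned above comes from the classic Hawkes intensity, in which every past event adds an exponentially decaying boost to the arrival rate of future events. A minimal univariate sketch follows; the parameter names `mu` (baseline rate), `alpha` (excitation weight), and `beta` (decay rate) are standard notation for illustration, not values or an implementation taken from the paper:

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Univariate Hawkes conditional intensity:
    lambda(t) = mu + sum_{t_j < t} alpha * exp(-beta * (t - t_j)).
    Each past event at t_j excites the process; the excitation
    decays exponentially with the elapsed time t - t_j.
    """
    return mu + sum(alpha * math.exp(-beta * (t - tj))
                    for tj in events if tj < t)

# Shortly after events the intensity exceeds the baseline mu,
# and it decays back toward mu as time passes.
events = [1.0, 2.0]
lam_near = hawkes_intensity(2.1, events, mu=0.5, alpha=0.8, beta=1.0)
lam_far = hawkes_intensity(10.0, events, mu=0.5, alpha=0.8, beta=1.0)
```

Deep point processes such as THP and TTHP replace this fixed parametric form with an intensity produced by a neural network, which is what removes the dependence on a hand-chosen kernel.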
