Many complex actions are mentally pre-composed as plans that specify orderings of simpler actions. To be executed accurately, a planned ordering must become active in working memory and then be enacted one element at a time until the sequence is complete. Examples include writing, typing, and speaking. When the planned complex action is musical in nature (e.g., a choreographed dance or a piano melody), it appears possible to deploy two learned sequences at the same time: one composed of actions and a second composed of the time intervals between actions. Despite this added complexity, humans readily learn and perform rhythm-based action sequences. Notably, people can learn action sequences and rhythmic sequences separately and then combine them with little trouble (Ullén & Bengtsson 2003). Related functional MRI data suggest that distinct neural regions are responsible for the two sequence types (Bengtsson et al. 2004). Although research on musical rhythm is extensive, few computational models exist to extend and inform our understanding of its neural bases. To that end, this article introduces the TAMSIN (Timing And Motor System Integration Network) model, a systems-level neural network model capable of performing arbitrary item sequences in accord with any rhythmic pattern that can be represented as a sequence of integer multiples of a base interval. In TAMSIN, two Competitive Queuing (CQ) modules operate in parallel: one represents and controls item order (the ORD module), and the other represents and controls the sequence of inter-onset intervals (IOIs) that define a rhythmic pattern (the RHY module). Further circuitry helps these modules coordinate their signal processing to enable performative output consistent with a desired beat and tempo.
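
The sketch below is a minimal, illustrative rendering of the parallel-CQ idea just described, not the TAMSIN implementation itself: an ORD queue holds a primacy gradient over items, a RHY queue holds a gradient over IOIs expressed as integer multiples of a base interval, and the two are read out in lockstep to schedule each item at its intended onset. The names (CQModule, perform, base_interval_s) and the simple list-based dynamics are assumptions for exposition; the model's actual coordinating circuitry, gating, and tempo control are abstracted away.

```python
# Illustrative sketch of parallel Competitive Queuing (CQ) readout.
# Not the authors' implementation; all names and parameters are hypothetical.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class CQModule:
    """A plan layer with a primacy gradient plus winner-take-all readout."""
    labels: List[str]                       # planned elements, in intended order
    gradient: List[float] = field(init=False)

    def __post_init__(self) -> None:
        # Earlier elements receive stronger activation (the primacy gradient).
        n = len(self.labels)
        self.gradient = [1.0 - i / (n + 1) for i in range(n)]

    def select_and_suppress(self) -> str:
        # Choice layer: pick the most active element, then suppress it
        # so the next-strongest element wins on the following cycle.
        i = max(range(len(self.gradient)), key=self.gradient.__getitem__)
        winner = self.labels[i]
        self.gradient[i] = 0.0
        return winner


def perform(items: List[str], ioi_multiples: List[int],
            base_interval_s: float) -> List[Tuple[float, str]]:
    """Read an ORD queue and a RHY queue in parallel; return (onset, item) pairs."""
    ord_q = CQModule(items)
    rhy_q = CQModule([str(m) for m in ioi_multiples])
    t, schedule = 0.0, []
    for _ in items:
        item = ord_q.select_and_suppress()
        ioi = int(rhy_q.select_and_suppress()) * base_interval_s
        schedule.append((t, item))
        t += ioi                            # next onset follows after this IOI
    return schedule


# Example: four notes with IOIs of 2, 1, 1, 2 base intervals at a 0.5 s base
# interval (120 BPM) yields onsets at 0.0, 1.0, 1.5, and 2.0 s.
print(perform(["C", "E", "G", "C'"], [2, 1, 1, 2], 0.5))
```

In this toy reading, changing the base interval rescales all onsets (a tempo change) without touching either queue, which mirrors the separation the text describes between what is produced (item order) and when it is produced (the rhythmic pattern of IOIs).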