The imperative programming paradigm is the dominant one for creating sequential and parallel programs on the vast majority of modern computers, including supercomputers. A defining feature of the imperative paradigm is the sequential ordering of commands. This feature is an obstacle to creating efficient parallel programs, since parallelism has to be achieved at the cost of additional code. One solution to the problem of parallel-computing overhead is to design a computing model, together with a system architecture that implements it, for which parallel execution of an algorithm is an inherent property. Such a model is the dataflow computing model with dynamically formed context, implemented by the architecture of the parallel dataflow computing system "Buran". A complete transition to dataflow systems is hampered, among other things, by the conceptual difference between the dataflow programming paradigm and the imperative one. The article compares these two paradigms. First, parallel data processing is an inherent property of a dataflow program. Second, a dataflow program consists of three elements: a set of initial data, the program code, and a parameterizable distribution function. Third, the approach to constructing an algorithm for a task is conceptually different: the data themselves carry information about which instruction must process them (in traditional programs, conversely, an instruction carries information about which data it must process). The article also presents the structure of a dataflow program and the procedure for creating a dataflow algorithm. The translation of the basic algorithmic constructs (sequence, branching, and loops) is illustrated with simple example problems.
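To make the contrast concrete, the following Python sketch shows one way data tagged with a consumer node and a dynamically formed context could drive execution. It is an illustration only, not the Buran instruction set or its distribution function; all names (Token, node_id, context, run) and the matching scheme are assumptions of this sketch.

```python
# A minimal sketch of token matching by dynamically formed context: each datum
# names the node that must consume it, rather than an instruction naming its
# operands. Illustrative assumption, not the article's actual model.
from collections import defaultdict

class Token:
    """A datum tagged with its consumer node, a context key, and an input port."""
    def __init__(self, node_id, context, port, value):
        self.node_id = node_id  # which node should process this datum
        self.context = context  # dynamically formed context (e.g. an iteration index)
        self.port = port        # input slot of the consumer node
        self.value = value

def run(nodes, tokens):
    """Fire every node whose full operand set has arrived. Arrival order is
    irrelevant, so independent firings could execute in parallel."""
    waiting = defaultdict(dict)            # (node_id, context) -> {port: value}
    queue = list(tokens)
    while queue:
        t = queue.pop()
        key = (t.node_id, t.context)
        waiting[key][t.port] = t.value
        arity, fn = nodes[t.node_id]
        if len(waiting[key]) == arity:     # all operands matched by context
            slots = waiting.pop(key)
            args = [slots[p] for p in range(arity)]
            queue.extend(fn(t.context, *args))  # a firing emits new tokens

# Example: compute (a + b) * c independently for two contexts at once.
nodes = {
    "add": (2, lambda ctx, x, y: [Token("mul", ctx, 0, x + y)]),
    "mul": (2, lambda ctx, s, c: print(ctx, s * c) or []),
}
tokens = (
    [Token("add", ctx, 0, a) for ctx, a in [(0, 1), (1, 10)]] +
    [Token("add", ctx, 1, b) for ctx, b in [(0, 2), (1, 20)]] +
    [Token("mul", ctx, 1, c) for ctx, c in [(0, 3), (1, 3)]]
)
run(nodes, tokens)   # prints "0 9" and "1 90", in either order
```

Because a node fires only once its operands have been matched by context, independent contexts impose no ordering on each other; this is the sense in which parallelism is an inherent property of a dataflow program rather than an addition to it.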