This study examines the influence maximization (IM) problem via information cascades in random graphs whose topology changes dynamically due to the uncertainty of user behavior. A discrete choice model (DCM) is leveraged to compute the probability that a directed arc exists between any two nodes; in this IM setting, the DCM provides a good description and prediction of whether a user follows a neighboring user. To maximize influence at the end of a finite time horizon, the IM problem is formulated as a multistage stochastic program, which helps a decision-maker select the optimal seed nodes from which to broadcast messages efficiently. Because the computational complexity grows exponentially with network size and time horizon, the original model cannot be solved within a reasonable time. Two approaches are therefore used to approximate the optimal decision: myopic two-stage stochastic programming and reinforcement learning via a Markov decision process. Computational experiments show that the reinforcement learning method outperforms the myopic two-stage stochastic programming method.
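As a rough illustration of how a DCM can yield arc-existence probabilities, the sketch below uses a binary logit model, in which a user chooses between following and not following a neighbor based on utilities. The utility values and the logit form are assumptions for illustration only; the paper's exact choice-model specification is not reproduced here.

```python
import math

def follow_probability(u_follow: float, u_not_follow: float) -> float:
    """Binary logit choice probability (an illustrative assumption):
    probability that a user follows a neighbor, i.e., that the
    directed arc between the two nodes exists."""
    e_follow = math.exp(u_follow)
    e_not = math.exp(u_not_follow)
    return e_follow / (e_follow + e_not)

# Example: equal utilities give a 0.5 chance that the arc exists.
p = follow_probability(0.0, 0.0)
```

Under this formulation, each arc in the random graph is present independently with a probability driven by the users' utilities, which is what makes the network topology stochastic from the decision-maker's viewpoint.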