Laplace approximations are commonly used to reduce the computational burden of computing marginal likelihoods in Bayesian generalised linear models (GLMs). Marginal likelihoods, combined with model priors, are then used in various search algorithms to compute posterior marginal probabilities of models and of individual covariates, enabling Bayesian model selection and model averaging. For large sample sizes, however, even the Laplace approximation becomes computationally expensive, because the optimisation routine it relies on must evaluate the likelihood on the full dataset at every iteration; the approach therefore does not scale to large datasets. To address this problem, we propose using stochastic optimisation methods, which use only a subsample of the data in each iteration. We combine stochastic optimisation with Markov chain Monte Carlo (MCMC) based methods for Bayesian model selection and provide theoretical results on the convergence of the estimates produced by the resulting time-inhomogeneous MCMC. Finally, we report experimental results illustrating the performance of the proposed algorithm.
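To make the setting concrete, the following is a minimal, hedged sketch (not the paper's algorithm) of the two ingredients mentioned above: the posterior mode of a Bayesian logistic regression is located by minibatch stochastic gradient ascent, so each iteration touches only a subsample of the data, and the log marginal likelihood is then approximated by the Laplace approximation at that mode. The data, prior variance `tau2`, batch size, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n observations, d covariates (illustrative sizes).
n, d = 5000, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.0, -2.0, 0.5])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

tau2 = 10.0  # variance of the assumed N(0, tau2 * I) prior on beta

def sigmoid(z):
    # Clipping avoids overflow in exp for extreme linear predictors.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# --- Stochastic optimisation: minibatch gradient ascent on the log posterior ---
beta = np.zeros(d)
batch, lr = 100, 0.1
for t in range(2000):
    idx = rng.choice(n, size=batch, replace=False)  # subsample of the data
    Xb, yb = X[idx], y[idx]
    # Subsampled log-likelihood gradient rescaled to the full sample size,
    # plus the gradient of the Gaussian log prior.
    grad = (n / batch) * Xb.T @ (yb - sigmoid(Xb @ beta)) - beta / tau2
    beta += lr * grad / n

# --- Laplace approximation of the log marginal likelihood at the mode ---
p = sigmoid(X @ beta)
loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
logprior = -0.5 * beta @ beta / tau2 - 0.5 * d * np.log(2 * np.pi * tau2)
# Negative Hessian of the log posterior: observed information plus prior term.
H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d) / tau2
_, logdetH = np.linalg.slogdet(H)
log_marglik = loglik + logprior + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdetH
print(f"approximate log marginal likelihood: {log_marglik:.2f}")
```

In a model-selection loop, such approximate marginal likelihoods would be computed for each candidate subset of covariates and combined with model priors inside an MCMC search; the point of the subsampling is that each gradient step costs O(batch) rather than O(n).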