In this paper, we consider the maximization of the probability $\mathbb{P}\left\{\, \zeta \,\mid\, \zeta \in \mathbf{K}(\mathbf{x}) \,\right\}$ over a closed and convex set $\mathcal{X}$, a special case of the chance-constrained optimization problem. Suppose $\mathbf{K}(\mathbf{x}) \triangleq \left\{\, \zeta \in \mathcal{K} \,\mid\, c(\mathbf{x},\zeta) \ge 0 \,\right\}$, where $\zeta$ is uniformly distributed on a convex and compact set $\mathcal{K}$ and $c(\mathbf{x},\zeta)$ is defined as either $c(\mathbf{x},\zeta) \triangleq 1-\left|\zeta^T\mathbf{x}\right|^m$ with $m \ge 0$ (Setting A) or $c(\mathbf{x},\zeta) \triangleq T\mathbf{x} - \zeta$ (Setting B). We show that in either setting, by leveraging recent findings on non-Gaussian integrals of positively homogeneous functions, $\mathbb{P}\left\{\, \zeta \,\mid\, \zeta \in \mathbf{K}(\mathbf{x}) \,\right\}$ can be expressed as the expectation of a suitably defined continuous function $F(\bullet,\xi)$ with respect to an appropriately defined Gaussian density (or its variant), i.e., $\mathbb{E}_{\tilde{p}}\left[\, F(\mathbf{x},\xi) \,\right]$. Aided by a recent observation in convex analysis, we then develop a convex representation of the original problem requiring the minimization of $g\left(\mathbb{E}\left[\, F(\bullet,\xi) \,\right]\right)$ over $\mathcal{X}$, where $g$ is an appropriately defined smooth convex function. Traditional stochastic approximation schemes cannot contend with the minimization of $g\left(\mathbb{E}\left[\, F(\bullet,\xi) \,\right]\right)$ over $\mathcal{X}$, since conditionally unbiased sampled gradients are unavailable.
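To make the probability-as-expectation viewpoint concrete, the following is a minimal Monte Carlo sketch for Setting A, where $c(\mathbf{x},\zeta) = 1-|\zeta^T\mathbf{x}|^m \ge 0$ and $\zeta$ is uniform on the unit Euclidean ball. This naive estimator is for illustration only; it is not the paper's reformulation via Gaussian densities, and the function names (`sample_unit_ball`, `estimate_prob`) and parameter choices are assumptions made here.

```python
# Illustrative Monte Carlo estimate of P{ zeta in K(x) } in Setting A:
# c(x, zeta) = 1 - |zeta^T x|^m >= 0, zeta uniform on the unit ball in R^d.
# This is a naive sketch, NOT the paper's Gaussian-density reformulation.
import math
import random

def sample_unit_ball(d, rng):
    """Draw a point uniformly from the unit Euclidean ball in R^d."""
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g))
    r = rng.random() ** (1.0 / d)           # radius with density proportional to r^(d-1)
    return [r * v / norm for v in g]

def estimate_prob(x, m=2.0, n_samples=2000, seed=0):
    """Sample-average estimate of P{ |zeta^T x|^m <= 1 }."""
    rng = random.Random(seed)
    d = len(x)
    hits = 0
    for _ in range(n_samples):
        zeta = sample_unit_ball(d, rng)
        inner = sum(zi * xi for zi, xi in zip(zeta, x))
        if abs(inner) ** m <= 1.0:
            hits += 1
    return hits / n_samples
```

For $\mathbf{x} = 0$ every sample satisfies the constraint and the estimate is exactly 1, while a large $\|\mathbf{x}\|$ shrinks the feasible slab $\{\zeta : |\zeta^T\mathbf{x}| \le 1\}$ and drives the estimate toward 0.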
We then develop a regularized variance-reduced stochastic approximation (r-VRSA) scheme that obviates the need for such unbiasedness by combining iterative regularization with variance reduction. Notably, (r-VRSA) is characterized by almost-sure convergence guarantees, a convergence rate of $\mathcal{O}(1/k^{1/2-a})$ in expected sub-optimality, where $a > 0$, and a sample complexity of $\mathcal{O}(1/\epsilon^{6+\delta})$, where $\delta > 0$. To the best of our knowledge, this may be the first such scheme for probability maximization problems with convergence and rate guarantees. Preliminary numerics on a portfolio selection problem (Setting A) and a set-covering problem (Setting B) suggest that the scheme competes well with naive mini-batch SA schemes as well as integer-programming approximation methods.
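The two ingredients of the scheme, iterative (vanishing) regularization and variance reduction via growing mini-batches, can be sketched generically as below. This is an assumed, simplified template on a toy problem, not the paper's (r-VRSA) updates; the step-size, regularization, and batch-size schedules (`gamma_k`, `lam_k`, `batch`) are illustrative choices made here.

```python
# Generic sketch of a regularized variance-reduced SA iteration (NOT the
# paper's exact (r-VRSA) scheme): at step k, average N_k sampled gradients
# (variance reduction via a growing batch) and add a vanishing Tikhonov
# term lam_k * x (iterative regularization). All schedules are illustrative.
import random

def r_vrsa_sketch(grad_sample, x0=0.0, iters=200, seed=0):
    rng = random.Random(seed)
    x = x0
    for k in range(1, iters + 1):
        gamma_k = 1.0 / k            # diminishing step size
        lam_k = 1.0 / k ** 0.5       # vanishing regularization parameter
        batch = k                    # N_k grows, shrinking gradient variance
        g = sum(grad_sample(x, rng) for _ in range(batch)) / batch
        x = x - gamma_k * (g + lam_k * x)
    return x

# Toy unconstrained problem: minimize E[(x - xi)^2] with xi ~ Uniform(0, 2),
# whose minimizer is E[xi] = 1; grad_sample returns a sampled gradient.
def grad_sample(x, rng):
    xi = rng.uniform(0.0, 2.0)
    return 2.0 * (x - xi)
```

In this toy run the iterate settles near the minimizer $x^* = 1$, with a small bias induced by the regularization term that vanishes as $\lambda_k \to 0$; the paper's setting additionally involves a projection onto $\mathcal{X}$ and a composite objective $g(\mathbb{E}[F(\bullet,\xi)])$.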