In this paper, we study the following robust optimization problem. Given an independence system and a set of candidate objective functions, we choose an independent set, and then an adversary, knowing our choice, chooses one of the objective functions. The goal is to find a randomized strategy (i.e., a probability distribution over the independent sets) that maximizes the expected objective value in the worst case. This problem is fundamental in a wide range of areas, including artificial intelligence, machine learning, game theory, and optimization. To solve the problem, we propose two schemes for designing approximation algorithms. The first scheme applies when the objective functions are linear. It first finds an approximately optimal aggregated strategy and then retrieves a desired solution with only a small loss in objective value. The approximation ratio depends on a relaxation of the independence system polytope. As applications, we provide approximation algorithms under a knapsack constraint or a matroid intersection constraint by developing appropriate relaxations and retrieval procedures. The second scheme is based on the multiplicative weights update (MWU) method. A direct application of the MWU method does not yield a strict multiplicative approximation algorithm, but only one with an additional additive error term. A key technique to overcome this issue is to introduce a new concept, called (η,γ)-reductions of the objective functions, with parameters η and γ. We show that our scheme outputs a nearly α-approximate solution if there exists an α-approximation algorithm for a subproblem defined by (η,γ)-reductions. This improves the approximation ratios obtained in previous results. Using this result, we provide approximation algorithms for the cases where the objective functions are submodular or correspond to the cardinality robustness for the knapsack problem.
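To make the informal statement precise, the following notation is ours (it does not appear in the abstract): writing $\mathcal{I}$ for the family of independent sets, $\mathcal{F}$ for the set of candidate objective functions, and $\Delta(\mathcal{I})$ for the set of probability distributions over $\mathcal{I}$, the problem described above is the max-min problem
\[
  \max_{p \in \Delta(\mathcal{I})} \; \min_{f \in \mathcal{F}} \; \mathbb{E}_{S \sim p}\bigl[f(S)\bigr].
\]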
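As a rough sketch of how an MWU-based scheme operates on such a max-min problem (the learning rate $\varepsilon$ and round count $T$ below are generic MWU parameters, not quantities taken from the paper): the adversary maintains weights $w_t(f)$ over the objectives, the player computes an (approximate) best response $S_t$ to the adversary's current mixture in each round $t$, and the adversary downweights objectives on which the player scored well,
\[
  S_t \approx \operatorname*{arg\,max}_{S \in \mathcal{I}} \sum_{f \in \mathcal{F}} \frac{w_t(f)}{\sum_{g \in \mathcal{F}} w_t(g)}\, f(S),
  \qquad
  w_{t+1}(f) = w_t(f)\,\exp\bigl(-\varepsilon\, f(S_t)\bigr),
\]
with the output strategy being the uniform mixture over $S_1,\dots,S_T$. The additive error term mentioned above stems from the additive regret bound of these dynamics; the (η,γ)-reductions redefine the per-round subproblem, presumably so that this error can be absorbed multiplicatively.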