For many years, data quality has been among the most commonly discussed issues in Linked Open Data (LOD), owing to the huge volume of integrated datasets, which are usually heterogeneous. Several ontology-based approaches to quality problems have been proposed. However, when datasets lack a well-defined schema, these approaches become ineffective because the required metadata are missing. Moreover, detecting quality problems by analyzing RDF (Resource Description Framework) triples directly, without requiring statistical and semantic information from an ontology, has not been addressed, even though ontologies are not always available and may be incomplete or misused. In this paper, a novel ontology-free process called LODQuMa is proposed to assess and improve the quality of LOD. It is mainly based on profiling statistics, synonym relationships between predicates, QVCs (Quality Verification Cases), and SPARQL (SPARQL Protocol and RDF Query Language) query templates. Experiments on the DBpedia dataset demonstrate that the proposed process is effective in improving the intrinsic quality dimensions, resulting in correct and compact datasets.
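To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of how synonym relationships between predicates can flag potentially inconsistent triples without consulting an ontology: triples whose predicates are declared synonymous but whose object values disagree are reported as candidates for a quality verification case. The triples, the predicate synonym table, and the `conflicting_triples` helper are all invented for illustration.

```python
# Hypothetical illustration of ontology-free inconsistency detection:
# synonymous predicates on the same subject should agree on their values.

# Triples as (subject, predicate, object) tuples; the data is invented.
triples = [
    ("dbr:Berlin", "dbo:populationTotal", "3644826"),
    ("dbr:Berlin", "dbp:population", "3520031"),
    ("dbr:Berlin", "dbo:country", "dbr:Germany"),
]

# Assumed synonym groups between predicates, mapping each predicate
# to a canonical label (a QVC-like verification input).
synonyms = {
    "dbo:populationTotal": "population",
    "dbp:population": "population",
}

def conflicting_triples(triples, synonyms):
    """Group triples whose predicates are synonyms; return the
    (subject, canonical predicate) pairs whose synonymous predicates
    carry more than one distinct object value."""
    grouped = {}
    for s, p, o in triples:
        key = synonyms.get(p)
        if key is not None:
            grouped.setdefault((s, key), set()).add(o)
    return {k: v for k, v in grouped.items() if len(v) > 1}

conflicts = conflicting_triples(triples, synonyms)
# Berlin's two population predicates disagree, so the pair is flagged;
# dbo:country is not in the synonym table and is ignored.
```

In a full pipeline such checks would be expressed as SPARQL query templates over the dataset endpoint rather than in-memory tuples; this sketch only conveys the ontology-free comparison step.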