Abstract

This research aims to elaborate the difficulty levels of three different texts on the same topic. It is a discourse analysis carried out by examining the lexical density, nominalization, and finiteness of the texts. The three texts were taken online, from Wikipedia and two personal blogs, for English research. The results show that the first text can be regarded as the most complex, suited to high-level readers; the second text suits intermediate-level readers; and the third text suits elementary or low-level readers. In terms of lexical density, the first text reaches a very high percentage, up to 60%, which shows that it is the most informative of the three. The second and third texts both have a lexical density of 50%, which indicates that they carry less content. Regarding nominalization, the first text again ranks highest with 12 nominalizations, the second text is at the intermediate level with 10 nominalizations, and the third text is at the lowest level, with no nominalization at all. The last measure is finiteness: the first text has the lowest number of finites, the second text has the second highest, and the third text has the highest of all. This follows from the first text's high lexical density and nominalization, which reduce the number of sentences, and hence finites, it contains. The results of this research can help online readers decide which reading materials suit their English levels.
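To make one of the three measures concrete, here is a small Python sketch that estimates a nominalization count with a common-suffix heuristic (-tion, -sion, -ment, -ness, -ity, -ance, -ence). The heuristic is an assumption for illustration only; the paper does not describe how its nominalization counts were obtained.

    # Illustrative assumption only: approximate nominalizations by common
    # derivational suffixes. The paper does not state its counting method.
    NOMINALIZING_SUFFIXES = ("tion", "sion", "ment", "ness", "ity", "ance", "ence")

    def count_nominalizations(text: str) -> int:
        # Strip trailing punctuation and lowercase before matching suffixes.
        words = [w.strip(".,;:!?").lower() for w in text.split()]
        # Require a minimum length so short function words are not counted.
        return sum(1 for w in words
                   if w.endswith(NOMINALIZING_SUFFIXES) and len(w) > 6)

    print(count_nominalizations("The implementation of the assessment caused confusion."))  # -> 3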

Highlights

  • There are many ways to research a social problem, and discourse analysis is one of them

  • Whether a text's lexical density is high or low can be determined by calculating how many lexical items it contains

  • The formula used to measure the lexical density of a text is: Lexical density = (Number of Lexical Items / Total Number of Words) × 100


Introduction

There are many ways to research a social problem, and discourse analysis is one of them. Johansson (2008) states that lexical density is the term most often used to describe the proportion of content words (nouns, verbs, adjectives, and adverbs) to the total number of words. A higher level of lexical density indicates that more meaning is packed into the text. Whether a text's lexical density is high or low can be determined by calculating how many lexical items it contains. Lexical items are the word classes that need to be identified, namely verbs, adverbs, adjectives, and nouns. The formula used to measure the lexical density of a text, applied in the sketch below, is: Lexical density = (Number of Lexical Items / Total Number of Words) × 100.
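As a rough illustration of this formula, here is a minimal Python sketch. It is not the authors' actual procedure: the paper does not specify how lexical items were identified, so the sketch simply takes the two counts as inputs.

    # Minimal sketch of the lexical density formula given above.
    # Identifying the lexical items (nouns, verbs, adjectives, adverbs)
    # normally requires a part-of-speech tagger; since the paper does not
    # specify one, the counts are passed in directly.

    def lexical_density(lexical_item_count: int, total_words: int) -> float:
        """Lexical density = (number of lexical items / total words) x 100."""
        if total_words == 0:
            raise ValueError("text must contain at least one word")
        return lexical_item_count / total_words * 100

    # Worked example matching the first text in this study: if 60 of every
    # 100 words are lexical items, the density is 60%.
    print(lexical_density(60, 100))  # -> 60.0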
