Abstract

For many years, it was believed that when two isovalent semiconductors are mixed, they phase-separate (like oil and water) at low temperatures, form a solid solution (like gin and tonic) at high temperatures, but never produce ordered atomic arrangements. This view was based on analysis of the solid-liquid equilibria at high temperatures and on the empirical observation of phase separation at low temperatures. These observations were further rationalized and legitimized by applying the classic (Hildebrand) solution models, which predicted just this type of behavior. These models showed that the observed behavior of AₓB₁₋ₓ alloys implied a positive excess enthalpy ΔH(x) = E(x) − xE(A) − (1 − x)E(B) (where E is the total energy) and that this positiveness (“repulsive A-B interactions”) resulted from the strain energy attendant upon packing two solids with dissimilar lattice constants: the larger the lattice mismatch, the more difficult it was to form the alloy. Common to these approaches (“regular solution theory,” “quasiregular solution theory,” “delta lattice-parameter model,” etc.) was the assumption that the enthalpy ΔH(x) of an alloy depends on its global composition x but not on the microscopic arrangement of the atoms (e.g., ordered versus disordered). Thus, ordered and disordered configurations at the same composition x were tacitly assumed to have the same excess enthalpy ΔH(x), and the option for ordering was eliminated at the outset. While these theories produced very useful depictions of the immiscibility of many semiconductor alloys (and continue to guide strategies of crystal growth), they also cemented the paradigm that semiconductor alloys do not order; they just phase-separate. This was true, at the time.
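The composition-only assumption described above can be made concrete with a minimal sketch of the regular-solution form of the excess enthalpy, ΔH(x) = Ω x(1 − x), where a single interaction parameter Ω subsumes all A-B interactions. The numerical value of Ω used here is hypothetical and purely illustrative; the point is that a positive, composition-only ΔH(x) cannot distinguish ordered from disordered arrangements at the same x.

```python
# Illustrative sketch (not from the paper): in regular solution theory the
# excess enthalpy of an A_xB_{1-x} alloy depends only on the global
# composition x through one interaction parameter Omega. A positive Omega
# ("repulsive A-B interactions," e.g., from lattice mismatch strain)
# predicts phase separation at low temperature, and by construction assigns
# ordered and disordered configurations at the same x the same DeltaH(x).

OMEGA = 0.05  # eV/atom; hypothetical positive interaction parameter


def excess_enthalpy(x, omega=OMEGA):
    """Regular-solution excess enthalpy: DeltaH(x) = omega * x * (1 - x)."""
    return omega * x * (1.0 - x)


# DeltaH is zero for the pure constituents and maximal (positive) at x = 0.5,
# regardless of how the A and B atoms are microscopically arranged.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}  DeltaH = {excess_enthalpy(x):.4f} eV/atom")
```

Because ΔH(x) here is a function of x alone, the model offers no energetic pathway to long-range ordering, which is precisely the tacit assumption the abstract identifies.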
