Abstract

A large demand exists for sensors capable of measuring the various gas constituents present in automotive exhaust. Future advances in engine control systems and in on-board diagnostics for monitoring tailpipe emissions depend critically on the development of such devices. Sensors based on the principle of differential calorimetry have been identified as among the more promising candidates for near-term automotive detection of hydrocarbons and other combustible species. These calorimetric devices consist essentially of two temperature-sensing elements, one of which is coated with a catalytic layer. Heat released by the oxidation of combustible species raises the temperature of the catalytically coated element relative to the uncoated one, and the resulting temperature difference provides a measure of the concentration of combustibles in the exhaust. To date, several prototype calorimetric devices have been evaluated under laboratory and dynamometer conditions. The sensitivity of the devices tested, expressed as temperature rise per unit concentration of combustible, has typically been about an order of magnitude below the theoretical maximum. In this paper, we examine how the choice of calorimeter design affects the achievable sensitivity. Simple physical arguments explain why the sensors tested thus far fall so far short of the theoretical limit. We then describe an alternative design configuration that significantly improves calorimeter sensitivity, and we present results both from an analytical model of this design and from a simple experiment that validates the salient features predicted by the model.
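To put the sensitivity shortfall in perspective, the short Python sketch below (not part of the paper) estimates the ideal, adiabatic temperature rise that bounds any calorimetric signal for a dilute combustible, together with the rise implied by the roughly tenfold shortfall noted above. The propane heating value, the exhaust-gas heat capacity, and the 0.1 efficiency factor are all illustrative assumptions, not values taken from the paper.

```python
# Back-of-the-envelope model (illustrative, not from the paper): the upper
# bound on a differential calorimeter's signal is the adiabatic temperature
# rise of the gas when the combustible oxidizes completely on the catalyst.

DELTA_H_COMB = 2.044e6  # J/mol, lower heating value of propane (assumed surrogate HC)
CP_MOLAR = 33.0         # J/(mol*K), molar heat capacity of exhaust gas (assumed)

def adiabatic_rise(mole_fraction: float) -> float:
    """Ideal (adiabatic) temperature rise for complete oxidation of a
    dilute combustible present at the given mole fraction."""
    return mole_fraction * DELTA_H_COMB / CP_MOLAR

def measured_rise(mole_fraction: float, efficiency: float = 0.1) -> float:
    """Rise actually registered if heat losses let the element capture only
    a fraction of the ideal signal; the ~0.1 factor simply encodes the
    order-of-magnitude shortfall described in the abstract (assumed)."""
    return efficiency * adiabatic_rise(mole_fraction)

if __name__ == "__main__":
    ppm = 1000.0
    x = ppm * 1e-6  # convert ppm to mole fraction
    print(f"{ppm:.0f} ppm C3H8: ideal rise ~ {adiabatic_rise(x):.1f} K, "
          f"typical prototype ~ {measured_rise(x):.1f} K")
```

Under these assumed property values, 1000 ppm of propane corresponds to an ideal rise of roughly 60 K, while a prototype operating in the shortfall regime described above would register only a few kelvin, which illustrates why recovering the lost sensitivity matters for practical detection thresholds.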
