Estimation of peas shrinkage rate during drying in a fluidized bed: a nonlinear estimator approach
- Research Article
47
- 10.1016/j.ces.2008.05.035
- May 30, 2008
- Chemical Engineering Science
Effect of agglomerate properties on agglomerate stability in fluidized beds
- Research Article
31
- 10.1016/j.ces.2007.07.027
- Jul 22, 2007
- Chemical Engineering Science
Control of the fluidised bed in the pellet softening process
- Dissertation
- 10.26180/5c463cc0c476c
- Mar 21, 2019
Drying is the most important step in post-harvest processing of food grains. This project developed a food-grain drying model for a fluidised bed by coupling computational fluid dynamics with the discrete element method. The model incorporated heat transfer sub-models, a water evaporation sub-model treated analogously to a chemical reaction, and a shrinkage sub-model. The model was first tested against experimental drying and volume-shrinkage curves from the literature. The effects of operating parameters, grain properties and bed geometries on drying and shrinkage rates and on dried-product quality were then quantified and discussed using grain-scale information.
- Research Article
34
- 10.1205/cherd.05089
- Nov 1, 2005
- Chemical Engineering Research and Design
Modelling Particle Contacts in Distinct Element Simulations: Linear and Non-Linear Approach
- Dissertation
- 10.14264/uql.2019.550
- Mar 10, 2001
This study extended the knowledge and understanding of the physical changes, fluidisation properties and drying behaviour of large food particulates during fluidised bed drying. A heat pump dehumidifier system was used to supply low-temperature drying air suitable for drying food particulates. Food particles come in a variety of shapes and sizes and often have irregular geometry. In this study, natural food materials were selected and cut into the regular particles needed for experimentation; this gave sufficient control of shape and size for results to be interpreted, while still allowing observation of the changes that take place in real food materials. Three geometrical shapes were considered: cylindrical, parallelepiped and spherical, with green beans, potato and peas chosen as the respective representative food materials. For beans, three length:diameter (L:D) ratios of 3:1, 2:1 and 1:1 were considered; for potato, three aspect ratios of 3:1, 2:1 and 1:1; for peas, a single size of spherical particle. For the fluidisation study at various moisture contents, drying experiments were conducted at 50 ± 2 °C, with 15 ± 5 % relative humidity and a hot-air velocity of 3 m/s, using the heat pump dehumidifier system. Fluidisation experiments were undertaken at bed heights of 100, 80, 60 and 40 mm and at 10 moisture levels, and the minimum fluidisation velocity was measured for each height and moisture content. Minimum fluidisation velocity decreased with decreasing moisture content as well as with decreasing bed height. Empirical relationships were developed for the change of minimum fluidisation velocity with moisture content during drying.
The Generalized equation was also used to calculate minimum fluidisation velocity. The minimum fluidisation velocity (Umf) of green beans as a function of moisture content was predicted with an empirical model Umf = A + B·e^(−C·m), with a satisfactory fit for L:D = 1:1. This model could not be applied to L:D = 2:1 and 3:1 because of irregularities in the variation of minimum fluidisation velocity as moisture decreased. The Umf calculated from the Generalized equation, based on the dimensional changes of the beans during drying, can be applied to predict minimum fluidisation velocity for all L:D ratios with reasonable accuracy. The minimum fluidisation velocity of potato could not be fitted satisfactorily to any available empirical model, owing to its irregular behaviour with changing moisture content; the general trend, however, was a reduction in Umf as moisture content decreased. The Umf predicted by the Generalized equation was valid only for aspect ratios of 1:1 and 2:1. The minimum fluidisation velocity of peas during drying was best fitted with a linear equation of the form Umf = A + B·m for all bed heights, and Umf predictions for peas were successful with both the Generalized equation and the conventional Ergun equation. The terminal velocities of the beans, potato and peas were calculated and found to be well above the minimum fluidisation velocity (nearly eight times) at every moisture content during drying; hence the operational velocity of the fluidised bed could be kept constant for the whole drying period. To study the physical changes during drying, drying experiments were conducted at 50 °C, 40 °C and 30 °C with 15 ± 5 % RH in a batch fluidised bed dryer, with hot air supplied by a heat pump dehumidifier system coupled to the dryer.
Empirical models were developed for shrinkage, bulk density, particle density and bed porosity during fluidised bed drying. Shrinkage during drying was examined by relating volume ratio to moisture ratio, using a linear model of the form VR = A + B·MR for potato and peas. In contrast, bean shrinkage was modelled with a non-linear exponential of the form VR = 1 − B·e^(−K·MR); this exponential behaviour was attributed to the open heterogeneous structure of beans. The shrinkage rate constant K for beans decreased with increasing drying temperature, the effect of L:D ratio was not significant, and the constant B was similar for all drying temperatures and all L:D ratios. For potato, neither the A nor the B parameter of the shrinkage equation differed significantly with temperature or size.
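The linear shrinkage model VR = A + B·MR above is an ordinary least-squares regression of volume ratio on moisture ratio. A minimal sketch of such a fit, using synthetic data generated from assumed coefficients A = 0.2 and B = 0.8 (illustrative values only, not the thesis's fitted parameters):

```python
def fit_linear(xs, ys):
    """Closed-form ordinary least squares for y = A + B*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    B = sxy / sxx
    A = mean_y - B * mean_x
    return A, B

# Synthetic volume-ratio data from assumed A = 0.2, B = 0.8
mr = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]       # moisture ratio
vr = [0.2 + 0.8 * m for m in mr]          # volume ratio
A, B = fit_linear(mr, vr)
```

The exponential bean model VR = 1 − B·e^(−K·MR) would be fitted the same way after linearising, or with an iterative nonlinear least-squares routine.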
- Research Article
40
- 10.1109/titb.2009.2027317
- Jul 28, 2009
- IEEE Transactions on Information Technology in Biomedicine: a publication of the IEEE Engineering in Medicine and Biology Society
The only established technique for intracranial pressure (ICP) measurement is an invasive procedure requiring surgical penetration of the skull to place pressure sensors. However, there are many clinical scenarios where a noninvasive assessment of ICP is highly desirable. Assuming a linear relationship among arterial blood pressure (ABP), ICP, and flow velocity (FV) of major cerebral arteries, an approach was previously developed to estimate ICP noninvasively; its core is the linear estimation of the coefficients f between ABP and ICP from the coefficients w calculated between ABP and FV. In this paper, motivated by the fact that the relationships among these three signals are complex enough that simple linear models may not be adequate to depict the relationship between the two sets of coefficients f and w, we investigate several nonlinear kernel regression approaches, including kernel spectral regression (KSR) and the support vector machine (SVM), to improve the original linear ICP estimation approach. The ICP estimation results on a dataset of 446 entries from 23 patients show that the mean ICP error of the nonlinear approaches can be reduced to below 6.0 mmHg, compared with 6.7 mmHg for the original approach. A statistical test also demonstrates that the ICP error of the proposed nonlinear kernel approaches is statistically smaller than that of the original linear model (p < 0.05). These results confirm the potential of nonlinear regression to achieve more accurate noninvasive ICP assessment.
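The paper's nonlinear step maps one set of learned coefficients (w) to another (f). As a stand-in illustration of kernel regression (not the paper's KSR or SVM estimators, and with made-up numbers rather than patient data), a Nadaraya-Watson smoother shows the basic idea of a kernel-weighted nonlinear fit:

```python
import math

def kernel_regress(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    predicts a locally weighted average of the training targets."""
    w = [math.exp(-((x_query - x) ** 2) / (2.0 * bandwidth ** 2))
         for x in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

# Toy 1-D nonlinear mapping (hypothetical values)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.0, 0.30, 1.0, 2.1, 4.0]   # clearly nonlinear in xs
pred = kernel_regress(xs, ys, 1.0, bandwidth=0.1)
```

With a small bandwidth the prediction at a training point is dominated by that point's own target, so the estimator interpolates the nonlinear trend without assuming a linear form.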
- Book Chapter
1
- 10.5772/15263
- Apr 26, 2011
Many systems in the real world are more accurately described by nonlinear models. Since the original work of Kalman (Kalman, 1960; Kalman & Bucy, 1961), which introduced the Kalman filter for linear models, extensive research has been carried out on state estimation for nonlinear models; yet, except for certain classes of nonlinear models, no optimum estimation approach exists for them all. Instead, a variety of suboptimum nonlinear estimation approaches have been proposed in the literature (Daum, 2005). These suboptimum approaches produce estimates by using some form of approximation of the nonlinear models, and their performance and implementation complexity depend on the type of approximation used. Model approximation error is an important factor affecting the performance of suboptimum estimation approaches; a given suboptimum approach may outperform others only for the specific models considered, i.e., its performance is model-dependent. The most commonly used recursive nonlinear estimation approaches are the extended Kalman filter (EKF) and particle filters. The EKF linearizes nonlinear models by Taylor series expansion (Sage & Melsa, 1971), while the unscented Kalman filter (UKF) approximates a posteriori densities by a set of weighted, deterministically chosen points (Julier, 2004). Particle filters approximate a posteriori densities by a large set of weighted, randomly selected points (called particles) in the state space (Arulampalam et al., 2002; Doucet et al., 2001; Ristic et al., 2004).
In the nonlinear estimation approaches proposed in (Demirbas, 1982; 1984; Demirbas & Leondes, 1985; 1986; Demirbas, 1988; 1989; 1990; 2007; 2010), the disturbance noise and initial state are first approximated by a discrete noise and a discrete initial state whose distribution functions best approximate those of the disturbance noise and initial state; states are quantized, and multiple hypothesis testing is then used for state estimation. Grid-based approaches, in contrast, approximate a posteriori densities by discrete densities determined by predefined gates (cells) in a predefined state space; if the state space is not finite in extent, some truncation of it is required; and grid-based estimation approaches assume the availability of the state […]
Real-time Recursive State Estimation for Nonlinear Discrete Dynamic Systems with Gaussian or non-Gaussian Noise
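The EKF described above linearizes the dynamics and measurement at the current estimate and then runs the usual Kalman predict/update cycle. A minimal scalar sketch under assumed toy dynamics (the model, noise levels, and values here are illustrative, not from the chapter):

```python
import math

def ekf_step(x_est, P, y, f, F, h, H, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter.
    F and H are the derivatives (Jacobians) of f and h."""
    # Predict through the nonlinear dynamics, linearized at x_est
    x_pred = f(x_est)
    Fx = F(x_est)
    P_pred = Fx * P * Fx + Q
    # Update with the measurement, linearized at x_pred
    Hx = H(x_pred)
    K = P_pred * Hx / (Hx * P_pred * Hx + R)
    x_new = x_pred + K * (y - h(x_pred))
    P_new = (1.0 - K * Hx) * P_pred
    return x_new, P_new

# Assumed toy nonlinear dynamics with a direct (noise-free) measurement
f = lambda x: 0.8 * x + 0.5 * math.cos(x)
F = lambda x: 0.8 - 0.5 * math.sin(x)
h = lambda x: x
H = lambda x: 1.0

x_true, x_est, P = 1.0, 0.0, 1.0
for _ in range(5):
    x_true = f(x_true)
    x_est, P = ekf_step(x_est, P, h(x_true), f, F, h, H, Q=0.01, R=0.01)
```

Starting from a deliberately wrong initial estimate, the filter tracks the true state within a few steps; a UKF or particle filter would replace the Jacobian linearization with deterministic or random sample points.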
- Research Article
16
- 10.1016/j.foreco.2013.04.034
- Jun 4, 2013
- Forest Ecology and Management
Towards non-destructive estimation of tree age
- Conference Article
1
- 10.1109/igcc.2014.7039145
- Nov 1, 2014
Data centers, as a cost-effective infrastructure for hosting Cloud and Grid applications, incur tremendous energy costs and CO2 emissions through power distribution and cooling. One effective approach for saving energy in a cluster environment is workload consolidation. However, this scheduling problem is challenging because it requires understanding various cost factors, an important one being the estimation of power consumption. The power models used in most workload scheduling solutions are linear functions of resource features, but analysis of measurement data from our cluster showed that resource loads, in particular I/O load, have no convincing linear correlation with power consumption. Based on measurement data sets from our cluster, we propose multiple non-linear machine learning approaches to estimate the power consumption of an entire node from OS-reported resource features. We evaluate the accuracy, portability and usability of the linear and non-linear approaches. Our work shows that the multiple-variable linear regression approach is more precise than the CPU-only linear approach. The neural network approaches have a slight advantage: their mean root-mean-square error is at most 15% lower than that of the multiple-variable linear approach. However, the neural network models have worse portability when models generated on one node are applied to homogeneous nodes. The Gaussian mixture model has the highest accuracy on Hadoop nodes but requires the longest training time.
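The multiple-variable linear baseline the paper compares against is an ordinary least-squares fit of power on several resource features. A minimal sketch with two hypothetical features (CPU and I/O load) and synthetic data generated from assumed coefficients, solved via the normal equations:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_power_model(samples):
    """OLS fit of power = b0 + b1*cpu + b2*io via the normal equations."""
    X = [[1.0, cpu, io] for cpu, io, _ in samples]
    y = [p for _, _, p in samples]
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
    return solve(XtX, Xty)

# Synthetic (cpu%, io%, watts) rows from assumed power = 50 + 1.2*cpu + 0.3*io
samples = [(10.0, 5.0, 63.5), (50.0, 20.0, 116.0),
           (90.0, 40.0, 170.0), (30.0, 70.0, 107.0)]
b0, b1, b2 = fit_power_model(samples)
```

The paper's point is precisely that this linear form breaks down for I/O-heavy loads, which motivates the neural-network and Gaussian-mixture alternatives.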
- Research Article
- 10.1121/10.0018971
- Mar 1, 2023
- The Journal of the Acoustical Society of America
Time-domain estimates of the shear wave speed are obtained by performing cross-correlations or by evaluating the time-to-peak for two laterally separated shear wave particle velocity waveforms, where the distance divided by the propagation time yields an approximate value for the shear wave speed. Challenges with these time-domain estimation approaches include errors that grow as the shear viscosity increases and the absence of an estimate of the shear viscosity parameter, which is responsible for the rapid attenuation of shear waves in soft tissue. To obtain a time-domain estimate of the shear viscosity that also provides an estimate of the shear wave speed, a nonlinear least squares estimation approach is applied to three-dimensional (3D) shear wave particle velocities computed for a shear elasticity of 1.5 kPa, shear viscosities of 1, 2, 3, and 4 Pa·s, and a realistic simulated 3D acoustic radiation force excitation. The results show that cross-correlations tend to overestimate the shear wave speed, the nonlinear least squares approach tends to underestimate it, and the nonlinear least squares approach produces more accurate estimates as the shear viscosity increases. Two-dimensional maps of each estimated parameter are also shown.
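The core of any least-squares speed estimate is choosing the speed that best explains the arrival times at several lateral positions. A heavily simplified one-parameter sketch (a brute-force grid search on synthetic, noise-free arrival times with an assumed speed of 2.0 mm/ms; the paper's approach fits both speed and viscosity to full 3D waveforms):

```python
def estimate_shear_speed(lateral_mm, arrival_ms, candidates):
    """One-parameter least squares: choose the speed c (mm/ms) that
    minimizes the sum of squared arrival-time residuals t_i - x_i / c."""
    def sse(c):
        return sum((t - x / c) ** 2 for x, t in zip(lateral_mm, arrival_ms))
    return min(candidates, key=sse)

# Synthetic arrivals at x / c for an assumed c = 2.0 mm/ms (i.e. 2 m/s)
x = [2.0, 4.0, 6.0, 8.0]
t = [xi / 2.0 for xi in x]
grid = [i / 100.0 for i in range(100, 401)]   # 1.00 ... 4.00 mm/ms
c_hat = estimate_shear_speed(x, t, grid)
```

In the viscous case the waveform shape changes with distance, which is why the paper fits the waveforms themselves rather than just arrival times.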
- Research Article
43
- 10.1016/j.powtec.2019.10.021
- Oct 8, 2019
- Powder Technology
CFD-DEM study of the effects of food grain properties on drying and shrinkage in a fluidised bed
- Research Article
11
- 10.1114/1.1510449
- Sep 1, 2002
- Annals of biomedical engineering
A stochastic interpretation of Tikhonov regularization has recently been proposed to attack some open problems of deconvolution in physiological systems, i.e., in addition to ill-conditioning, infrequent and nonuniform sampling and the need for credible confidence intervals. However, possible violation of the non-negativity constraint cannot be handled on firm statistical grounds, since the model of the unknown signal is compatible with negative realizations. In this paper, we propose a new model of the unknown input which excludes negative values. The model is embedded within a Bayesian estimation framework to calculate, via a Markov chain Monte Carlo algorithm, a nonlinear estimate of the unknown input given by its a posteriori expected value. Applications to simulated and real hormone secretion/pharmacokinetic problems show that this nonlinear approach is more accurate than the linear one; in addition, more realistic confidence intervals are obtained.
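The estimator described is a posterior mean computed by MCMC under a non-negativity constraint. A toy random-walk Metropolis sketch of that idea (a scalar stand-in with an assumed truncated-normal target, not the paper's input model or sampler):

```python
import math
import random

def posterior_mean_mh(log_target, n_samples=20000, burn_in=2000,
                      step=1.0, x0=1.0, seed=0):
    """Random-walk Metropolis estimate of the posterior expected value of a
    nonnegative scalar. Proposals below zero are always rejected, which is
    equivalent to a target density that is zero on the negative axis."""
    rng = random.Random(seed)
    x, lt = x0, log_target(x0)
    total, kept = 0.0, 0
    for i in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        if prop >= 0.0:
            lt_prop = log_target(prop)
            if rng.random() < math.exp(min(0.0, lt_prop - lt)):
                x, lt = prop, lt_prop
        if i >= burn_in:
            total += x
            kept += 1
    return total / kept

# Assumed toy target: normal(2, 1) truncated to x >= 0 (true mean ~ 2.055)
mean = posterior_mean_mh(lambda x: -0.5 * (x - 2.0) ** 2)
```

Because the posterior expectation is taken over a constrained, non-Gaussian density, the resulting estimate is nonlinear in the data, which is the sense in which the paper's approach is nonlinear.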
- Research Article
7
- 10.1016/j.cej.2023.147151
- Nov 4, 2023
- Chemical Engineering Journal
Hydrogen-rich syngas production and insight into morphology-kinetics correlation for furfural residue steam gasification in a bubbling fluidized bed
- Research Article
- 10.1080/10543406.2024.2421424
- Nov 8, 2024
- Journal of Biopharmaceutical Statistics
Dose–response relationships are important in assessing the efficacy and potency of compounds and can usually be characterized by a 4-parameter logistic (4-PL) model estimating the EC50, slope factor, lower asymptote, and upper asymptote. The EC50, the concentration of a compound that induces a response halfway between the baseline and maximum, is a key quantity for evaluating compound potency. For multi-donor dose–response data, it is often of interest to estimate the overall EC50 (i.e., the average EC50 of the population of donors) and its 95% confidence interval (CI). A few multi-donor EC50 estimation methods have been proposed in the literature. Jiang and Kopp-Schneider (2014) systematically compared the meta-analysis approach and the nonlinear mixed-effects approach, concluding that the meta-analysis approach is simple and robust for summarizing EC50 estimates from multiple experiments and especially suited to a small number of experiments, while the nonlinear mixed-effects approach suffers convergence failures, probably due to overparameterization. In this article, we propose a modification of the nonlinear mixed-effects approach that uses the stochastic approximation expectation-maximization (SAEM) algorithm to estimate model parameters and multiple starting points to search for globally optimal values. This substantially alleviates the convergence failures even for a small number of donors (e.g. n = 3), and achieves a smaller absolute median bias and better coverage probability of the 95% confidence interval than the meta-analysis approach when the number of donors is not too small (e.g. n ≥ 7).
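The 4-PL curve and the halfway interpretation of EC50 can be written down directly. A minimal sketch using assumed parameter values (and a brute-force EC50 recovery on synthetic data, standing in for the SAEM machinery the article actually uses):

```python
def four_pl(x, lower, upper, ec50, hill):
    """4-parameter logistic dose-response curve (increasing form):
    approaches `lower` as x -> 0 and `upper` at high dose."""
    return lower + (upper - lower) / (1.0 + (ec50 / x) ** hill)

def ec50_grid_fit(doses, responses, lower, upper, hill, candidates):
    """Recover EC50 by brute-force least squares with the other three
    parameters held fixed (illustration only, not SAEM)."""
    def sse(c):
        return sum((r - four_pl(d, lower, upper, c, hill)) ** 2
                   for d, r in zip(doses, responses))
    return min(candidates, key=sse)

# Synthetic single-donor data from assumed parameters with EC50 = 1.5
doses = [0.1, 0.5, 1.0, 1.5, 2.0, 5.0, 10.0]
resp = [four_pl(d, 0.0, 100.0, 1.5, 2.0) for d in doses]
ec50_hat = ec50_grid_fit(doses, resp, 0.0, 100.0, 2.0,
                         [i / 100.0 for i in range(50, 301)])
```

At x = EC50 the term (ec50/x)^hill equals 1, so the response is exactly midway between the asymptotes; the multi-donor problem adds random effects on these parameters across donors, which is where the mixed-effects model and SAEM come in.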
- Research Article
7
- 10.1016/s0165-1684(01)00103-7
- Aug 30, 2001
- Signal Processing
Effective nonlinear approach for optical flow estimation