Comparing open-source optimisation algorithms for functionally graded material design: a thermoelastic case study
- Preprint Article
- 10.5194/egusphere-egu21-8914
- Mar 4, 2021
The city of Rio de Janeiro is situated within a coastal region with steep slopes, intense seasonal rainfall, and vulnerable populations located on marginal slopes. Landslides are a seasonal challenge within the city and nearby regions, and increasing real-time awareness of the hazard and exposure is paramount to saving lives and mitigating damage. A local alerting system has been developed for the city that couples NASA's global Landslide Hazard Assessment for Situational Awareness (LHASA) framework with local rainfall thresholds and landslide susceptibility information. The LHASA-Rio system uses a decision-tree approach to first identify extreme rainfall based on a series of rainfall thresholds established by Geo-Rio (the city's agency responsible for landslide hazards) for 1-hour, 1-day, or combined 1-hour and 4-day accumulations. This is then coupled with landslide susceptibility information also developed by the Geo-Rio team. The LHASA-Rio system has been running operationally since 2017, providing real-time, high-resolution estimates of the areas of the city at higher hazard at 15-minute intervals, consistent with the rain-gauge network distributed throughout the city. Results indicate excellent performance for several case studies in which extreme rainfall triggered landslides over areas identified as high-hazard zones by LHASA-Rio. The model has recently been updated to accommodate additional rainfall thresholds that differentiate moderate to very high and critical intensities. The modeling effort is also incorporating information on landslide exposure by connecting the hazard estimates to city-wide data on population, road networks, and other infrastructure. The goal of this system is ultimately to provide key tools to emergency response teams, civil protection, and other hazard-monitoring organizations within Rio's city government in real time, and to provide actionable information for key communities, city management, and planning. Future work includes applying a regional precipitation forecast to improve lead time.

This work has been done in partnership through an agreement established between NASA and the City of Rio de Janeiro in 2015 and recently extended in 2020. This agreement seeks to support innovative efforts to better understand, anticipate, and monitor hazards and environmental issues, including heavy rainfall and landslides, urban flooding, air quality, and water quality in and around the city. The collaboration leverages the unique attributes of NASA's satellite data and modeling frameworks and Rio de Janeiro's management and monitoring capabilities to improve awareness of how the city may be impacted by hazards and affected by climate change. If the success of this technology is demonstrated, other cities with physiographic and socioeconomic characteristics similar to Rio de Janeiro may benefit by implementing, or strengthening, their own early-warning systems for landslides triggered by heavy rains, using LHASA's open-source algorithms and the experience gathered from LHASA-Rio. This presentation highlights the achievements and advancements of the LHASA-Rio system and discusses lessons learned regarding the application of landslide modeling systems to advance decision-relevant science at the city level.
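The decision-tree logic described above lends itself to a compact sketch. The following is a minimal illustration only: the threshold values, level names, and susceptibility scale are invented placeholders, not the operational Geo-Rio thresholds.

```python
def hazard_level(rain_1h_mm, rain_24h_mm, rain_96h_mm, susceptibility):
    """Decision-tree sketch: flag extreme rainfall via thresholds, then
    combine with a susceptibility score (0 = low, 1 = high). Values made up."""
    extreme = (
        rain_1h_mm >= 30.0                                 # 1-hour rule
        or rain_24h_mm >= 100.0                            # 1-day rule
        or (rain_1h_mm >= 20.0 and rain_96h_mm >= 150.0)   # 1 h + 4 day rule
    )
    if not extreme:
        return "low"
    return "critical" if susceptibility >= 0.7 else "moderate"

# A high-susceptibility cell during an intense downpour:
print(hazard_level(rain_1h_mm=35, rain_24h_mm=80, rain_96h_mm=120,
                   susceptibility=0.8))                    # -> "critical"
```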
- Research Article
- 10.1007/s10462-025-11210-0
- May 3, 2025
- Artificial Intelligence Review
The aim of Influence Maximization (IM) in social networks is to identify an optimal subset of users that maximizes the spread of influence across the network. Fair Influence Maximization (FIM) extends the IM problem with the aim of distributing influence equitably across communities and enhancing the fair propagation of information. Among the solutions for FIM, community-based techniques enhance performance by effectively capturing structural properties and ensuring a more equitable influence spread. However, these techniques often ignore the overlapping nature of communities and suffer from a trade-off between complexity and fairness. Motivated by this, this study addresses FIM through Overlapping Community detection combined with optimization algorithms (FIMOC). FIMOC includes an overlapping community detection approach that accounts for the importance of influential overlapping nodes in communities. FIMOC also uses a community-based module for selecting both non-overlapping and overlapping candidate nodes. Subsequently, FIMOC applies the Open-Source Development Model Algorithm (ODMA) as an optimization algorithm to identify the set of influential nodes. Our method considers the dynamic and overlapping nature of social communities, ensuring that the influence spread is not only maximized but also equitably distributed across diverse groups. Using real-world social networks, we demonstrate the effectiveness of our method compared to state-of-the-art methods through extensive experiments. The results show that our method achieves a more balanced influence spread, providing a fairer solution while also enhancing the overall reach of information dissemination.
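To make the fairness objective concrete, here is a toy greedy selector that picks seeds to maximize the minimum coverage across (possibly overlapping) communities, using one-hop reach as a crude proxy for influence spread. This illustrates only the fairness idea; it is not FIMOC's ODMA optimizer or its diffusion model, and all data are invented.

```python
def fair_greedy_seeds(neighbors, communities, k):
    """Greedily pick k seeds maximizing the minimum fraction of each
    community reachable within one hop (a crude spread proxy)."""
    seeds, covered = set(), set()

    def min_coverage(reach):
        return min(len(reach & c) / len(c) for c in communities)

    for _ in range(k):
        best, best_score = None, -1.0
        for v in neighbors:
            if v in seeds:
                continue
            score = min_coverage(covered | {v} | neighbors[v])
            if score > best_score:
                best, best_score = v, score
        seeds.add(best)
        covered |= {best} | neighbors[best]
    return seeds

# Two overlapping communities sharing node 2.
neighbors = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
communities = [{0, 1, 2}, {2, 3, 4}]
print(fair_greedy_seeds(neighbors, communities, k=2))
```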
- Conference Article
- 10.1109/wsc.2010.5679159
- Dec 1, 2010
This paper proposes an open-source algorithm for simulation optimization. The intent is to let users of a variety of simulation software packages apply the proposed methods through an MS Excel-Visual Basic interface. First, we review selected literature on simulation optimization and its usefulness. Then, we briefly discuss methods commonly used for simulation optimization. Next, we present the proposed Population Indifference Zone (PIZ) algorithm and the related software, including the Visual Basic code that runs the program. Finally, we discuss the properties and functionality of the Population Indifference Zone method, give examples of problems to which it might be applied, and conclude with topics for future research.
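For readers unfamiliar with indifference-zone reasoning, the sketch below shows a generic single-stage indifference-zone subset selection: systems whose sample means fall within delta of the best are retained as statistically indistinguishable. This is a textbook-style illustration of the concept only, not the paper's PIZ procedure, and the simulation model is a toy.

```python
import random
import statistics

def iz_subset(simulate, systems, n=50, delta=0.5):
    """Retain every system whose sample mean is within delta of the best;
    delta is the indifference zone (differences below it are immaterial)."""
    means = {s: statistics.mean(simulate(s) for _ in range(n)) for s in systems}
    best = max(means.values())
    return {s for s, m in means.items() if m >= best - delta}

# Toy simulation: system i yields Gaussian output with true mean i.
random.seed(0)
print(iz_subset(lambda s: random.gauss(s, 1.0), systems=[1, 2, 3], n=200))
```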
- Research Article
- 10.29297/orbit.v2i2.110
- Jan 1, 2019
- The ORBIT Journal
By 2030, the population living in cities will increase by an additional 1.5 billion people, placing a great strain on resources, infrastructure, jobs and healthcare (UN 2018). It has become clear that to combat this change, a number of creative approaches need to be put in place to ensure the sustainable growth of cities - one such approach is the ‘smart city’ (UN 2018). Due to the relative infancy of smart cities, and the diversity of approaches and implementations of smart information systems (Big Data and AI), many of the ethical challenges are still being defined.

One of the reasons behind this challenge is the variety of smart information systems (SIS) being used in different urban contexts. This case study aspires to unpack some of these ethical challenges by looking at four different applications of SIS being deployed in large European cities: an AI used to understand citizens’ complaints (Amsterdam), a parking permit chat-bot (Helsinki), a platform for data exchange (Copenhagen), and a project with an open-source algorithm (Hamburg).

At first glance, these technologies seem very disparate, but they all factor into the equation of what makes a smart city ‘smart’.

Over the course of the interviews, what quickly became clear was the degree to which smart cities are in their infancy, meaning that the availability and accuracy of data remains an issue in a large majority of the cases. In terms of the accuracy of recommendations, because smart city implementation is at an early stage, many projects remain wary of expanding the use of SIS due to potential unforeseen issues and are therefore proceeding cautiously.

Data has been taken up as a potentially helpful tool for citizens and planners alike to regain control over, and access to, information within their respective cities. Consent, transparency and data ownership featured as prominent ethical considerations in all cases, especially the focus on citizens regaining control over their own data. Further, it remained a point of contention to whom the data would belong, with an overall consensus that data should remain the property of the citizen or municipality and not necessarily that of private companies.

Throughout the process, it became clear that collaboration is at the heart of a successful smart city. Many of the projects utilised a collaborative public-private model to facilitate both the business development and citizen-engagement sides of the smart city. With differing degrees of success in the individual projects, this remained an important feature that experts believe will continue to develop in tandem with smart city projects. A bottom-up approach is clearly the most effective way to ensure that a smart city works and is used by its citizens.

Overall, this case study offers valuable insights into the development of smart cities in a European context: the use and implementation of SIS in urban environments, the kinds of ethical issues evaluated in the literature, and how these contrast with those faced by professionals in practice. It is hoped that this case study will offer practitioners, policymakers, smart city organisations, and private ICT companies interesting observations about a more ethically responsible approach towards SIS implementation in smart city projects.
- Research Article
- 10.3390/s25092953
- May 7, 2025
- Sensors (Basel, Switzerland)
Multi-robot collaborative SLAM faces excessive redundant computation and low processing efficiency in candidate loop-closure selection during front-end loop detection, as well as high computational complexity and long iteration times due to global pose optimization in the back-end. To address these challenges, this paper introduces several key improvements. First, a global matching and candidate loop selection strategy is incorporated into the front-end loop detection module, leveraging both LiDAR point clouds and visual features to achieve cross-robot loop detection, effectively mitigating computational redundancy and reducing false matches in collaborative multi-robot systems. Second, an improved distributed robust pose graph optimization algorithm is proposed in the back-end module. By introducing a robust cost function to filter out erroneous loop closures and employing a subgraph optimization strategy during iterative optimization, the proposed approach enhances convergence speed and solution quality, thereby reducing uncertainty in multi-robot pose association. Experimental results demonstrate that the proposed method significantly improves computational efficiency and localization accuracy. Specifically, in front-end loop detection, the proposed algorithm achieves an F1-score improvement of approximately 8.5-51.5% compared to other methods. In back-end optimization, it outperforms traditional algorithms in terms of both convergence speed and optimization accuracy. In terms of localization accuracy, the proposed method achieves an improvement of approximately 32.8% over other open-source algorithms.
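To illustrate how a robust cost function suppresses erroneous loop closures, the sketch below shows a Huber-style kernel weight as it would be applied to loop-closure residuals inside an iteratively reweighted least-squares pose-graph solver. The kernel choice and delta value are illustrative assumptions; the paper's specific robust cost function and subgraph strategy are not reproduced here.

```python
def huber_weight(residual_norm, delta=1.0):
    """IRLS weight for a loop-closure residual: 1 for inliers,
    decaying as delta / |r| for large (likely erroneous) loops."""
    return 1.0 if residual_norm <= delta else delta / residual_norm

# A false loop closure with a large residual is strongly down-weighted.
for r in (0.3, 1.0, 5.0, 20.0):
    print(f"residual {r:5.1f} -> weight {huber_weight(r):.3f}")
```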
- Research Article
- 10.1007/s00216-024-05425-3
- Jul 12, 2024
- Analytical and Bioanalytical Chemistry
Feature detection plays a crucial role in non-target screening (NTS), requiring careful selection of algorithm parameters to minimize false positive (FP) features. In this study, a stochastic approach was employed to optimize the parameter settings of feature detection algorithms used in processing high-resolution mass spectrometry data. This approach was demonstrated using four open-source algorithms (OpenMS, SAFD, XCMS, and KPIC2) within the patRoon software platform for processing extracts from drinking water samples spiked with 46 per- and polyfluoroalkyl substances (PFAS). The designed method is based on a stochastic strategy involving random sampling from the variable space and the use of Pearson correlation to assess the impact of each parameter on the number of detected suspect analytes. Using our approach, the optimized parameters improved algorithm performance by increasing suspect hits in the case of SAFD and XCMS, and by reducing the total number of detected features (i.e., minimizing FP) for OpenMS. These improvements were further validated on three different drinking water samples as a test dataset. The optimized parameters resulted in a lower false discovery rate (FDR%) compared to the default parameters, effectively increasing the detection of true positive features. This work also highlights the necessity of optimizing algorithm parameters before starting NTS to reduce the complexity of such datasets. Supplementary information: The online version contains supplementary material available at 10.1007/s00216-024-05425-3.
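The core loop of such a stochastic strategy is easy to sketch: draw random parameter settings, score each run, and correlate each parameter with the response. In the sketch below the parameter names, their ranges, and the response function are invented placeholders, not actual patRoon or OpenMS settings.

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 200
noise_threshold = rng.uniform(100, 10000, n_runs)   # hypothetical parameter
peak_width = rng.uniform(1, 30, n_runs)             # hypothetical parameter

# Hypothetical response: suspect hits fall with threshold, rise with width.
suspect_hits = (40 - 0.002 * noise_threshold + 0.5 * peak_width
                + rng.normal(0, 2, n_runs))

# Pearson correlation reveals which parameter drives the suspect-hit count.
for name, x in [("noise_threshold", noise_threshold),
                ("peak_width", peak_width)]:
    r = np.corrcoef(x, suspect_hits)[0, 1]
    print(f"{name}: Pearson r = {r:+.2f}")
```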
- Research Article
- 10.1186/s42492-022-00116-1
- Aug 3, 2022
- Visual Computing for Industry, Biomedicine, and Art
Pancreatoscopy plays a significant role in the diagnosis and treatment of pancreatic diseases. However, the risk of pancreatoscopy is remarkably greater than that of other endoscopic procedures, such as gastroscopy and bronchoscopy, owing to its severe invasiveness. In comparison, virtual pancreatoscopy (VP) has shown notable advantages. However, because of the low resolution of current computed tomography (CT) technology and the small diameter of the pancreatic duct, VP has limited clinical use. In this study, an optimal path algorithm and super-resolution technique are investigated for the development of an open-source software platform for VP based on 3D Slicer. The proposed segmentation of the pancreatic duct from the abdominal CT images reached an average Dice coefficient of 0.85 with a standard deviation of 0.04. Owing to the excellent segmentation performance, a fly-through visualization of both the inside and outside of the duct was successfully reconstructed, thereby demonstrating the feasibility of VP. In addition, a quantitative analysis of the wall thickness and topology of the duct provides more insight into pancreatic diseases than a fly-through visualization. The entire VP system developed in this study is available at https://github.com/gaoyi/VirtualEndoscopy.git.
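For reference, the Dice coefficient reported above is defined as Dice = 2|A ∩ B| / (|A| + |B|) for a predicted binary mask A and a ground-truth mask B. A minimal NumPy version, with toy masks rather than the study's data:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

a = np.array([[0, 1, 1], [0, 1, 0]])   # toy prediction
b = np.array([[0, 1, 0], [0, 1, 1]])   # toy ground truth
print(dice(a, b))                      # 2*2 / (3+3) = 0.666...
```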
- Research Article
- 10.1177/23998083251369143
- Sep 3, 2025
- Environment and Planning B: Urban Analytics and City Science
Kevin Lynch’s concept of imageability describes how effectively an environment evokes a mental image in an observer’s mind. It consists of three components (“identity, structure, and meaning”), with the first two being the main components of Lynch’s cognitive map. Although imageability has significantly influenced urban design and planning and inspired numerous subsequent studies, the “meaning” component has not been clearly studied. The rise of new urban data, particularly the booming availability of reviews of urban spaces on platforms such as TripAdvisor and Google, offers a valuable opportunity to incorporate meaning into the study of imageability. By adapting several open-source algorithms, this research efficiently extracts both objective (e.g. location, number of reviews) and subjective (e.g. ratings, review text) information from the online platform, proposing a novel approach to studying the meaning component through a fine-tuned BERT model. These data and methods enable this research to capture and categorize the meaning component for describing the image of the city, using Singapore as a case study. The results show that: (1) Lynch’s cognitive mapping approach could potentially be enhanced by incorporating meaning into the study of imageability; it could amplify existing nodes or landmarks and create new “nodes”. (2) The proposed “meaning patch” could add new layers to the structure of the city image by representing the shared meanings of multiple places, suggesting its potential as a sixth element extending the existing imageability framework and opening a new agenda for future studies.
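A fine-tuned BERT classifier of the kind described can be set up in a few lines with the Hugging Face transformers library. In the hedged sketch below, the category labels, example reviews, and training settings are invented placeholders; the authors' actual label set, checkpoint, and training protocol may differ.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

labels = ["heritage", "leisure", "commerce"]   # hypothetical meaning categories
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))

# Two invented reviews with invented labels, tokenized once up front.
texts = ["A peaceful temple full of history.", "Great mall for shopping."]
y = [0, 2]
enc = tok(texts, truncation=True, padding=True, return_tensors="pt")

class Reviews(torch.utils.data.Dataset):
    def __len__(self):
        return len(y)
    def __getitem__(self, i):
        return {**{k: v[i] for k, v in enc.items()},
                "labels": torch.tensor(y[i])}

Trainer(model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=Reviews()).train()
```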
- Research Article
- 10.7717/peerj.9258
- May 27, 2020
- PeerJ
The resources available for conserving biodiversity are limited, so protected areas need to be established in places that will achieve objectives for minimal cost. Two of the main algorithms for solving systematic conservation planning problems are Simulated Annealing (SA) and exact integer linear programming (EILP) solvers. Using a case study in British Columbia, Canada, we compare the cost-effectiveness and processing times of SA as used in Marxan versus EILP using both commercial and open-source algorithms. Plans for expanding protected area systems based on EILP algorithms were 12–30% cheaper than plans using SA, due to EILP’s ability to find optimal solutions as opposed to approximations. The best EILP solver we examined was on average 1,071 times faster than the SA algorithm tested. The performance advantages of EILP solvers were also observed when we aimed for spatially compact solutions by including a boundary penalty. One practical advantage of using EILP over SA is that the analysis does not require calibration, saving even more time. Given the performance of EILP solvers, they can be used to generate conservation plans in real time during stakeholder meetings, facilitate rapid sensitivity analysis, and contribute to a more transparent, inclusive, and defensible decision-making process.
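The exact-solver approach translates naturally into a small integer program. Below is a minimal set-cover-style reserve selection in PuLP, an open-source Python modeller that calls the CBC solver; the planning units, costs, and species data are toy values, and the boundary penalty discussed in the paper is omitted.

```python
import pulp

cost = {"A": 4, "B": 3, "C": 5, "D": 2}                  # planning-unit costs
hosts = {"sp1": ["A", "B"], "sp2": ["B", "C"], "sp3": ["C", "D"]}

prob = pulp.LpProblem("reserve_selection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("pick", cost, cat="Binary")
prob += pulp.lpSum(cost[u] * x[u] for u in cost)         # minimize total cost
for sp, units in hosts.items():                          # cover every species
    prob += pulp.lpSum(x[u] for u in units) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([u for u in cost if x[u].value() == 1])            # optimal: ['B', 'D']
```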
- Research Article
- 10.1504/ijcaet.2024.10063825
- Jan 1, 2024
- International Journal of Computer Aided Engineering and Technology
Comparing open-source optimisation algorithms for functionally graded material design: a thermoelastic case study
- Research Article
- 10.2308/ciia-10787
- Mar 1, 2022
- Current Issues in Auditing
This issue of Current Issues in Auditing includes a Special Forum of collaborations between academics and practitioners. Four articles are featured. The first article, “Robotic Process Automation for the Extraction of Audit Information: A Use Case,” includes co-authors from PricewaterhouseCoopers and describes how an open-source algorithm using Python can be an effective and efficient tool for extraction of audit evidence (Bellinga, Bosman, Höcük, Janssen, and Khzam 2022). The second article, “Greater Than the Sum of Its Parts: Collaborating for Diversity,” includes co-authors from EY and RSM, and offers a case study on how academics can translate the results of research into industry action (Dey, Lim, Ross, Walker, and Bouyer 2022). The third article, “The Structure of State Auditor Functions in the Fight Against Corruption,” includes a co-author from LWG CPA & Advisors (Flasher, Shirley, and Higgins 2022). It investigates differences in the effectiveness of fraud deterrence and detection efforts and finds support for combining responsibilities for financial statement audits and fraud investigations with state auditors. The fourth article, “Woman-to-Woman Workplace Bullying in the Audit Field,” presents evidence of social aggression based on semi-structured interviews of women auditing practitioners (Tribou and Kidd 2022). The article includes recommendations for practice and the academy to address communication and interpersonal issues beyond intragender bullying.

These original research articles are examples that are consistent with CIIA's objective of “advancing the dialogue between academics and practitioners on current issues facing the auditing practice community.” We believe that collaborating both informally (e.g., conversations and courtesy reads) and formally (e.g., as participants and co-authors) with practitioners helps improve the validity, reliability, and generalizability of auditing research. Collaborations also increase the likelihood that academic research will have impact.

Beyond collaboration, to have impact investigations must be of issues important to practitioners. To increase the likelihood that CIIA is publishing practice-relevant studies and summaries, the title and abstract of all submissions are read by at least one audit partner prior to advancing the submission to the formal review process. Authors uncertain about whether their studies are appropriate for CIIA are encouraged to send the Academic Co-editor a short summary of their planned research project, or the abstract of a previously published article, for pre-submission feedback. Feedback from a partner-practitioner will be provided within 14 days.

The most recent issues of CIIA include only articles that have passed the practitioner screening process, and most reviewer teams included a practitioner-member of CIIA's Editorial Board. We are particularly grateful to the practitioners and firms that have committed their time and expertise: BDO; Crowe; Deloitte; EY; Focal Point; Grant Thornton; Jeanette M. Franzel, CPA; KPMG; Protiviti; and PwC.

Topics covered by articles in this issue include:

On October 1, 2021, we issued a Call for Papers on Environmental, Social, and Governance (ESG) disclosures and assurance, a topic that practitioners tell us is likely to have implications for the auditing practice for the foreseeable future. We hope to publish several rigorous, unbiased investigations as a Special Forum in the Spring 2023 issue of CIIA.
Ideas for possible studies include, but are not limited to:

We are grateful to the academic- and practitioner-authors who have given us the privilege of considering their work. We are also grateful to members of the Editorial Board and ad hoc reviewers who have performed diligent and timely reviews to help authors and aid the editorial decision-making process. We are indebted to the practitioners who support the accounting academy by participating in our studies and reading the results of our efforts.
- Research Article
- 10.1371/journal.pone.0259916
- Nov 16, 2021
- PLOS ONE
Background: Atrial fibrillation (AFib) is the most common cardiac arrhythmia and is associated with stroke, blood clots, heart failure, coronary artery disease, and/or death. Multiple methods have been proposed for AFib detection, with varying performance, but no single approach appears to be optimal. We hypothesized that each state-of-the-art algorithm is appropriate for different subsets of patients and provides some independent information; therefore, a set of suitably chosen algorithms, combined in a weighted voting framework, will provide superior performance to any single algorithm.

Methods: We investigate and modify 38 state-of-the-art AFib classification algorithms for a single-lead ambulatory electrocardiogram (ECG) monitoring device. All algorithms are ranked using a random forest classifier and an expert-labeled training dataset of 2,532 recordings. The seven top-ranked algorithms are combined using an optimized weighting approach.

Results: The proposed fusion algorithm, when validated on a separate test dataset of 4,644 recordings, achieved an area under the receiver operating characteristic (ROC) curve of 0.99. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1-score of the proposed algorithm were 0.93, 0.97, 0.87, 0.99, and 0.90, respectively, all superior to any single algorithm or any previously published method.

Conclusion: This study demonstrates how a set of well-chosen independent algorithms, together with a voting mechanism that fuses their outputs, outperforms any single state-of-the-art algorithm for AFib detection. The proposed framework is a case study for the general notion of crowdsourcing between open-source algorithms in healthcare applications. Extending this framework to similar applications may significantly save time, effort, and resources by combining readily existing algorithms. It is also a step toward the democratization of artificial intelligence and its application in healthcare.
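The weighted-voting fusion can be pictured with a few lines of NumPy. In the sketch below, the per-algorithm probabilities and validation-AUC-based weights are invented for illustration; the paper derives its weights with a dedicated optimization over the seven top-ranked algorithms.

```python
import numpy as np

val_auc = np.array([0.95, 0.92, 0.90])      # hypothetical validation AUCs
weights = val_auc / val_auc.sum()           # simple normalized weighting

# Each row: one recording; each column: one algorithm's AFib probability.
probs = np.array([[0.9, 0.8, 0.7],
                  [0.2, 0.4, 0.1]])
fused = probs @ weights                     # weighted vote per recording
print((fused >= 0.5).astype(int))           # fused AFib decisions: [1 0]
```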
- Research Article
- 10.1016/j.ijleo.2018.10.073
- Oct 13, 2018
- Optik
Open-source optimization algorithms for optical design
- Research Article
- 10.1007/s11750-024-00683-x
- Sep 17, 2024
- TOP
The paper investigates the impact of optimization algorithms on the training of deep neural networks, with particular attention to the interaction between the optimizer and generalization performance. In particular, we analyze the behavior of state-of-the-art optimization algorithms in relation to their hyperparameter settings, assessing how robust each one is to the choice of starting point, since different starting points can lead to different local solutions. We conduct extensive computational experiments using nine open-source optimization algorithms to train deep convolutional neural network architectures on an image multi-class classification task. Specifically, we consider several architectures, varying the number of layers and neurons per layer, to evaluate the impact of different width and depth structures on computational optimization performance. We show that the optimizers often return different local solutions and highlight the strong correlation between the quality of the solution found and the generalization capability of the trained network. We also discuss the role of hyperparameter tuning and show how a tuned hyperparameter setting can be reused for the same task on different problems, achieving better efficiency and generalization performance than a default setting.
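A minimal skeleton of such an experiment, in PyTorch, trains the same randomly initialized model from a fixed starting point under different optimizers and compares the final losses. The model, data, learning rates, and the two optimizers below are placeholders; the paper uses nine optimizers and deep CNNs on image classification.

```python
import torch

torch.manual_seed(0)
X = torch.randn(256, 20)                     # toy features
y = torch.randint(0, 3, (256,))              # toy 3-class labels

def train(opt_name):
    torch.manual_seed(1)                     # identical starting point
    model = torch.nn.Sequential(
        torch.nn.Linear(20, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
    if opt_name == "sgd":
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
    else:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

for name in ("sgd", "adam"):
    print(name, train(name))   # same start, often different end points
```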