A novel multi-scale feature fusion network for tile spalling segmentation in building exterior

References (showing 10 of 58 papers)
  • Multi-Scale Feature Fusion Convolutional Neural Network for Indoor Small Target Detection. Li Huang + 7 more. Frontiers in Neurorobotics, May 19, 2022. doi:10.3389/fnbot.2022.881021 (cited by 103)
  • The Lovasz-Softmax Loss: A Tractable Surrogate for the Optimization of the Intersection-Over-Union Measure in Neural Networks. Maxim Berman + 2 more. Jun 1, 2018. doi:10.1109/cvpr.2018.00464 (cited by 775)
  • Pruning deep convolutional neural networks for efficient edge computing in condition assessment of infrastructures. Rih‑Teng Wu + 5 more. Computer-Aided Civil and Infrastructure Engineering, May 21, 2019. doi:10.1111/mice.12449 (cited by 84)
  • Building and Infrastructure Defect Detection and Visualization Using Drone and Deep Learning Technologies. Yuhan Jiang + 2 more. Journal of Performance of Constructed Facilities, Dec 1, 2021. doi:10.1061/(asce)cf.1943-5509.0001652 (cited by 33)
  • Towards a guideline for evaluation metrics in medical image segmentation. Dominik Müller + 2 more. BMC Research Notes, Jun 20, 2022. doi:10.1186/s13104-022-06096-y (cited by 285)
  • USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Leonardo Rundo + 14 more. Neurocomputing, Jul 23, 2019. doi:10.1016/j.neucom.2019.07.006 (cited by 247)
  • The Role of Context for Object Detection and Semantic Segmentation in the Wild. Roozbeh Mottaghi + 7 more. Jun 1, 2014. doi:10.1109/cvpr.2014.119 (cited by 1402)
  • Semantic Deep Learning Integrated with RGB Feature-Based Rule Optimization for Facility Surface Corrosion Detection and Evaluation. Atiqur Rahman + 2 more. Journal of Computing in Civil Engineering, Nov 1, 2021. doi:10.1061/(asce)cp.1943-5487.0000982 (cited by 30)
  • Expectation-Maximization Attention Networks for Semantic Segmentation. Xia Li + 5 more. Oct 1, 2019. doi:10.1109/iccv.2019.00926 (cited by 545)
  • Semi-supervised semantic segmentation network for surface crack detection. Wenjun Wang + 1 more. Automation in Construction, May 30, 2021. doi:10.1016/j.autcon.2021.103786 (cited by 87)

Similar Papers
  • High-Resolution SAR Image Classification Using Multi-Scale Deep Feature Fusion and Covariance Pooling Manifold Network. Wenkai Liang + 4 more. Remote Sensing, Jan 19, 2021. doi:10.3390/rs13020328 (cited by 14)

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, intricate spatial structural patterns and a complex statistical nature make SAR image classification a challenging task, especially when labeled SAR data are limited. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers the multi-feature information fusion of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN), a deep convolutional neural network (CNN), to capture spatial patterns and obtain discriminative features of SAR images. To make full use of a large amount of unlabeled data, the weights of each layer of MFFN are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in MFFN can effectively exploit the complementary information between different levels and scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images over the fused feature maps; the resulting covariance descriptor is more distinctive for various land covers. Experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with other related algorithms.
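The covariance pooling step described above can be sketched in a few lines: a C×H×W feature map is flattened over its spatial positions and the C×C channel covariance is taken as a global second-order descriptor. This is a generic illustration of covariance pooling, not code from MFFN-CPMN; the function name and toy dimensions are invented for the example.

```python
import numpy as np

def covariance_pool(features):
    """Second-order (covariance) pooling of a C x H x W feature map.

    Treats every spatial position as one observation of the C channel
    values and returns the C x C channel covariance as a global descriptor.
    """
    c, h, w = features.shape
    x = features.reshape(c, h * w)           # each column is one spatial position
    x = x - x.mean(axis=1, keepdims=True)    # center each channel
    return (x @ x.T) / (h * w - 1)           # sample covariance across positions

# Toy example: 4 channels over an 8x8 map
rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 8, 8))
cov = covariance_pool(fmap)
print(cov.shape)  # (4, 4)
```

The descriptor is symmetric positive semi-definite, which is what lets manifold-network approaches treat it as a point on the SPD manifold.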

  • MTC-Net: Multi-scale feature fusion network for medical image segmentation. Shujun Ren + 1 more. Journal of Intelligent & Fuzzy Systems, Apr 18, 2024. doi:10.3233/jifs-237963

Image segmentation is critical in medical image processing for lesion detection, localisation, and subsequent diagnosis. Computer-aided diagnosis (CAD) now plays a significant role in improving diagnostic efficiency and accuracy. The segmentation task is made more difficult by hazy lesion boundaries and irregular shapes, and because standard convolutional neural networks (CNNs) cannot capture global contextual information, adequate segmentation results are difficult to achieve. In this paper we propose a multi-scale feature fusion network (MTC-Net) that integrates depthwise separable convolution and self-attention modules in the encoder to achieve better local continuity of images and feature maps. In the decoder, a multi-branch multi-scale feature fusion module (MSFB) improves the network's feature extraction capability and is integrated with a global cooperative aggregation module (GCAM) to learn more contextual information and adaptively fuse multi-scale features. To develop rich hierarchical representations of irregular shapes, the proposed detail enhancement module (DEM) adaptively integrates local characteristics with their global dependencies. To validate the effectiveness of the proposed network, we conducted extensive experiments on the public skin, breast, thyroid, and gastrointestinal-tract datasets ISIC2018, BUSI, TN3K, and Kvasir-SEG. Comparison with the latest methods also verifies the superiority of MTC-Net in terms of accuracy. Our code is available at https://github.com/gih23/MTC-Net.

  • MSF-Net: A Lightweight Multi-Scale Feature Fusion Network for Skin Lesion Segmentation. Dangguo Shao + 2 more. Biomedicines, Jun 16, 2023. doi:10.3390/biomedicines11061733 (cited by 6)

Segmentation of skin lesion images facilitates the early diagnosis of melanoma. However, this remains a challenging task due to the diversity of target scales, irregular segmentation shapes, low contrast, and blurred boundaries of dermatological graphics. This paper proposes a multi-scale feature fusion network (MSF-Net) based on comprehensive attention convolutional neural network (CA-Net). We introduce the spatial attention mechanism in the convolution block through the residual connection to focus on the key regions. Meanwhile, Multi-scale Dilated Convolution Modules (MDC) and Multi-scale Feature Fusion Modules (MFF) are introduced to extract context information across scales and adaptively adjust the receptive field size of the feature map. We conducted many experiments on the public data set ISIC2018 to verify the validity of MSF-Net. The ablation experiment demonstrated the effectiveness of our three modules. The comparison experiment with the existing advanced network confirms that MSF-Net can achieve better segmentation under fewer parameters.
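The receptive-field effect that multi-scale dilated convolution modules such as MDC exploit can be illustrated in one dimension: the same 3-tap kernel spans 3, 5, or 9 input samples depending on the dilation rate, so parallel branches see context at different scales with no extra parameters. A minimal plain-numpy sketch, not the paper's implementation; `dilated_conv1d` is a hypothetical helper:

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1D convolution with a dilated kernel (cross-correlation form)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field of the kernel
    out_len = len(signal) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        for j in range(k):
            out[i] += signal[i + j * dilation] * kernel[j]
    return out

x = np.arange(16, dtype=float)
k = np.array([1.0, 1.0, 1.0])
# Same 3-tap kernel, growing receptive field: 3, 5, and 9 samples
for d in (1, 2, 4):
    y = dilated_conv1d(x, k, d)
    print(d, (len(k) - 1) * d + 1, len(y))
```

Stacking or concatenating such branches is the usual way multi-scale modules adaptively widen the effective receptive field of a feature map.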

  • Multi-scale Adaptive Feature Fusion Network for Semantic Segmentation in Remote Sensing Images. Ronghua Shang + 5 more. Remote Sensing, Mar 9, 2020. doi:10.3390/rs12050872 (cited by 88)

Semantic segmentation of high-resolution remote sensing images is highly challenging due to the presence of a complicated background, irregular target shapes, and similarities in the appearance of multiple target categories. Most existing segmentation methods that rely only on simple fusion of the extracted multi-scale features often fail to provide satisfactory results when there is a large difference in target sizes. To handle this problem through multi-scale context extraction and efficient fusion of multi-scale features, in this paper we present an end-to-end multi-scale adaptive feature fusion network (MANet) for semantic segmentation in remote sensing images. It is an encoder-decoder structure that includes a multi-scale context extraction module (MCM) and an adaptive fusion module (AFM). The MCM employs two layers of atrous convolutions with different dilation rates and global average pooling to extract context information at multiple scales in parallel. MANet embeds the channel attention mechanism to fuse semantic features. The high- and low-level semantic information are concatenated to generate global features via global average pooling. These global features are used as channel weights to acquire adaptive weight information for each channel via the fully connected layer. To accomplish an efficient fusion, these tuned weights are applied to the fused features. Performance of the proposed method has been evaluated by comparing it with six other state-of-the-art networks: fully convolutional networks (FCN), U-net, UZ1, Light-weight RefineNet, DeepLabv3+, and APPD. Experiments performed using the publicly available Potsdam and Vaihingen datasets show that the proposed MANet significantly outperforms the other existing networks, with overall accuracy reaching 89.4% and 88.2%, respectively, and average F1 reaching 90.4% and 86.7%, respectively.
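The channel-weighting scheme this abstract describes (concatenate features, global average pooling, a small fully connected bottleneck, sigmoid weights applied back to the fused map) follows the squeeze-and-excitation pattern and can be sketched as follows. This is a generic illustration with random toy weights, not MANet's actual code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention_fuse(low, high, w1, w2):
    """Fuse two C x H x W feature maps with SE-style channel weights.

    The concatenated map is squeezed by global average pooling, passed
    through a two-layer bottleneck (w1, w2) with ReLU and sigmoid, and
    the resulting per-channel weights rescale the fused map.
    """
    fused = np.concatenate([low, high], axis=0)            # (2C, H, W)
    squeeze = fused.mean(axis=(1, 2))                      # global average pooling -> (2C,)
    weights = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # bottleneck -> (2C,) in (0, 1)
    return fused * weights[:, None, None]                  # reweight each channel

rng = np.random.default_rng(1)
low = rng.standard_normal((3, 4, 4))    # low-level features (toy)
high = rng.standard_normal((3, 4, 4))   # high-level semantic features (toy)
w1 = rng.standard_normal((2, 6))        # squeeze: 6 channels -> 2
w2 = rng.standard_normal((6, 2))        # excite: 2 -> 6 channels
out = channel_attention_fuse(low, high, w1, w2)
print(out.shape)  # (6, 4, 4)
```

Because the weights are sigmoid outputs, each channel of the fused map is attenuated rather than amplified; the learned bottleneck decides which channels to keep.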

  • Regional perception and multi-scale feature fusion network for cardiac segmentation. Chenggang Lu + 5 more. Physics in Medicine & Biology, May 2, 2023. doi:10.1088/1361-6560/acc71f (cited by 6)

Objective. Cardiovascular disease (CVD) is a group of diseases affecting the heart and blood vessels, and short-axis cardiac magnetic resonance (CMR) images are considered the gold standard for the diagnosis and assessment of CVD. In CMR images, accurate segmentation of cardiac structures (e.g. the left ventricle) assists in the parametric quantification of cardiac function. However, the dynamic beating of the heart makes its location with respect to other tissues difficult to resolve, and the myocardium and its surrounding tissues are similar in grayscale, which makes it challenging to segment cardiac images accurately. Our goal is to develop a more accurate CMR image segmentation approach. Approach. In this study, we propose a regional perception and multi-scale feature fusion network (RMFNet) for CMR image segmentation. We design two regional perception modules: a window selection transformer (WST) module and a grid extraction transformer (GET) module. The WST module introduces a window selection block to adaptively select the window of interest to perceive information, and a windowed transformer block to enhance global information extraction within each feature window; it improves network performance by refining the window of interest. The GET module grids the feature maps to decrease redundant information and enhances the extraction of latent feature information. The RMFNet further introduces a novel multi-scale feature extraction module to improve the ability to retain detailed information. Main results. The RMFNet is validated with experiments on three cardiac data sets. The results show that the RMFNet outperforms other advanced methods in overall performance. The RMFNet is further validated for generalizability on a multi-organ data set, where it also surpasses the other comparison methods. Significance. Accurate medical image segmentation can reduce the stress of radiologists and play an important role in image-guided clinical procedures.

  • A Recognition Model Based on Multiscale Feature Fusion for Needle-Shaped Bidens L. Seeds. Zizhao Zhang + 9 more. Agronomy, Nov 14, 2024. doi:10.3390/agronomy14112675

To solve the problem that traditional seed recognition methods are not completely suitable for needle-shaped seeds, such as Bidens L., in agricultural production, this paper proposes a model construction idea that combines the advantages of deep residual models in extracting high-level abstract features with multiscale feature extraction fusion, taking into account the depth and width of the network. Based on this, a multiscale feature fusion deep residual network (MSFF-ResNet) is proposed, and image segmentation is performed before classification. The image segmentation is performed by a popular semantic segmentation method, U2Net, which accurately separates seeds from the background. The multiscale feature fusion network is a deep residual model based on a residual network of 34 layers (ResNet34), and it contains a multiscale feature fusion module and an attention mechanism. The multiscale feature fusion module is designed to extract features of different scales of needle-shaped seeds, while the attention mechanism is used to improve the ability to select features of our model so that the model can pay more attention to the key features. The results show that the average accuracy and average F1-score of the multiscale feature fusion deep residual network on the test set are 93.81% and 94.44%, respectively, and the numbers of floating-point operations per second (FLOPs) and parameters are 5.95 G and 6.15 M, respectively. Compared to other deep residual networks, the multiscale feature fusion deep residual network achieves the highest classification accuracy. Therefore, the network proposed in this paper can classify needle-shaped seeds efficiently and provide a reference for seed recognition in agriculture.

  • MFEFNet: Multi-scale feature enhancement and Fusion Network for polyp segmentation. Yang Xia + 2 more. Computers in Biology and Medicine, Mar 2, 2023. doi:10.1016/j.compbiomed.2023.106735 (cited by 14)

  • ECF-Net: Enhanced, Channel-Based, Multi-Scale Feature Fusion Network for COVID-19 Image Segmentation. Zhengjie Ji + 6 more. Electronics, Sep 3, 2024. doi:10.3390/electronics13173501

Accurate segmentation of COVID-19 lesion regions in lung CT images aids physicians in analyzing and diagnosing patients’ conditions. However, the varying morphology and blurred contours of these regions make this task complex and challenging. Existing methods utilizing Transformer architecture lack attention to local features, leading to the loss of detailed information in tiny lesion regions. To address these issues, we propose a multi-scale feature fusion network, ECF-Net, based on channel enhancement. Specifically, we leverage the learning capabilities of both CNN and Transformer architectures to design parallel channel extraction blocks in three different ways, effectively capturing diverse lesion features. Additionally, to minimize irrelevant information in the high-dimensional feature space and focus the network on useful and critical information, we develop adaptive feature generation blocks. Lastly, a bidirectional pyramid-structured feature fusion approach is introduced to integrate features at different levels, enhancing the diversity of feature representations and improving segmentation accuracy for lesions of various scales. The proposed method is tested on four COVID-19 datasets, demonstrating mIoU values of 84.36%, 87.15%, 83.73%, and 75.58%, respectively, outperforming several current state-of-the-art methods and exhibiting excellent segmentation performance. These findings provide robust technical support for medical image segmentation in clinical practice.

  • 3D Object Detection Based on Attention and Multi-Scale Feature Fusion. Minghui Liu + 4 more. Sensors, May 23, 2022. doi:10.3390/s22103935 (cited by 16)

Three-dimensional object detection in the point cloud can provide more accurate object data for autonomous driving. In this paper, we propose a method named MA-MFFC that uses an attention mechanism and a multi-scale feature fusion network with ConvNeXt module to improve the accuracy of object detection. The multi-attention (MA) module contains point-channel attention and voxel attention, which are used in voxelization and 3D backbone. By considering the point-wise and channel-wise, the attention mechanism enhances the information of key points in voxels, suppresses background point clouds in voxelization, and improves the robustness of the network. The voxel attention module is used in the 3D backbone to obtain more robust and discriminative voxel features. The MFFC module contains the multi-scale feature fusion network and the ConvNeXt module; the multi-scale feature fusion network can extract rich feature information and improve the detection accuracy, and the convolutional layer is replaced with the ConvNeXt module to enhance the feature extraction capability of the network. The experimental results show that the average accuracy is 64.60% for pedestrians and 80.92% for cyclists on the KITTI dataset, which is 1.33% and 2.1% higher, respectively, compared with the baseline network, enabling more accurate detection and localization of more difficult objects.

  • DSGMFFN: Deepest semantically guided multi-scale feature fusion network for automated lesion segmentation in ABUS images. Zhanyi Cheng + 5 more. Computer Methods and Programs in Biomedicine, May 14, 2022. doi:10.1016/j.cmpb.2022.106891 (cited by 15)

  • Attention-enhanced multiscale feature fusion network for pancreas and tumor segmentation. Kaiqi Dong + 8 more. Medical Physics, Sep 22, 2024. doi:10.1002/mp.17385 (cited by 4)

Accurate pancreas and pancreatic tumor segmentation from abdominal scans is crucial for diagnosing and treating pancreatic diseases. Automated and reliable segmentation algorithms are highly desirable in both clinical practice and research. Segmenting the pancreas and tumors is challenging due to their low contrast, irregular morphologies, and variable anatomical locations. Additionally, the substantial difference in size between the pancreas and small tumors makes this task difficult. This paper proposes an attention-enhanced multiscale feature fusion network (AMFF-Net) to address these issues via 3D attention and multiscale context fusion methods. First, to prevent missed segmentation of tumors, we design the residual depthwise attention modules (RDAMs) to extract global features by expanding receptive fields of shallow layers in the encoder. Second, hybrid transformer modules (HTMs) are proposed to model deep semantic features and suppress irrelevant regions while highlighting critical anatomical characteristics. Additionally, the multiscale feature fusion module (MFFM) fuses adjacent top and bottom scale semantic features to address the size imbalance issue. The proposed AMFF-Net was evaluated on the public MSD dataset, achieving 82.12% DSC for pancreas and 57.00% for tumors. It also demonstrated effective segmentation performance on the NIH and private datasets, outperforming previous State-Of-The-Art (SOTA) methods. Ablation studies verify the effectiveness of RDAMs, HTMs, and MFFM. We propose an effective deep learning network for pancreas and tumor segmentation from abdominal CT scans. The proposed modules can better leverage global dependencies and semantic information and achieve significantly higher accuracy than the previous SOTA methods.
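The DSC figures quoted above are Dice similarity coefficients. For two binary masks A and B the metric is 2|A∩B| / (|A| + |B|), which for segmentation can be computed directly from the masks; a standard-definition sketch (the small epsilon guarding empty masks is a common convention, not from this paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((8, 8), dtype=int)
target = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1      # 16 predicted pixels
target[3:7, 3:7] = 1    # 16 ground-truth pixels, 9 of them overlapping
print(round(dice_score(pred, target), 4))  # 2*9 / (16+16) = 0.5625
```

Dice weights the overlap against the mean mask size, which is why it is favored over plain accuracy for small structures like pancreatic tumors.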

  • Compressed multi-scale feature fusion network for single image super-resolution. Xinxia Fan + 4 more. Signal Processing, Dec 27, 2017. doi:10.1016/j.sigpro.2017.12.017 (cited by 27)

  • MFEFNet: A Multi-Scale Feature Information Extraction and Fusion Network for Multi-Scale Object Detection in UAV Aerial Images. Liming Zhou + 11 more. Drones, May 8, 2024. doi:10.3390/drones8050186 (cited by 12)

Unmanned aerial vehicles (UAVs) are now widely used in many fields. Due to the randomness of UAV flight height and shooting angle, UAV images usually have the following characteristics: many small objects, large changes in object scale, and complex background. Therefore, object detection in UAV aerial images is a very challenging task. To address the challenges posed by these characteristics, this paper proposes a novel UAV image object detection method based on global feature aggregation and context feature extraction named the multi-scale feature information extraction and fusion network (MFEFNet). Specifically, first of all, to extract the feature information of objects more effectively from complex backgrounds, we propose an efficient spatial information extraction (SIEM) module, which combines residual connection to build long-distance feature dependencies and effectively extracts the most useful feature information by building contextual feature relations around objects. Secondly, to improve the feature fusion efficiency and reduce the burden brought by redundant feature fusion networks, we propose a global aggregation progressive feature fusion network (GAFN). This network adopts a three-level adaptive feature fusion method, which can adaptively fuse multi-scale features according to the importance of different feature layers and reduce unnecessary intermediate redundant features by utilizing the adaptive feature fusion module (AFFM). Furthermore, we use the MPDIoU loss function as the bounding-box regression loss function, which not only enhances model robustness to noise but also simplifies the calculation process and improves the final detection efficiency. Finally, the proposed MFEFNet was tested on VisDrone and UAVDT datasets, and the mAP0.5 value increased by 2.7% and 2.2%, respectively.

  • Echo Depth Estimation via Attention-based Hierarchical Multi-scale Feature Fusion Network. Wenjie Zhang + 6 more. ACM Transactions on Multimedia Computing, Communications, and Applications, Aug 12, 2025. doi:10.1145/3736768

In environments where vision-based depth estimation systems, such as those utilizing infrared or imaging technologies, encounter limitations—particularly in low-light conditions—alternative approaches become essential. Echo depth estimation emerges as a compelling solution by leveraging the time delay of echoes to map the geometric structure of the surrounding environment. This method offers distinct advantages in specific scenarios, providing reliable data for accurate scene understanding and 3D reconstruction. Traditional echo depth estimation techniques primarily depend on spatial information captured by the encoder and depth predictions made by the decoder. However, these methods often fail to fully exploit the rich depth features present at different simultaneous frequencies. To address this challenge, we propose an echo depth estimation method via Attention-based Hierarchical Multi-scale Feature Fusion Network (AHMF-Net). This network is designed to extract spatial depth information from echo spectrograms across multiple scales and hierarchical levels, while fusing the most relevant information using an attention mechanism. AHMF-Net introduces two key modules in hierarchical levels: the Intra-layer Multi-scale Attention Feature Fusion (IMAF) module, which functions as the encoder to capture multi-scale features across varying granularities, and the Inter-layer Multi-Scale Detail Feature Fusion (IMDF) module, which integrates features from all encoding layers into the decoder to enable effective inter-layer multi-scale fusion. Additionally, the encoder incorporates an attention mechanism that enhances depth-related features by capturing channel dependencies at multiple scales. We evaluated AHMF-Net on the Replica, Matterport3D, and BatVision datasets, where it consistently outperformed state-of-the-art models in echo-based depth estimation, demonstrating superior accuracy and robustness. 
The source code is publicly available at https://github.com/wjzhang-ai/AHMF-Net .

  • Analysis of artistic features of ancient Chinese literary works based on multi-scale feature fusion network. Yanan Zhang. Applied Mathematics and Nonlinear Sciences, Sep 4, 2023. doi:10.2478/amns.2023.2.00287

Against the background of the rapid development of artificial intelligence technology, the use of artificial intelligence technology to improve the efficiency of artistic feature analysis of ancient Chinese literary works has become a hot topic of current research in literature. In this paper, we propose a multi-scale feature fusion network for the artistic feature analysis of ancient Chinese literary works to address the problems of a single structure and inflexible adaptation of features that appear in RPN networks and path aggregation networks. Then, features are extracted from ancient Chinese literary works, and several adaptive multi-scale feature extraction modules are used to squeeze the incentive and adaptive gating mechanisms for artistic feature extraction and fusion. Finally, the evaluation index system of artistic features of literary works is constructed, and the multi-scale feature fusion network weights the artistic features of ancient Chinese literature. The results show that the average weights of humanistic spiritual features, national patriotic features, literary emotional features and transformative artistic features in ancient Chinese literature and art features are 26.11%, 24.97%, 23.89% and 25.03%, respectively, and the average weight of humanistic spiritual features performs better compared with the other three. This study analyzes the artistic characteristics and cultural values of ancient literary works, which is an effective initiative for modern people to study ancient culture and inherit the national spirit and has important historical significance for developing Chinese literature.

More from: Journal of Building Engineering
  • Multi-scale reinforcing effect of steel-PVA fibers and carbon nanotubes on fully recycled aggregate concrete: Mechanical properties and microstructures. Junhui Zhang + 4 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114067
  • Experimental and theoretical investigation on axial compression of seawater sea sand rubberised concrete filled pultruded GFRP tubes. Chitransh Shrivastava + 4 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114480
  • Axial tensile performance of grouted connection in staggered stacked steel modular structures: Experimental, numerical and theoretical analysis. Xiaoguang Wang + 5 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114560
  • Dual-trigger microcapsules for autonomous crack healing in marine concrete: Synergistic response to mechanical stress and chloride ions. Zhenxing Du + 6 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114151
  • Ultra-thin FeBTC/PI/porous ceramic composites with a structural coupling strategy for noise absorption. Chao He + 5 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114196
  • Experimental study on the mesoscopic damage characteristics of limestone subjected to thermal shock. Yunsheng Dong + 5 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114385
  • Corrosion initiation and long-term corrosion behavior of weathering steel with rare earth (RE) addition in marine atmosphere. Yu-Zhou Wang + 3 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114528
  • Effect of activator-to-precursor ratio on the mechanical and durability performance of rice husk ash-based alkali-activated concrete composites using recycled aggregates. S Tejas + 1 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114332
  • Wind-snow coupling effect on single-layer cylindrical reticulated shell stability. Haiyan Yu + 4 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114174
  • Study on the influence and mechanism of magnesium ionic solution on the dissolution behavior of tricalcium silicate (C3S). Liguo Wang + 7 more. Nov 1, 2025. doi:10.1016/j.jobe.2025.114326
