Articles published on Terrain classification
583 Search results
Sorted by recency
- Research Article
- 10.1016/j.jterra.2025.101098
- Jan 1, 2026
- Journal of Terramechanics
- Tamiru Tesfaye Gemechu + 7 more
Real-time terrain classification for all-wheel drive robotic electric tractors using multi-sensor fusion and machine learning
- Research Article
- 10.18586/msufbd.1726149
- Dec 24, 2025
- Muş Alparslan Üniversitesi Fen Bilimleri Dergisi
- Ahmet Faruk Pala + 2 more
Accurate classification of geological structures on the Martian surface is of critical importance for advancing planetary science research and developing autonomous exploration systems. In this study, a deep learning–based approach is proposed to classify images of eight different Martian geological structures, namely Other, Slope Streak, Spider, Swiss Cheese, Bright Dune, Crater, Dark Dune, and Impact Ejecta. The Mars Terrain Classification dataset obtained from the Kaggle platform is utilized, and a transfer learning model built upon the EfficientNetB4 architecture is developed. To enhance the model performance, various data preprocessing and data augmentation techniques are applied. Furthermore, Grad-CAM (Gradient-weighted Class Activation Mapping)–based visualization methods are employed to improve the transparency and interpretability of the model’s decision-making process. Experimental results demonstrate that the proposed model achieves high classification accuracy and enables reliable identification of geological structures through explainability analyses. The findings indicate that deep learning models that are both data-efficient and interpretable can provide significant contributions to Martian surface classification, addressing an important gap in the existing literature.
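The Grad-CAM visualization mentioned above boils down to a gradient-weighted, ReLU-clipped sum of convolutional activation maps. A minimal pure-Python sketch of that core computation (toy nested-list tensors stand in for a real framework; the function name is ours, not from the paper):

```python
# Minimal Grad-CAM sketch: given per-channel activation maps A_k and the
# gradient of the class score w.r.t. each map, the heatmap is a
# ReLU-clipped, gradient-weighted sum over channels.

def grad_cam(activations, gradients):
    """activations: K x H x W nested lists; gradients: same shape.
    Returns an H x W heatmap."""
    k = len(activations)
    h = len(activations[0])
    w = len(activations[0][0])
    # Channel weights: global average pooling of the gradients.
    weights = [
        sum(g for row in grad for g in row) / (h * w)
        for grad in gradients
    ]
    # Gradient-weighted sum over channels, then ReLU.
    return [
        [max(0.0, sum(weights[c] * activations[c][i][j] for c in range(k)))
         for j in range(w)]
        for i in range(h)
    ]
```

In a real pipeline the activations and gradients come from the last convolutional block of the trained network (EfficientNetB4 here), and the heatmap is upsampled onto the input image.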
- Research Article
- 10.1080/2150704x.2025.2591636
- Nov 26, 2025
- Remote Sensing Letters
- Douglas Bazo De Castro + 4 more
ABSTRACT Evaporite mapping in hyperarid regions is crucial for understanding surface hydrology, salt crust dynamics, and environmental change. Terrain classification in these settings is hindered by spectral overlap among surface types and limited remote sensing coverage. To our knowledge, no previous workflow has combined Sentinel-1 radar data with Sentinel-2 indices for mapping evaporites in a Mars-analog environment. We developed a lightweight, rule-based fusion of horizontal transmit–horizontal receive (HH)-polarized Sentinel-1 backscatter with the Normalized Difference Water Index (NDWI) and Normalized Difference Snow Index (NDSI) from Sentinel-2, optimized for simplicity and low computational demand. Applied to a salt flat in northern Chile, moist salts accounted for ~67% and dry evaporites for ~5% of the area. The fused method achieved an overall agreement of more than 80% and F1-scores above 0.75, improving accuracy by 10–15% compared to spectral-only approaches. This sensor-independent framework supports efficient mapping of evaporitic surfaces in data-limited environments. It is directly transferable to planetary surface analysis, including analog studies and autonomous terrain triage during mission operations.
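The NDWI and NDSI named above are standard normalized-difference indices, and a rule-based fusion like the one described can be sketched in a few lines. The thresholds, band values, and class names below are illustrative placeholders, not the values used in the study:

```python
# Hedged sketch of a rule-based evaporite classifier fusing a radar
# backscatter value with two Sentinel-2 spectral indices.

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir)

def ndsi(green, swir):
    """Normalized Difference Snow Index: (G - SWIR) / (G + SWIR)."""
    return (green - swir) / (green + swir)

def classify_pixel(green, nir, swir, backscatter_db,
                   wet_thr=0.1, salt_thr=0.4, bs_thr=-12.0):
    """Toy decision rule: water-index-positive pixels are 'moist salt';
    SWIR-bright salts with low radar backscatter are 'dry evaporite'."""
    if ndwi(green, nir) > wet_thr:
        return "moist salt"
    if ndsi(green, swir) > salt_thr and backscatter_db < bs_thr:
        return "dry evaporite"
    return "other"
```

The appeal of such a rule set is its negligible computational cost, which is what makes the workflow attractive for data-limited or onboard settings.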
- Research Article
- 10.1002/rob.70097
- Nov 24, 2025
- Journal of Field Robotics
- Semih Beycimen + 2 more
ABSTRACT This paper presents advanced methodologies for real-time terrain analysis and mapping in autonomous robotic systems. The focus is on appearance-based terrain traversability analysis and geometric-based terrain traceability analysis. In the appearance-based approach, an enhanced segmentation model using pixel-based augmentation and 13 unique classes is proposed for reliable terrain classification. Semantic images are projected onto a 2.5D map by transforming two-dimensional image data into a three-dimensional coordinate system. The geometric-based approach involves depth estimation from stereo cameras, employing three Zed-2 cameras and the Depth Sensing application programming interface. The research contributes to improved perception and decision-making capabilities of autonomous robots operating in complex and dynamic environments and also provides a new comprehensive data set named CranfieldTerra. Experimental results validate the effectiveness of the proposed methodologies, demonstrating their potential in various applications, such as search and rescue, agriculture, and exploration. This study establishes a foundation for further advancements in autonomous robotics, enhancing their ability to navigate safely and efficiently in challenging terrains.
- Research Article
- 10.29227/im-2025-02-03-08
- Nov 5, 2025
- Inżynieria Mineralna
- Dragoş Andrei Gabriel + 6 more
Photogrammetry is a specialized technique essential for acquiring high-resolution 2D and 3D spatial data in a non-invasive manner. Its applications are particularly significant in terrain mapping and topographic analysis, where it serves as a complementary tool for geophysical and geological surveys. Additionally, photogrammetric data play a crucial role in monitoring landslides and erosion, particularly in valley-dominated landscapes, as exemplified in this study. Furthermore, photogrammetry is widely utilized in flood risk assessment and mitigation, as well as in infrastructure planning and land use management, contributing to more informed decision-making in environmental and engineering contexts. In this study, we conducted a survey over an 83,000-square-meter area situated in Borviz Valley (which serves as a tributary to the Olt River), within Bodoc locality, in the historical region of Transylvania, Romania. The aerial photogrammetric data acquisition was carried out using a DJI Phantom 4 Pro V2.0 UAV system, supplemented by a Trimble R2 GNSS system and a network of nine ground control points (GCPs) to enhance geospatial accuracy. The dataset, consisting of 436 images, was processed using specialized photogrammetric software, such as Agisoft Metashape, along with various topographic and GIS tools, including ESRI ArcGIS and Blue Marble Geographics Global Mapper. This processing workflow resulted in high-resolution 3D models and 2D maps, represented by a range of photogrammetric products, including Digital Elevation Models (DEMs), Digital Terrain Models (DTMs), classified point clouds, and orthophotos (orthomosaics). By generating classified elevation models that include only the terrain class while excluding vegetation, built structures, and other anthropogenic objects, we obtained a more detailed representation of the ground surface, allowing for a more accurate depiction of the valley's morphological characteristics.
Furthermore, the orthophoto (orthomosaic), produced by integrating all photographic images acquired during the photogrammetric survey, provides a highly precise geospatial reference. This dataset can serve as a valuable resource for future survey planning across various domains such as topography, civil engineering, general infrastructure, utility construction, etc.
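The terrain-only classification step described above (keeping ground points while dropping vegetation and structures) is commonly implemented as a local-minimum height filter over a grid; a simplified sketch, with cell size and tolerance chosen arbitrarily for illustration rather than taken from this workflow:

```python
# Hedged sketch of the ground/non-ground split behind a DTM: within each
# grid cell, points close to the cell's lowest elevation are kept as
# "terrain" and higher returns (vegetation, structures) are discarded.

def ground_filter(points, cell=1.0, dz=0.2):
    """points: list of (x, y, z) tuples. Returns the subset classified
    as ground: points within dz of their grid cell's minimum elevation."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    return [(x, y, z) for x, y, z in points
            if z - lowest[(int(x // cell), int(y // cell))] <= dz]
```

Production tools such as Agisoft Metashape use more sophisticated progressive densification, but the grid-minimum idea is the same starting point.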
- Research Article
- 10.1080/01431161.2025.2571234
- Oct 15, 2025
- International Journal of Remote Sensing
- A Rega + 1 more
ABSTRACT Accurate terrain classification using polarimetric synthetic aperture radar (PolSAR) imagery is important for several remote sensing applications. However, conventional models struggle to generalize across domains due to variations in sensor type, acquisition conditions, and geographic context. In this work, we propose a dual-stream vision transformer framework for unsupervised domain adaptation in PolSAR terrain classification. The architecture combines a SimPool-based global attention stream with a ResMLP local stream, enabling robust modelling of both global semantic context and local spectral–spatial structures. The two streams are fused via element-wise integration, and the model is trained using labelled source data without requiring target domain labels. The framework is evaluated on four benchmark PolSAR datasets across ten domain-adaptation settings comprising sensor, region, and combined shifts. The proposed model consistently outperforms state-of-the-art models, and ablation and per-class analyses further confirm its effectiveness and ability to generalize. We establish a new state of the art for domain-adaptive PolSAR terrain classification and demonstrate the advantages of combining global and local modelling streams within a unified transformer architecture.
- Research Article
- 10.3390/s25196203
- Oct 7, 2025
- Sensors (Basel, Switzerland)
- Gabrielle Thibault + 2 more
Background/Objective: Understanding the training effect in high-level running is important for performance optimization and injury prevention. This includes awareness of how different running surface types (e.g., hard versus soft) may modify biomechanics. Recent studies have demonstrated that deep learning algorithms, such as convolutional neural networks (CNNs), can accurately classify human activity collected via body-worn sensors. To date, no study has assessed optimal signal type, sensor location, and model architecture to classify running surfaces. This study aimed to determine which combination of signal type, sensor location, and CNN architecture would yield the highest accuracy in classifying grass and asphalt surfaces using inertial measurement unit (IMU) sensors. Methods: Running data were collected from forty participants (age 27.4 ± 7.8 years; 10.5 ± 7.3 years of running experience) with a full-body IMU system (head, sternum, pelvis, upper legs, lower legs, feet, and arms) on grass and asphalt outdoor surfaces. Performance (accuracy) for signal type (acceleration and angular velocity), sensor configuration (full body, lower body, pelvis, and feet), and CNN model architecture was tested for this specific task. Moreover, the effect of preprocessing steps (separating into running cycles and amplitude normalization) and two different data splitting protocols (leave-n-subject-out and subject-dependent split) was evaluated. Results: In general, acceleration signals improved classification results compared to angular velocity (by 3.8%). Moreover, the foot sensor configuration had the best performance-to-sensor-count ratio (95.5% accuracy). Finally, separating trials into gait cycles and not normalizing the raw signals improved accuracy by approximately 28%. Conclusion: This analysis sheds light on the important parameters to consider when developing machine learning classifiers in the human activity recognition field.
A surface classification tool could provide useful quantitative feedback to athletes and coaches in terms of running technique effort on varied terrain surfaces, improve training personalization, prevent injuries, and improve performance.
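The gait-cycle separation step that proved so beneficial above can be sketched as peak-based segmentation of a vertical-acceleration trace: each foot-impact peak starts a new cycle. The threshold below is illustrative, not a value from the study:

```python
# Illustrative sketch of "separate into gait cycles" preprocessing:
# split an acceleration trace at impact peaks (local maxima above a
# threshold) so each segment spans one running cycle.

def split_into_cycles(signal, threshold=2.0):
    """Return (start, end) index pairs between consecutive detected peaks."""
    peaks = [
        i for i in range(1, len(signal) - 1)
        if signal[i] >= threshold
        and signal[i] > signal[i - 1]
        and signal[i] >= signal[i + 1]
    ]
    return [(peaks[k], peaks[k + 1]) for k in range(len(peaks) - 1)]
```

Real IMU pipelines tune the threshold per sensor and often band-pass filter the signal first, but the segmentation logic is the same.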
- Research Article
- 10.21014/actaimeko.v14i3.2067
- Sep 26, 2025
- Acta IMEKO
- Sebastiano Chiodini + 6 more
This study investigates the impact of data augmentation techniques on the accuracy and prediction probability of deep learning-based terrain classification systems for Unmanned Ground Vehicles (UGVs) in unstructured environments. The challenge of limited datasets in such environments is addressed through the implementation and evaluation of various data augmentation methods, to enhance the accuracy and reliability of pixel-level terrain measurements. The methodology is based on the DeepLabv3+ neural network architecture for supervised learning, trained on a custom dataset collected from an outdoor environment. A systematic assessment of multiple augmentation strategies is conducted, including geometric transformations (cropping and mirroring), colour space modifications (HSV transformations), and noise injection (Gaussian noise addition). The performance of these techniques is quantified using standard metrics, such as classification accuracy and Intersection over Union (IoU), alongside an analysis of pixel-wise classification prediction probability. Results indicate that, while traditional metrics show modest improvements, the application of data augmentation significantly enhances the model's prediction probability in its measurements, particularly for critical terrain features, such as traversable paths. A detailed analysis of the prediction probability distribution is presented, showing a significant improvement in the model's confidence for correctly classified pixels. Specifically, when augmentation strategies are applied, the percentage of traversable terrain pixels classified with high confidence (> 99.7 % probability) significantly increased from 75 % to 85 %.
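Two of the augmentation families evaluated above, geometric mirroring and Gaussian noise injection, reduce to a few lines on a toy grayscale image (nested lists stand in for real image arrays; the noise level is illustrative):

```python
# Sketch of two augmentation families: horizontal mirroring and additive
# Gaussian noise, applied to a 2D grayscale image stored as nested lists.
import random

def mirror(image):
    """Horizontal flip (left-right mirroring)."""
    return [list(reversed(row)) for row in image]

def add_gaussian_noise(image, sigma=0.05, seed=0):
    """Add zero-mean Gaussian noise; seeded for reproducibility."""
    rng = random.Random(seed)
    return [[px + rng.gauss(0.0, sigma) for px in row] for row in image]
```

In segmentation pipelines the geometric transforms must be applied identically to the image and its label mask, whereas noise and colour-space changes touch only the image.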
- Research Article
- 10.3389/fcomp.2025.1597143
- Aug 13, 2025
- Frontiers in Computer Science
- Omar Coser + 6 more
Introduction: Wearable robotics for lower-limb assistance is increasingly investigated to enhance mobility in individuals with physical impairments and to augment performance in able-bodied users. A major challenge in this domain is the development of accurate and adaptive control systems that ensure seamless human-robot interaction across diverse terrains. While neural networks have recently shown promise in time-series analysis, no prior work has tackled the combined task of classifying ground conditions into five terrain classes and estimating high-level locomotion parameters such as ramp slope and stair height. Methods: This study presents an experimental comparison of eight deep neural network architectures for terrain classification and locomotion parameter estimation. The models are trained on the publicly available CAMARGO 2021 dataset using inertial (IMU) and electromyographic (EMG) signals. Particular attention is given to evaluating the performance of IMU-only inputs versus combined IMU+EMG data, with an emphasis on cost-efficiency and sensor minimization. The tested architectures include LSTM, CNN, and hybrid CNN-LSTM models, among others. Model explainability is assessed via SHAP analysis to guide sensor selection. Results: IMU-only configurations matched or outperformed those using both IMU and EMG, supporting a more efficient setup. The LSTM model, using only three IMU sensors, achieved high terrain classification accuracy (0.94 ± 0.04) and reliably estimated ramp slopes (1.95 ± 0.58°). The CNN-LSTM architecture demonstrated superior performance in stair height estimation, achieving an accuracy of 15.65 ± 7.40 mm. SHAP analysis confirmed that sensor reduction did not compromise model accuracy. Discussion: The results highlight the feasibility of using lightweight, IMU-only setups for real-time terrain classification and locomotion parameter estimation.
The proposed system achieves an inference time of ~2 ms, making it suitable for real-time wearable robotics applications. This study paves the way for more accessible and deployable solutions in assistive and augmentative lower-limb robotic systems. Code and models are publicly available at: [https://github.com/cosbidev/Human-Locomotion-Identification].
- Research Article
- 10.64559/jieeev2i1a2001
- Jul 31, 2025
- Insights in Electrical Electronics Engineering
- Erfan Sotoodeh Nia Korrani
Existing line-following robotic systems, such as Pololu boards, are constrained by static thresholding algorithms, limited adaptability to dynamic environments, and high costs. To address these limitations, this paper proposes a compact, low-cost printed circuit board (PCB) designed for robust line tracking in educational and industrial applications. The system integrates an array of QRE1113GR infrared (IR) sensors, an adaptive threshold-based signal processing algorithm, and an Arduino Nano microcontroller to achieve stability under variable lighting (100–500 lux) and uneven terrain. Key hardware innovations include a two-layer PCB layout with segregated analog and digital components to minimize noise, MOSFET-based motor drivers for efficient power distribution, and a voltage regulation circuit using an AMS1117 and decoupling capacitors. Experimental validation demonstrates 95% line-detection accuracy on white surfaces, a 50 ms sensor-to-motor response latency, and a 50% reduction in power consumption (120 mA at 5 V) compared to commercial alternatives. The design achieves a material cost of USD 18 and dimensions of 60.2 mm × 24.7 mm, enabling portability for small-scale robotics. The primary contributions of this work are: 1) a dynamic thresholding algorithm empirically optimized for environmental adaptability, 2) a modular, open-source hardware architecture, and 3) a comparative analysis quantifying performance improvement over a fixed-threshold system. Future research will focus on integrating machine learning for real-time terrain classification and expanding wireless communication capabilities.
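A dynamic thresholding scheme of the kind described can be sketched as a per-sensor calibration that re-centers the black/white cutoff between the extremes observed under current lighting. The sketch below is our illustration of the general idea, not the paper's algorithm, and the ADC values are made up:

```python
# Hedged sketch of adaptive thresholding for reflectance line sensors:
# each sensor's cutoff is the midpoint of the min and max readings seen
# during a calibration sweep, so it tracks ambient-light changes.

def calibrate(readings):
    """readings: raw ADC samples from one sensor during a sweep over
    both line and background. Returns the adaptive threshold."""
    return (min(readings) + max(readings)) / 2.0

def on_line(sample, threshold):
    """QRE1113-style sensors read higher over a dark (absorbing) line."""
    return sample > threshold
```

On a microcontroller this would run once at startup (or continuously with a decaying min/max window), with one threshold stored per sensor in the array.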
- Research Article
- 10.1177/09544070251349649
- Jul 25, 2025
- Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering
- Guoyu Lin + 3 more
Terrain classification is essential for accurately identifying the terrain and providing valuable information for the control, planning, and navigation algorithms of wheeled vehicles. A novel terrain classification algorithm based on wheel-terrain interaction is proposed for wheeled vehicles in this paper. Unlike conventional terrain classification methods that rely on chassis acceleration signals, chassis gyroscope signals, motor current signals, images, or 3D points, the proposed approach utilizes wheel force to determine the type of terrain. Three classifiers, one-dimensional Convolutional Neural Networks (1D-CNN), Long Short-Term Memory networks (LSTM), and Support Vector Machines (SVM), are employed to classify the terrain types. A measurement database was established using a vehicle test system equipped with a wheel force sensor. From this database, various datasets were constructed based on different wheel forces, processing window sizes, and overlap times. Comparative tests were conducted across these datasets. The results indicate that the wheel forces Fx and Fz, along with the torque My, are more effective for terrain classification purposes. Furthermore, the 1D-CNN demonstrated superior performance compared to the LSTM and SVM on most datasets. Additionally, the experiments revealed that larger processing window sizes and overlap times tended to enhance classification accuracy, although careful consideration must be given to this trade-off in practice.
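The dataset construction above varies processing window sizes and overlap times; a typical implementation is a sliding window over the force signal, sketched here in sample counts rather than seconds (the parameter names are ours):

```python
# Sliding-window segmentation of a 1D sensor signal: consecutive windows
# share `overlap` samples, so the window advances by (window - overlap).

def sliding_windows(signal, window, overlap):
    """Return fixed-length windows covering the signal."""
    step = window - overlap
    assert step > 0, "overlap must be smaller than the window"
    return [
        signal[start:start + window]
        for start in range(0, len(signal) - window + 1, step)
    ]
```

Larger windows give each classifier more context per example (consistent with the accuracy trend reported), while larger overlaps multiply the number of training examples at the cost of correlated samples and added latency.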
- Research Article
- 10.3390/rs17142477
- Jul 17, 2025
- Remote Sensing
- Antoni Jaszcz + 1 more
Analysis of surfaces, terrain, or even the atmosphere from images or image fragments is important because it enables further processing; satellite and drone images deserve particular attention. Classifying image elements into given classes is important for obtaining information about space for autonomous systems, identifying landscape elements, or monitoring and maintaining infrastructure and the environment. Hence, in this paper, we propose a neural classifier architecture that analyzes different features by processing information in parallel within the network and combines them with a feature fusion mechanism. The neural architecture model takes into account different types of features by extracting them with a focus on spatial and local patterns and multi-scale representation. In addition, the classifier is guided by attention mechanisms focusing on channel and spatial information, as well as a feature pyramid mechanism. Atrous convolutional operators were also used in the architecture as better context feature extractors. The proposed classifier architecture is the main element of the modeled framework for satellite data analysis, which can be trained according to the client's needs. The proposed methodology was evaluated on three publicly available classification datasets for remote sensing: satellite images, Visual Terrain Recognition, and USTC SmokeRS, where the proposed model achieved accuracy scores of 97.8%, 100.0%, and 92.4%, respectively. The obtained results indicate the effectiveness of the proposed attention mechanisms across different remote sensing challenges.
- Research Article
- 10.3390/act14070342
- Jul 9, 2025
- Actuators
- Sk Hasan + 1 more
This review provides a comprehensive analysis of recent advancements in lower limb exoskeleton systems, focusing on applications, control strategies, hardware architecture, sensing modalities, human-robot interaction, evaluation methods, and technical innovations. The study spans systems developed for gait rehabilitation, mobility assistance, terrain adaptation, pediatric use, and industrial support. Applications range from sit-to-stand transitions and post-stroke therapy to balance support and real-world navigation. Control approaches vary from traditional impedance and fuzzy logic models to advanced data-driven frameworks, including reinforcement learning, recurrent neural networks, and digital twin-based optimization. These controllers support personalized and adaptive interaction, enabling real-time intent recognition, torque modulation, and gait phase synchronization across different users and tasks. Hardware platforms include powered multi-degree-of-freedom exoskeletons, passive assistive devices, compliant joint systems, and pediatric-specific configurations. Innovations in actuator design, modular architecture, and lightweight materials support increased usability and energy efficiency. Sensor systems integrate EMG, EEG, IMU, vision, and force feedback, supporting multimodal perception for motion prediction, terrain classification, and user monitoring. Human–robot interaction strategies emphasize safe, intuitive, and cooperative engagement. Controllers are increasingly user-specific, leveraging biosignals and gait metrics to tailor assistance. Evaluation methodologies include simulation, phantom testing, and human–subject trials across clinical and real-world environments, with performance measured through joint tracking accuracy, stability indices, and functional mobility scores. 
Overall, the review highlights the field’s evolution toward intelligent, adaptable, and user-centered systems, offering promising solutions for rehabilitation, mobility enhancement, and assistive autonomy in diverse populations. Following a detailed review of current developments, strategic recommendations are made to enhance and evolve existing exoskeleton technologies.
- Research Article
- 10.3390/ai6070145
- Jul 3, 2025
- AI
- Martina Formichini + 1 more
Background: Deep convolutional neural networks (CNNs) have become widely popular for many imaging applications, and they have also been applied in various studies for monitoring and mapping areas of land. Nevertheless, most of these networks were designed to perform in different scenarios, such as autonomous driving and medical imaging. Methods: In this work, we focused on the usage of existing semantic networks applied to terrain segmentation. Even though several existing networks have been used to study land segmentation using transfer learning methodologies, a comparative analysis of how the underlying network architectures perform has not yet been conducted. Since this scenario is different from the one in which these networks were developed, featuring irregular shapes and an absence of models, not all of them can be correctly transferred to this domain. Results: Fifteen state-of-the-art neural networks were compared, and we found that, in addition to slight differences in performance, there were relevant differences in the numbers and types of outliers that were worth highlighting. Our results show that the best-performing models achieved a pixel-level class accuracy of 99.06%, with an F1-score of 72.94%, 71.5% Jaccard loss, and 88.43% recall. When investigating the outliers, we found that PSPNet, FCN, and ICNet were the most effective models. Conclusions: While most of this work was performed on an existing terrain dataset collected using aerial imagery, this approach remains valid for investigation of other datasets with more classes or richer geographical extensions. For example, a dataset composed of Copernicus images opens up new opportunities for large-scale terrain analysis.
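For reference, the pixel-level metrics reported above (accuracy, F1-score, Jaccard/IoU, recall) all derive from the same confusion counts; a small sketch making those relationships explicit for a single class:

```python
# Pixel-level classification metrics from raw confusion counts
# (tp = true positives, fp = false positives, fn = false negatives,
#  tn = true negatives) for one class.

def pixel_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    jaccard = tp / (tp + fp + fn)            # a.k.a. IoU
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"accuracy": accuracy, "f1": f1,
            "jaccard": jaccard, "recall": recall}
```

The gap between the 99.06% pixel accuracy and the much lower F1-score in the results above is typical of class-imbalanced terrain data: a large `tn` count inflates accuracy while F1 and Jaccard ignore it.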
- Research Article
- 10.3390/electronics14132681
- Jul 2, 2025
- Electronics
- Sanket Lokhande + 5 more
Quadruped robots have shown significant potential in disaster relief applications, where they have to navigate complex terrains for search and rescue or reconnaissance operations. However, their deployment is hindered by limited adaptability in highly uncertain environments, especially when relying solely on vision-based sensors like cameras or LiDAR, which are susceptible to occlusions, poor lighting, and environmental interference. To address these limitations, this paper proposes a novel sensor-enhanced hierarchical switching model predictive control (MPC) framework that integrates proprioceptive sensing with a bi-level hybrid dynamic model. Unlike existing methods that either rely on handcrafted controllers or deep learning-based control pipelines, our approach introduces three core innovations: (1) a situation-aware, bi-level hybrid dynamic modeling strategy that hierarchically combines single-body rigid dynamics with distributed multi-body dynamics for modeling agility and scalability; (2) a three-layer hybrid control framework, including a terrain-aware switching MPC layer, a distributed torque controller, and a fast PD control loop for enhanced robustness during contact transitions; and (3) a multi-IMU-based proprioceptive feedback mechanism for terrain classification and adaptive gait control under sensor-occluded or GPS-denied environments. Together, these components form a unified and computationally efficient control scheme that addresses practical challenges such as limited onboard processing, unstructured terrain, and environmental uncertainty. A series of experimental results demonstrate that the proposed method outperforms existing vision- and learning-based controllers in terms of stability, adaptability, and control efficiency during high-speed locomotion over irregular terrain.
- Research Article
- 10.1088/1742-6596/3055/1/012002
- Jul 1, 2025
- Journal of Physics: Conference Series
- Ailun Tang + 4 more
Abstract Unmanned aerial vehicle (UAV) landing area identification is a critical research topic in the UAV domain. Traditionally, UAV autonomous landing depends on recognizing cooperative target images on ground platforms, but identifying landable terrains without such targets is still a difficult task. This paper proposes a UAV landing area identification algorithm that combines Convolutional Neural Network (CNN) and binocular stereo matching. Firstly, the object-contextual representation (OCR) feature extraction module and HRNet perform terrain classification to obtain multi-scale contextual information and enhance pixel-semantic correlations. Then, based on binocular images and the Semi-global block matching (SGBM) algorithm, an improved Auto Semi-global block matching (ASGBM) algorithm is developed to evaluate the flatness of the landing area. The proposed terrain classification network achieves an average accuracy of 86.53% and a single-image prediction time of about 118 ms on the DLRSD dataset, meeting the real-time requirements. For challenging water area images tested on a self-built dataset, the classification accuracy reaches 99.46%. Moreover, the ASGBM algorithm and the overall landing area identification algorithm have a depth estimation error of less than 0.55% within 22 meters and a single-image processing time of approximately 326 ms when validated on the self-built dataset. The average site selection accuracy of the latter is 92.50%. Experimental results demonstrate that the proposed method can accurately select the optimal UAV landing areas in vertical zones.
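The flatness evaluation above scores candidate landing patches from stereo depth. A minimal proxy for that idea, using the standard deviation of the depth samples in a patch with an illustrative tolerance (this is a sketch of the concept, not the ASGBM algorithm itself):

```python
# Toy landing-site flatness check: a patch is "flat enough" when the
# spread of its depth samples stays under a tolerance.

def flatness(depths):
    """Population standard deviation of a list of depth samples (meters)."""
    n = len(depths)
    mean = sum(depths) / n
    return (sum((d - mean) ** 2 for d in depths) / n) ** 0.5

def is_landable(depths, tol=0.05):
    """True when the patch's depth spread is below the tolerance."""
    return flatness(depths) < tol
```

A fielded system would first fit and subtract a plane (so gentle slopes are not penalized as roughness) and combine the geometric score with the semantic terrain class, as the pipeline above does.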
- Research Article
- 10.1007/s00521-025-11314-2
- Jun 2, 2025
- Neural Computing and Applications
- Anurina Tarafdar + 4 more
A CNN-based framework for land use land cover classification of heterogeneous terrain using satellite images
- Research Article
- 10.1109/jbhi.2025.3536030
- Jun 1, 2025
- IEEE journal of biomedical and health informatics
- Chih-Lung Lin + 4 more
As the elderly population grows, falling accidents become more frequent, and the need for fall-risk monitoring systems increases. Deep learning models for fall-risk movement detection neglect the connections between the terrain and fall-hazard movements. This issue can result in false alarms, particularly when a person encounters changing terrain. This work introduces a novel multisensor system that integrates terrain perception sensors with an inertial measurement unit (IMU) to monitor fall risk on diverse terrains. Additionally, a dual-task learning (DTL) architecture based on a modified CNN-LSTM model is implemented; it is used to determine the fall-risk level and the terrain from sensor signals. Three fall-risk levels ("normal," "near-fall," and "fall") are identified in association with "flat ground," "stepping up," and "stepping down" terrains. Ten young subjects performed 16 activities on flat and stepping terrains in a laboratory setting, and ten elderly individuals were recruited to perform four activities in the hospital. The accuracies of classification of fall-risk levels and terrains by the proposed system are 97.6% and 95.2%, respectively. The system detects pre-impact fall movements, with a fall prediction accuracy of 97.7% and an average lead time of 329 ms for fall trials, revealing the model's effectiveness. The overall monitoring accuracy for elderly individuals is 99.8%, confirming the robustness of the proposed system. This work discusses the impact of sensor type and the DTL model architecture on the classification of fall-risk levels across various terrains. The results demonstrate that the proposed method is reliable for monitoring the risk of falling.
- Research Article
- 10.1016/j.eswa.2025.127495
- Jun 1, 2025
- Expert Systems with Applications
- Yaxin Li + 3 more
Terrain classification method based on fusion of vision and vehicle dynamics for UGV
- Research Article
- 10.71086/iajse/v12i1/iajse1208
- May 31, 2025
- International Academic Journal of Science and Engineering
- K.T Moh + 1 more
The ability of miniature robots to move efficiently across soft or deformable terrain remains a challenge, particularly when facing limits on power and control. To solve this problem, a robotic gait algorithm was developed to modify gait parameters in response to real-time feedback from the environment. The system combines a hierarchical command system with machine-learning terrain classification to move with precision and minimal power on unstable surfaces like sand, soil, and gravel. A visual perception component uses bag-of-words (BoW) representations and an SVM to classify the terrain before contact, enabling prior strategy formulation and improving control of gait adaptation. Stride length, joint torque, and foot movement through inverse kinematics are implemented alongside terrain cost mapping to counteract slippage. System performance evaluation through simulations and real-world field experiments validates the algorithm's ability to enhance locomotion versatility and precision. This solution enables the enhanced miniature robot mobility that is critical in disaster response, planetary exploration, and environmental surveillance.
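The foot movement through inverse kinematics mentioned above is closed-form for a planar two-link leg; a sketch using the law of cosines, with link lengths and the knee-bend convention as assumptions rather than details from the paper:

```python
# Planar two-link inverse kinematics for a leg: given a foot target (x, y)
# in the hip frame and link lengths l1 (thigh) and l2 (shank), solve the
# hip and knee angles via the law of cosines.
import math

def two_link_ik(x, y, l1, l2):
    """Return (hip, knee) angles in radians reaching foot point (x, y)."""
    d2 = x * x + y * y
    assert (l1 - l2) ** 2 <= d2 <= (l1 + l2) ** 2, "target out of reach"
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    knee = math.acos(cos_knee)                      # one bend direction
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee
```

A gait layer then sweeps the foot target along a stride trajectory whose length and height are the parameters the terrain classifier adapts.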